This project was made with a group of professional dancers from the London Contemporary Dance School (LCDS). The brief was to create a project in any form of creative media, which gave us very broad options to choose from and left room to vary with the creators' wishes; creative media includes virtual reality, live concerts, and 3D or 2D performance, and any form able to express human body movement can be challenging for us. In this case, our groupmates from LCDS were more interested in virtual reality and impressive 2D animation, so we finally chose to make the connection between them.
The project is named "The Other Realms", referring to a multiverse concept that we built to connect virtual reality with 2D animation. The VR environment is the main stage where the dance performance takes place. It is a fully immersive 3D space where the audience can feel as if they are part of the performance. The dancers interact with the environment and the audience in "real space", creating a dynamic and engaging experience. The 2D animation is a complementary element to the VR environment: a flat, two-dimensional representation of the performance, but with its own unique look and feel. The animation can be projected onto a screen or displayed on a monitor, allowing the audience to see a different perspective of the performance. The multiverse concept connects these two elements by creating parallel versions of the same performance. In one universe, the performance takes place entirely in the VR environment. In another, it is represented through 2D animation. These universes coexist and interact with each other, creating a dynamic and engaging experience for the audience. The dancers and the animation can interact in different ways: the dancers can move from the VR environment into the 2D animation and vice versa, interact with objects and characters in the animation, or even change the environment itself. This interaction creates a sense of continuity between the two universes and enhances the overall experience for the audience.
Motion & Choreography
The whole storyline is composed of a 2D line-and-circle animation, a VR water scene with a clear sky, and a cyberpunk-themed scene built around a neon-light concept. The purpose of the piece "The Other Realms" is to express a parallel universe through contemporary dance and visual arts in the form of 2D and 3D animation, creating a multiverse-travelling experience. The characters are represented by two- and three-dimensional models of female and asexual bodies, which sit closer to the theme of surrealism.
We chose a pure, non-lyrical vaporwave track, which gives more variety in choreography; its light rhythm also offers many possibilities for visualisation. Vaporwave music, characterized by its dreamy and nostalgic soundscapes, can work well with contemporary dance choreography by providing a unique and atmospheric backdrop for the movements of the dancers. The ethereal and otherworldly quality of vaporwave music can lend itself well to more abstract and interpretive styles of contemporary dance, allowing the dancers to explore different moods and emotions through their movements. The smooth and flowing nature of the music can also provide a sense of continuity and flow to the performance, creating a sense of unity between the movements of the dancers and the soundscape they inhabit. Overall, vaporwave music can be a powerful tool for enhancing the visual and emotional impact of contemporary dance choreography and can allow for a more immersive and engaging experience for the audience.
Choreographies & concepts:
01. Geometric – Technical and angular movement, accentuating the lines of the body.
02. Water – Smooth and fluid movement. Exploration of body and water.
03. Cyberpunk – Personification of neon lights. Embodiment of coloured light.
Creative Process
Creating a VR project with contemporary dance involves a variety of processes, including modelling a character, rigging & animation, motion capture, environment building & light rendering, interactive water shader & coding, and model shading with shader graph. The first step is to create a highly detailed 3D model of a human character that will be used in the project. This model should include features such as facial expressions, body movements, and clothing. Once the character model is complete, it needs to be rigged with a skeletal structure that will allow for realistic movements. Animations, such as dance movements and gestures, can then be created for the character using motion capture technology to make them as realistic as possible.
Model & Rigging
In order to show the dynamism of the dance and preserve the integrity of the movements, I chose a model with female body features but without individual body markings, so it could carry all the animated copies.
The model has perfect proportions, and the muscles of the legs and abdomen, including the arms and back, were drawn out by the original modeller to show the smoothness of the model's lines and to create a visual pull that keeps attention on the movements.
This ties in better with the special shader graph I had planned, which gives the model a flowing galaxy surface, and it opened up more creative ideas. Carrying complex, gorgeous graphics within a simple, fluid silhouette is in itself a balanced piece of visual art and allows for more exciting frames.
The final method I used to apply the skeleton to my model was Mixamo's AI recognition and auto-rigging function. The automatic attachment was convenient, as I required a highly adjustable model where each joint has its own stretching strength according to the movement.
The reason I didn't use Maya's Bind Skin system to rig is that it requires a huge amount of rigging and weight-mapping work; that would be too much, and an inefficient way to work given that we had access to advanced motion capture technology. The Rokoko motion capture suit is designed to work in sync for full-body mocap recordings using the Smartsuit Pro, Smartgloves, and Face Capture. The suit captures body, finger, and facial animation in a single performance, where every dynamic feature of the movement is detected directly without further work, so I think the Mixamo auto-rigging system is the best match for the Rokoko mocap suit in terms of efficiency and productivity.
Motion capture
There are two methods we could choose for the motion capture work, both based on Rokoko software and/or hardware. Rokoko is a company in the motion capture and animation field that has already succeeded in technology development, and it offers free motion capture tools for users to create their work.
1.Rokoko AI (video based)
Rokoko has launched Rokoko Video, a free AI-trained browser-based tool that extracts motion data from video footage and retargets it to 3D characters. The animation can be cleaned in Rokoko's Studio software and exported in FBX and BVH format, for use in 3D apps like Blender or Maya, or game engines like Unity and Unreal Engine. The Studio software can also be used for free, although Starter accounts don't include active customer support or Rokoko's integration plugins for streaming data to DCC software.
Cooperating with contemporary dance students to do motion capture for their choreographies can be a fascinating experience. To begin with, it is important to understand the choreography, including its theme, mood, and the intended emotional impact on the audience. This can be achieved by attending rehearsals and discussing the choreography with the dancers and the choreographer.
Once the choreography has been understood, the next step is to prepare for the motion capture session. This involves setting up motion capture equipment, which typically includes specialized sensors that can capture the movements of the dancers in real-time. The dancers will then need to wear these sensors during the motion capture session.
During the motion capture session, it is important to create a safe and supportive environment for the dancers. This can involve providing them with clear instructions, guiding them through the choreography, and ensuring that they are comfortable with the equipment and the setup. The motion capture session can be recorded from multiple angles to ensure that all the movements are captured accurately.
The results of the AI video recognition technique were not as complete as expected: the 3D position of the characters and the details of the arm and leg movements were not well recognised. We used a plain white background as much as possible in this scene and brought the video quality up to the required level, but it was clear that the AI could not recognise the overly detailed dance movements. Worse, the choreography for the water scene I was responsible for was based on the interaction between the human body and a body of water, where the dancers' joint movements were incredibly complex and difficult to recognise.
For these reasons, the final animation rendered by Rokoko AI showed the character's arms frequently displaced or moving behind the character itself, and the movement of the body joints was unnatural and stiff. After communicating with the dancers and giving them advice, we all agreed to meet again and give the motion capture suit a try, since their work was not represented well and plenty of animation and rigging work would be required to perfect it. As the VR student responsible for this part of the project I accepted that, but honestly it would have been a waste of time if a more efficient method was available.
2. Mocap studio (mocap suit)
The Smartsuit Pro is an inertial wireless, all-in-one motion capture solution that is intuitive to use and affordable for anyone. Faster to set up than any other system on the market, robust and durable to withstand close contact and active use, and a high-touch support team is always available to ensure the best user experience. With the Smartsuit Pro, character animation is definitively democratized.
Rokoko Smartgloves can capture the full spectrum of an actor’s hand performance, giving VFX, VR, game and digital artists a faster way to create character animations.
Every detail of the dance movement was captured far more straightforwardly this time with the motion capture suits on. Instead of Rokoko AI, making a skeleton connection under the Rokoko system made the work more accessible: the skeleton is automatically bound on top of the suit where the movements happen, and the capture points placed on each part of the suit worked perfectly once the space, size, and humanoid prefab were settled.
The advantage of using this technology was obviously efficiency: it didn't take our whole group long to finish all three motion capture sessions, compared with the three-hour preparation and filming process needed for Rokoko AI video capture. Most importantly, our dance school groupmates were more motivated this time with a sci-fi device on; it fulfilled their hope for the collaboration with us, which was to have a remarkable experience with technology.
I didn't even take a picture of us working at the motion capture studio; we finished that quickly.
Motion capture results
We finally came away with wonderful skeleton animation results under the guidance of the professional mocap tutors, and we learned different working practices across several toolkits produced by big companies. For example, the skeleton export type can be changed inside Rokoko Studio to match different modelling software such as Maya, Blender, or Cinema 4D; the format chosen at this very first step can influence the whole project, so it has to match the pipeline's constraints.
After exporting from Rokoko Studio, we get an FBX file corresponding to the file type selected earlier, with all the animation attached to the HumanIK system for Maya and a preset character body position of "T-pose", which makes our work much easier, as we only need to bind the skeleton to the model mesh to set it up. We can of course adjust the length of every single movement within the Rokoko Studio editor (YouTube video shown on the right-hand side), where the displacement and positioning changes are all recorded on different skeleton animation layers, so we can delete any parts that are redundant clutter with regard to the completed visual performance. However, we were not entitled to use this function as students borrowing the mocap studio for just a few hours, so we were only allowed to adjust the clips ourselves in modelling software; that is still powerful enough for our current level, since we are not doing professional, detail-seeking capture at this point. The first processing result is shown in the video on the left-hand side: the animation was quite natural compared to the previous ones, but I still needed to manually retarget the important body parts against the reference video provided by the dance school students, especially the hands and knees for the choreography in my scene, which interacts with the water surface.
During this process we didn't meet many issues, as we had prepared before actually using the mocap studio, so problems such as communication, role distribution, or managing capture turns while working simultaneously with the other groups were not too disruptive. The dance school students all kept to their roles, so we finished in a very organized way with all 20+ members working at the same time. The only thing worth noting was the file saved after every single shoot: since we were not the only group using the studio at the time, we needed to name our projects carefully in order to get the right files back.
Rigging & Animating
When I got to the step of binding the model and animating it, I found that things were far from as simple as I had thought. By the time I received the hard drive with the motion capture results, I had already decided on the style of model I wanted to use and had downloaded it, waiting to be combined with the motion animation. The first approach that came to mind was to manually add static bones to the model using the HumanIK tool that comes with Maya, bind the bones to the model's body parts with the Bind Skin tool after aligning them with each other, and then select the animation preset in the HumanIK toolbar. However, after actually trying this, I found that the model I had chosen did not match the motion capture skeleton's initial setup (I had picked a model in a static A-pose, while the motion capture default was a T-pose), and I struggled with this dilemma for days.
This fatal problem was never solved directly, so I moved on to Unity material selection and rendering knowledge in advance, and, after being inspired by my motion capture teacher, I found the following YouTube instructional video. The generation method in this video works directly through a motion capture and skeleton creation site called Mixamo, where I can upload any model I want in any pose and ask the site to automatically bind a skeletal system to my model with its own AI algorithm. There is a downside to this method: the AI algorithm will not work when the bound object is not humanoid. But it ultimately saved me a lot of tedious work and time. After getting the AI algorithm to build the human skeleton for my model, all I had to do was download it and match the animation up.
Having learnt from the previous experience, I found that the above-mentioned YouTube tutorial followed up with a detailed animation binding method, so I decided to abandon the original method (which was to rationalise the movement by adjusting the bone position and rotation parameters to solve the mesh-clipping problem) and instead added a control rig to the bones: a simplified version of the skeleton system that the maker can pose directly, where the bones connected to the joints are adjusted by the algorithm to follow the joints' movements, not too much and not too little. This made my job much easier.
Control Rig
After the character is rigged and animated, it is time to build the virtual environment where the performance will take place. This involves creating a 3D set with various props and setting up lighting to create the desired mood and atmosphere. An interactive water feature can also be added to the environment by using a water shader and coding it to respond to the movements of the dancers. This can create a dynamic and engaging visual element to the performance.
Building Environment
Finally, a shader graph can be used to create custom shading for the character model. This involves creating custom textures and materials, as well as adjusting the lighting and other visual effects to create the desired look. By following these processes, a VR project with contemporary dance can be created that is immersive, engaging, and visually stunning. The combination of realistic character animations, detailed environments, and interactive visual elements can create a truly unique and memorable experience for the audience.
Unity URP pipeline shader graphs
Shader Graph is a tool that enables you to build shaders visually. Instead of writing code, you create and connect nodes in a graph framework. Shader Graph gives instant feedback that reflects your changes, and it's simple enough for users who are new to shader creation. Shader Graph is available through the Package Manager window in supported versions of the Unity Editor. If you install a Scriptable Render Pipeline (SRP) such as the Universal Render Pipeline (URP) or the High Definition Render Pipeline (HDRP), Unity automatically installs Shader Graph in your project.
Shader Graph is only compatible with the Scriptable Render Pipelines (SRPs), namely the Universal Render Pipeline (URP, formerly the Lightweight Render Pipeline) and the High Definition Render Pipeline (HDRP), available in Unity 2018.1 and later. The legacy built-in render pipeline does not support Shader Graph.
Water Shader
The shader graphs I'm going to use are a Unity URP real-time interactive water shader and a moving, colour-changing galaxy shader graph. The real-time water shader allows the player with a VR headset to view the water and to move their head and controllers to interact with the built environment. This will be the highlight of the part of the project I'm responsible for: to stun the viewer the moment they first step into the scene filled with water.
The interactive water consists of four main effectors in the URP pipeline that make the little ball interact with the water: two particle systems, one water displacement module, and one physical ball carrying Rigidbody and Sphere Collider components that can be influenced by Unity's gravity engine. Other elements, such as a reflection probe and a global volume with directional lights, are needed to make the water surface movement visible.
This is a preview of the water surface interacting with a ball falling through it under Unity gravity with damping. The water surface simulator, a plane, has a natural floating property that pushes the ball upwards once it drops to the point of maximum buoyancy inside the water plane; waves are generated according to the ball's speed and size, and the code applied to the water surface then calculates the wave bounce with the distance it travels behind the ball's centre. The other important part is the particle systems reacting to the ball's entry into and exit from the surface plane: one shoots upwards and spreads twice per movement to simulate the water splash, and one follows the ball's anti-gravity movement to create a water drop effect. All these properties can be changed with the size of the ball to keep the gameplay space reasonable.
The Sphere Collider and Rigidbody applied to the little ball simulate its physical shape and mass, so that the script on the water surface can interact with it.
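To make the buoyancy behaviour concrete, here is a minimal sketch of the idea, not the asset's actual code: it assumes a flat water level and made-up force constants, while the real asset computes wave heights on the GPU.

using UnityEngine;

// Minimal buoyancy sketch: push a Rigidbody ball upwards when it sinks
// below an assumed flat water level. waterLevel, buoyancyStrength and
// waterDrag are illustrative values, not the asset's real parameters.
public class SimpleBuoyancy : MonoBehaviour
{
    public float waterLevel = 0f;        // assumed height of the water plane
    public float buoyancyStrength = 15f; // upward force per metre of depth
    public float waterDrag = 2f;         // damping while submerged

    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        float depth = waterLevel - transform.position.y;
        if (depth > 0f) // submerged: force grows with how deep the ball sits
        {
            rb.AddForce(Vector3.up * buoyancyStrength * depth, ForceMode.Acceleration);
            rb.AddForce(-rb.velocity * waterDrag, ForceMode.Acceleration);
        }
    }
}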
Galaxy Shader
Pack of space and nebula materials that can be used on any mesh. Multiple textures and gradients are included in this package to help you customize the effects. Materials have many parameters, you can tweak rim, distortions, gradient texture, scroll speed of background nebulas, and much more. Also supported on mobiles.
The Built-in Render Pipeline is Unity’s default render pipeline. It is a general-purpose render pipeline that has limited options for customization. The Universal Render Pipeline (URP) is a Scriptable Render Pipeline that is quick and easy to customize and lets you create optimized graphics across a wide range of platforms. The High Definition Render Pipeline (HDRP) is a Scriptable Render Pipeline that lets you create cutting-edge, high-fidelity graphics on high-end platforms.
In this case, with the shader graph package provided, I only need to adjust the base colour a little to fit my skybox colour set-up. To do that, I need to read the shader graph inside the package (image shown below). The component "nodes" that build the shader step by step are clearly marked by role: the colour background is generated by the nodes inside the frame called "Main Background", the section related to the material normal is "Normal", and the specular particles and shining moving stars are generated by the section called "Rim". The design logic is therefore neatly separated into those parts, and I can easily change any of them manually according to my own design requirements.
Responding to the rotation and displacement defined in the shader graph, applying this material makes the character's surface shine and come alive with moving clouds and particles while the colour gradient moves. In the end, these colour changes are also reflected on the water surface, since the water shader's reflection property duplicates every colour above it.
The material is controlled by the shader, which maintains its own reaction to environment lighting, light bouncing, and shadowing, but with more creative details that producers can play with.
Galaxy shader graph overview
Main Background
The colour rendered by the shader can easily be changed by finding the nodes in the shader graph related to "Base Color" or "Color". In this case, a colour node is connected to each "cube" section of every reflected cubemap, which decides which colours are rendered across all surfaces carrying this material. The galaxy shader has three such nodes to specify the grading colours and the positioning of each colour, and since the displacement and movement are already declared on the left-hand side of the graph, a lifelike galaxy can have its colour changed and its speed adjusted easily.
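If a colour ever needs to change at runtime rather than in the graph, the same exposed property can be set from a script. A small sketch, assuming the graph exposes its colour under the reference name "_BaseColor" (the real name is whatever the graph's Blackboard shows):

using UnityEngine;

// Sketch: tint the galaxy material at runtime. "_BaseColor" is an
// assumed property reference name from the Shader Graph Blackboard.
public class GalaxyTint : MonoBehaviour
{
    public Color tint = new Color(0.4f, 0.2f, 0.8f);

    void Start()
    {
        var rend = GetComponent<Renderer>();
        rend.material.SetColor("_BaseColor", tint); // instance material, not shared
    }
}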
Post-processing & Detail setting
To make the body parts interact with the water surface, I simply placed the little ball prefab on each body part that touches the water. For instance, the character's hands must be among them, so I found the rig reference prefab and placed the ball under the hands that carry the animation. This way we don't have to animate the ball separately to make it follow the hands; we can just drag and drop the ball prefab under the skeleton that has the animation.
Each character mesh has a control rig reference and a skeleton reference, which tell the control rig what rotation or displacement to perform to bring an animation alive. In this case, the water-interactive little ball shouldn't be placed under the control rig, but as a child of a specific body part down inside the "mixamorig:hips" game object; we do this because the control rig isn't the object with the animation attached. Before doing this, the Rigidbody component on the ball mentioned earlier shouldn't be set to "Use Gravity" but to "Is Kinematic", otherwise the ball will drop straight down the moment you hit the play button.
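A rough sketch of that parenting step done from code rather than by hand. The bone name "mixamorig:RightHand" and the helper below are illustrative assumptions based on Mixamo's usual naming, not the project's actual setup:

using UnityEngine;

// Sketch: parent the interaction ball under an animated bone so it
// follows the hand without its own animation, with gravity disabled.
public class AttachBallToHand : MonoBehaviour
{
    public GameObject ballPrefab;

    void Start()
    {
        // search the animated skeleton for the hand bone (Mixamo naming assumed)
        Transform hand = FindDeep(transform, "mixamorig:RightHand");
        if (hand == null) return;

        GameObject ball = Instantiate(ballPrefab, hand);
        ball.transform.localPosition = Vector3.zero;

        var rb = ball.GetComponent<Rigidbody>();
        if (rb != null) rb.isKinematic = true; // follow the bone, ignore gravity
    }

    Transform FindDeep(Transform root, string name)
    {
        foreach (Transform child in root.GetComponentsInChildren<Transform>())
            if (child.name == name) return child;
        return null;
    }
}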
This is a water surface asset that can generate waves from user interaction, available in Unity's Universal RP and HDRP, using lightweight wave calculation on a compute shader and a water refraction effect that does not collapse under VR binocular vision. In addition, when tiles are placed on the surface of the water, the waves propagate across them, which can be used to express elongated shapes and wide lake surfaces. By drawing wave obstacle information on a mask texture, you can create a water surface that reflects waves in any shape. A buoyancy sample that floats objects on the water, often used together with the surface, is also included. As an extension of the tiling idea, the author added a six-sided sphere mesh object and a sample of waves propagating on the sphere, plus samples of gravity and buoyancy towards the centre of the sphere. You can also colour the surface of the water by specifying any colour, and there is a refracting coloured glass material, although it's not a water surface. In addition, there is a sample that moves a Humanoid in VR and interacts with the water surface using the grip and trigger of the controller. The height of the water surface can be obtained in script by specifying a position in real time.
It is strongly recommended to use it on a PC. Waves are simulated by GPU compute shaders, so it runs smoothly in environments with plenty of GPU compute. You can also build for mobile devices; however, if the device's GPU performance is limited, it will not run at a high frame rate like a PC. WebGL will not work, because the compute shaders used are not supported by WebGL.
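As a usage sketch of that last feature: the asset's real query method isn't named on this page, so GetWaterHeight and the WaterSurfaceSketch component below are stand-ins that only show how a real-time height query could keep an object riding on the waves:

using UnityEngine;

// Hypothetical usage: keep an object on the simulated surface by
// sampling the water height at its (x, z) position every frame.
public class FloatOnSurface : MonoBehaviour
{
    public WaterSurfaceSketch water; // stand-in for the asset's component

    void Update()
    {
        Vector3 p = transform.position;
        p.y = water.GetWaterHeight(p); // sample simulated wave height
        transform.position = p;
    }
}

// Stand-in so the sketch is self-contained; the real asset reads this
// back from its compute shader simulation.
public class WaterSurfaceSketch : MonoBehaviour
{
    public float GetWaterHeight(Vector3 worldPos)
    {
        return 0f;
    }
}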
Post-processing volume & lights
Post-processing is a generic term for a full-screen image processing effect that occurs after the camera draws the scene but before the scene is rendered on the screen. Post-processing can drastically improve the visuals of your product with little setup time. This function is based on both the camera settings and a volume set as global that has post-processing properties attached, and the camera that carries this volume component should have its "Post Processing" option enabled.
Camera settings
Volumes can contain different combinations of Volume overrides that you can blend between. For example, one Volume can hold a Physically Based Sky Volume override, while another Volume holds an Exponential Fog Volume override.
The Post-process Volume component allows you to control the priority and blending of each local and global volume. You can also create a set of effect overrides to automatically blend post-processing settings in your scene.
The Depth of Field effect blurs the background of your image while the objects in the foreground stay in focus. This simulates the focal properties of a real-world camera lens. A real world camera can focus sharply on an object at a specific distance. Objects nearer or farther from the camera’s focal point appear slightly out of focus or blurred. This blurring gives a visual cue about an object’s distance and introduces “bokeh” which refers to visual artefacts that appear around bright areas of the image as they fall out of focus.
Bloom is an effect used to reproduce an imaging artifact of real-world cameras. The effect produces fringes of light extending from the borders of bright areas in an image, contributing to the illusion of an extremely bright light overwhelming the camera or eye capturing the scene .
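A short sketch of driving these two effects from code under URP, assuming the scene already has a global Volume whose profile contains Bloom and Depth of Field overrides added in the Inspector:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: tweak Bloom and Depth of Field on a global Volume at runtime.
// The intensity and focus values are placeholders, not our final settings.
public class PostFxTweaker : MonoBehaviour
{
    public Volume globalVolume;

    void Start()
    {
        if (globalVolume.profile.TryGet(out Bloom bloom))
            bloom.intensity.value = 1.5f; // stronger neon glow

        if (globalVolume.profile.TryGet(out DepthOfField dof))
        {
            dof.mode.value = DepthOfFieldMode.Bokeh;
            dof.focusDistance.value = 3f; // focus near the dancer
        }
    }
}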
SkyDome
To make the whole environment "round", I used Maya to model a rounded sky-dome mesh with inverted normals, so the sky texture renders only on the inside of the dome; the teleportation area is circled by a cylinder-shaped ground with a collider on it. With this method I made the world round, completing the game-space design.
RENDERING
Render Test
Water Shader Rendering
Body Shader Rendering
The shader used has a high polygon cost that might crash within the VR environment, so several rendering tests were required during the project-building process to make sure each added element did not exceed the rendering load the VR headset can carry.
Rendering brief
Connection between scenes
In our project, "The Other Realms," we aimed to create a seamless connection between our 2D animations and two VR experiences using a lighting flying particle. This particle acted as a visual element that passed through the entire storyline, serving as a transitional element and establishing connections between scenes. By introducing a consistent visual element that travelled through the narrative, we enhanced the overall coherence and engagement of the performance. The particle not only facilitated smooth transitions between scenes but also added intrigue and anticipation for the audience. Its movement and behaviour were synchronized with the accompanying music, evoking emotional responses and heightening the impact of the performance.
As group mates, we collaborated closely to ensure the successful implementation of the lighting flying particle throughout the project. We allocated responsibilities and tasks among ourselves, with one member focusing on animating the particle, another integrating it into the VR experiences, and the third synchronizing its movement with the music. Regular meetings and constant communication allowed us to exchange feedback, refine our approaches, and ensure a cohesive execution. Throughout the development of our project, we encountered challenges and made adjustments to enhance the effectiveness of the lighting flying particle. We experimented with different particle effects settings and fine-tuned the synchronization with the music, ensuring that the particle’s movements complemented the rhythm and mood of each scene.
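One hedged way to picture that music synchronization: sample the track's loudness each frame and let it drive the particle emission rate. Our actual tuning was done by hand in the editor, so the script below is only an illustration with placeholder values:

using UnityEngine;

// Sketch: drive a particle system's emission rate from the current
// loudness (RMS) of an AudioSource, so the particle pulses with the music.
[RequireComponent(typeof(ParticleSystem))]
public class ParticleMusicSync : MonoBehaviour
{
    public AudioSource music;
    public float baseRate = 5f;        // emission when the music is silent
    public float loudnessScale = 200f; // how strongly loudness adds emission

    private ParticleSystem ps;
    private readonly float[] samples = new float[256];

    void Start()
    {
        ps = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        music.GetOutputData(samples, 0); // raw waveform of what is playing
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float loudness = Mathf.Sqrt(sum / samples.Length); // RMS loudness

        var emission = ps.emission;
        emission.rateOverTime = baseRate + loudness * loudnessScale;
    }
}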
In conclusion, the lighting flying particle played a significant role in “The Other Realms” project by connecting our 2D animations and VR experiences. It served as a transitional element, seamlessly guiding the audience through different scenes, and symbolising the unity and interconnectedness of the various realms explored in our performance. Through collaboration, experimentation, and attention to detail, we successfully integrated the particle into our project, enhancing its visual appeal, coherence, and immersive qualities. The lighting flying particle truly brought our project to life, creating a captivating and fluid experience for our audience.
Documentation
The tutor from LCDS organized a documentation filming activity on the day of our final presentation with the dancers. This recorded our whole presentation on camera, showing how the work was done, along with interviews with tutors from both UAL and LCDS.
Thrilled to have led the first collaboration between UAL and LCDS, in this new BA tackling the use of creative technologies in dance and virtual production. Supported by case studies and hands-on explorations, spent the past few weeks covering:
– Cross-field collaborations. – Differences in Mocap technologies. – Incorporating AI tools. – UX in Hybrid Performance.
Thank you to Omari Carter and Manos Kanellos for making this happen. Congratulations to both UAL and LCDS students for collaborating through dance and VR production. Check out the short video below:
Critical Reflection
Critical Reflection on “The Other Realms” VR Dance Project
Introduction: As a student who actively participated in the creation of the “The Other Realms” project, which aimed to connect virtual reality (VR) with 2D animations through contemporary dance, I would like to critically reflect on my experience. This project presented numerous challenges and opportunities, allowing me to explore the intersection of creative media, dance, and technology. In this reflection, I will discuss the strengths and weaknesses of our project, the collaborative process, the integration of motion capture technology, and the overall learning outcomes.
Strengths and Weaknesses: “The Other Realms” project offered a unique and innovative approach to contemporary dance by combining VR, 2D animations, and motion capture. The use of VR as the main stage for the dance performance provided an immersive and engaging experience for the audience, enabling them to feel as if they were part of the performance. The integration of 2D animations added a complementary visual element, allowing the audience to view the performance from different perspectives.
One of the strengths of our project was the choice of music. The selection of vaporwave music added an atmospheric and dreamy backdrop to the dance movements, enhancing the emotional impact of the choreography. The music provided a sense of continuity and flow to the performance, creating a unified experience. Another strength was the collaboration between the dancers and the animation team. The dancers’ input and understanding of the choreography were crucial in creating meaningful interactions between the virtual and animated worlds. The seamless transitions between the VR environment and the 2D animations added depth and dynamism to the performance.
However, there were also some weaknesses that we encountered during the project. One notable challenge was the limitations of the motion capture technology we initially used. The AI video recognition technique did not accurately capture the intricate dance movements, resulting in unnatural and stiff animations. This required us to pivot and explore alternative solutions, leading us to incorporate the Rokoko motion capture suit. Although this introduced efficiency and improved results, it would have been ideal to have used this technology from the beginning to avoid wasting time and effort. Another weakness was the initial struggle with rigging and animating the 3D character model. The process of manually adding static bones and binding them to the model proved to be time-consuming and complex. However, we eventually found a more efficient solution through Mixamo, which allowed us to automatically bind the skeletal system to the model using AI algorithms. This saved us valuable time and enabled us to focus on refining the animations.
Collaborative Process: The collaborative process was a vital aspect of our project’s success. Working with a group of professional dancers from the London Contemporary Dance School provided valuable insights into choreography, movement, and the expressive potential of the human body. Their artistic input and expertise were crucial in creating choreographies that effectively utilized the capabilities of VR and animation. Communication and coordination were key factors in the collaborative process. Regular meetings, attending rehearsals, and discussing the choreography with the dancers and choreographer helped us align our creative visions. It was essential to understand the intended theme, mood, and emotional impact of the choreography to effectively translate it into the VR and animation elements.
Integrating Motion Capture Technology: The integration of motion capture technology played a significant role in our project. The initial use of Rokoko’s AI video recognition technique proved to be inadequate for capturing complex dance movements. However, after transitioning to the Rokoko motion capture suit, the process became more efficient and accurate. The ability to capture the nuances of the dancers’ movements in real time greatly enhanced the authenticity and realism of the animations.
The use of motion capture technology also facilitated a more engaging and motivating experience for the dancers. Wearing the motion capture suit allowed them to see their movements carried over into digital characters, making the process more tangible and rewarding.
Virtual Experience Developer: Ronger Huang & Clara Childerley Garcia
Sound Arts Designer:
Juice Shuting Cui
Maria Grigoriu
Rysia Kaczmar
Benjamin Thorn
Hanifa Uddin
Jacob Lyttle
Week 1 Unit Introduction and Brainstorming
A rough central theme of Surrealism was decided at the first in-person meeting, held a week after the introduction, with a very high rate of agreement; all the brainstorming was then based on this central topic, relating to fantastical, unreal and creepy VR visuals and sound arts.
Breton’s definition of surrealism (more automatic/exploration of lucidity and dreams ) is “Psychic automatism in its pure state, by which one proposes to express—verbally, by means of the written word, or in any other manner—the actual functioning of thought. Dictated by thought, in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.”
Magritte's surrealism is more representational. René Magritte described his paintings as "visible images which conceal nothing; they evoke mystery and, indeed, when one sees one of my pictures, one asks oneself this simple question, 'What does that mean?'. It does not mean anything, because mystery means nothing either, it is unknowable."
Surrealism is form following function – the function being to contradict the constraints of perceived reality.
Surrealism is a cultural movement that developed in Europe in the aftermath of World War I in which artists depicted unnerving, illogical scenes and developed techniques to allow the unconscious mind to express itself.[1] Its aim was, according to leader André Breton, to “resolve the previously contradictory conditions of dream and reality into an absolute reality, a super-reality”, or surreality.[2][3][4] It produced works of painting, writing, theatre, filmmaking, photography, and other media.
"the mash-up of cuteness and darkness is the central theme to Madoka, and Kyubey is an epitome of that theme". A central goal in Urobuchi's writing was to highlight the moral and ethical dissonance between Kyubey and the young middle school girls, done through actions like Kyubey eating its own corpse to recycle energy. He compared the character to monsters in the works of horror fiction author H. P. Lovecraft, commenting on Kyubey: "he isn't evil, it is his lack of feelings that make him scary".
Surrealism aims to revolutionise the human experience. It balances a rational vision of life with one that asserts the power of the unconscious and dreams. The movement’s artists find magic and strange beauty in the unexpected, uncanny, disregarded, and unconventional. At the core of their work is the willingness to challenge imposed values and norms, and a search for freedom.
This weird, creepy style of animation from Madoka was the very first thing that came to my mind. This particular final monster refers to a girl who wanted to save her friend; the designer was keen to build a broken image of a lost girl who finally went mad. The queen monster is sad and upset, mostly lost, and is led by a group of little creatures acting like her dark inner side. The toys, so to speak, are in a cute style that would only appear in a children's storybook, yet they are cutting things up with smiling faces, which could be why it looks so creepy; even the bloody moments are all painted in a childlike style.
"Dark fairy tale" was the best guide for my research on the theme at the time, so I searched for more material under the keyword "Creepy collage arts" to find out what kinds of strange feelings the contrast between 2D and 3D animation can create. A group of girl soldiers wearing dresses grabbed my attention, and it felt even more uncomfortable to see so many repeated characters appearing continuously; the repetition is a little unsettling because it generates a feeling of fear. So this time, repetition would be the method for creating a weird gaming environment.
For further research on themes and level designs, I focused on "collage art", "creepy models", "endless environments", and "contrast between real and unreal". Through internet research, I found several examples that explained my understanding best, including artworks from Korean artists, Japanese animators, and industrial design showcases. The image of having a monster in each level more or less settled in my mind: an interactable, lifelike talking face in a cyberpunk room; several elders playing with collaged animals; a huge blue bear in an abandoned construction area, etc.
After the idea-sharing process in our first week and a half, the initial design changed a lot in order to make it achievable: adding lighter colours, finding objects that are more abstract, linking the surreal idea into the design, etc.
Alice in Wonderland
From its first appearance in 1865, Alice’s Adventures in Wonderland was devised as a visual work as much as a text. In the first room of the exhibition, we can marvel at the original manuscript by Charles Lutwidge Dodgson (who adopted Carroll as a pen name) with his own sketches of characters, plus learn how the Oxford maths tutor recruited John Tenniel as the illustrator for its original publication.
What's it all about? Imagination, of course, but there's more to this "literary nonsense." Just like kids' adventures in real life, Alice's adventures in Wonderland help her work through awkwardness, confusion, and sadness to get where she's going. Alice in Wonderland is a story that can't be forgotten; it is so inspiring that many people find great inspiration in it and make very interesting DIY crafts.
From its first appearance, the main theme was more or less decided as a fantastical wonderland containing several imaginative experiences that take the player into another environment: a VR sound experience where everything happens like a wonderful dream. A sun-drenched attic may be an unfamiliar space, or it may evoke a certain moment in your memory. It's like an unrealistic corridor into an unknown universe, where you move between fantasy and reality, not knowing what is real and what is not. You may catch a glimpse of an interesting story on the way, or come across a mystery, but all you have to do is return to that initial space and to a memory that may not be real. A whale sealed in a painting, a music box that leads to another world, a memorable book, and a vintage computer that still faintly works will be the gateways to fantasy, leading you to infinite possibilities. In the end, when you have returned to this attic countless times, it will become your 'reality' and your 'place of belonging'.
Week 1.5 Narrative Buildup
Narrative type
These are the storytelling formats we considered: linear and non-linear.
a linear environment could be seen as beneficial as mentioned above (like one long hallway)
could also be more nodal
In the “telling a story element” another could be environmental storytelling, where the user moves through an open space creating their own narrative through interaction with objects and the environment.
in writing about this, I’m more referring to a space that the user/player moves through, as opposed to a ‘rollercoaster’ experience (I guess the pure experiential?) where the user is mostly stationary and the environment happens around them.
There are still elements that could be interesting with this, the film A Ghost Story by David Lowery is set in a house that moves through time (based on the perception of the camera/main character)
could also explore a matryoshka/Russian doll kind of idea where the user's surroundings gradually expand (like walls crumbling down to reveal a larger space)
The final result of discussing the narrative form is that we are going with the second option, non-linear: the player can choose their own starts and endings, which may differ from person to person and lead to different endings. The simple idea is a main space where the player chooses what to interact with and returns to, plus several interactable objects acting as transportation points to the other rooms.
Functions follow form
This refers to a game/experience design that is based on a specific style: when the form/style comes first, the function is designed in accordance with the form. "Form follows function" would mean the opposite: given that a space's purpose is to serve as an office, it should be designed in a manner appropriate for an office (modular, spacious and efficient). In this case, our project is based on "function follows form", where the style of Surrealism was decided before the logical design flow.
Storyboard & Storyline
By imagining an attic room as the start of the whole game, I designed a playable space as an introduction and pulled out the idea of a creepy but surreal talking face appearing at the attic window to make the atmosphere feel stranger (shown on the second board). A simple scene transition was made for the music box, which leads the player to the dance hall (shown on the first board).
The player would be able to interact with four objects in the bedroom. The objects would encourage interaction through sound.
The interaction involves the player shrinking (mechanics to be discussed) to the object’s scale and entering the world attached to it. (transitioning from the macro of the hub to the micro of the objects)
The interactive objects were chosen for their ability to create virtual worlds.
Environmental Storytelling/Exploration
Each location has different aesthetics, and ‘rules’ of sound.
This leads to five storyboards, one for the story happening in each scene. For example, the main scene has its own storyline with four possible progressions; the music box (aka the dance hall) has a specific line built around finding puzzles around the scene; and the painting scene basically progresses via a whale bone or something similar, with instructions embedded in the environment.
The storyboards shown below are for the music box and the main attic-room scene, with the description of each story progression listed.
The gaming flow and story are presented to the player in this way: a music box, a painting, a computer, and a book each serve as an open door to a world the player can go and explore.
Exploring the imaginary worlds/virtuality of objects. Interacting with an object in the bedroom enters its world.
Initially, we were going with a more representational surrealist style, but it’s a bit closer to magic realism.
A Miro board that facilitates collaborative brainstorming and idea generation can greatly enhance group development in game scene designs. By providing a shared digital canvas, it allows every team member, including VR/game designers and sound art designers, to contribute their imaginations and ideas to each game scene design. This open and inclusive approach encourages diverse perspectives and fosters creativity within the group.
With a Miro board, team members can visually express their ideas by adding images, sketches, and text annotations to specific game scenes. This enables a free-flowing exchange of concepts and stimulates a wide range of possibilities. Different team members may have unique insights or expertise that can contribute to the overall design, and the Miro board allows for the seamless integration of these diverse ideas.
The interactive nature of a Miro board also encourages collaboration and builds upon the suggestions made by others. Team members can build upon existing concepts, add their own elements, and create a rich tapestry of ideas. Discussions can take place directly on the board, fostering real-time communication and enabling the refinement of ideas through constructive feedback and iteration.
Overall, a Miro board empowers the team to collectively generate a variety of ideas, leveraging the strengths and perspectives of each member. It encourages a collaborative and inclusive environment that leads to more innovative and comprehensive game scene designs, ultimately enhancing the quality of the final product.
Team roles
Collaboration between virtual reality (VR) or game designers and sound art designers is crucial to creating immersive and engaging experiences. Establishing an effective workflow between these two teams can greatly enhance the quality and coherence of the final product. A workflow chart serves as a visual representation of the process, outlining the various stages, tasks, and dependencies involved in the collaborative effort.
The workflow between VR/game designers and sound art designers typically begins with concept development. Both teams come together to brainstorm and discuss the overall vision, theme, and aesthetic of the project. This initial stage is important for aligning their creative ideas and ensuring a shared understanding of the desired outcome.
Once the concept is established, the workflow moves into the design phase. VR/game designers focus on creating the virtual environment, designing characters, and mapping out gameplay mechanics. Simultaneously, sound art designers start crafting audio assets such as music, sound effects, and ambient sounds. Continuous communication is essential during this phase, as the sound designers need to understand the specific requirements and timing of the visuals, while the VR/game designers should provide feedback and guidance to ensure the audio complements the immersive experience.
After the design phase, the workflow progresses to implementation. The VR/game designers integrate the visual assets and mechanics into the virtual environment, while the sound art designers work on integrating the audio elements. This stage requires close collaboration to synchronize visuals with corresponding sounds, ensuring a seamless and immersive user experience. Regular meetings and iterative testing help identify and resolve any issues or discrepancies between the audio and visual components.
Once the implementation is complete, the workflow transitions into the refinement and polish stage. Both teams work together to fine-tune the details, optimize performance, and enhance the overall audio-visual experience. This stage often involves multiple iterations and feedback loops to achieve the desired level of quality.
A workflow chart acts as a roadmap for this collaborative process. It provides a clear overview of the sequential steps involved and helps manage the workflow efficiently. The chart can include milestones, deadlines, and designated responsibilities for each team member, ensuring that everyone is aware of their role and the project’s progress. By visualizing the workflow, it becomes easier to identify bottlenecks, allocate resources effectively, and maintain effective communication between VR/game designers and sound art designers.
In conclusion, establishing a well-defined workflow and utilizing a workflow chart is essential for effective collaboration between VR/game designers and sound art designers. It promotes clear communication, ensures alignment of creative ideas, and helps manage the project efficiently, resulting in a cohesive and immersive virtual reality or game experience.
Mechanics Overview
Interactive objects — This will be applied to each object in the starting room to increase the "exploring" factor
"shrinking" effects — In some special cases, make the surrounding environment objects larger, or make the player's own perspective proportionally smaller, to achieve an effect similar to "Alice in Wonderland" (see the sketch after this list)
Triggering conditions — Special scenes or perspective changes that appear when the player meets certain conditions or interacts with certain objects
Scene transitions — The cinematic transition effect
Special effects (environmental) — In terms of “surrealism”, several special processes are to beautify the rendering effect of the environment. (e.g. Particle, reflection, realistic lights)
3D painting effect — A popular presentation of 3D artworks (pics or paintings) that combines 2D plane and 3D contents.
Face capture — A unity program that allows detailed 3D facial expressions and auto-generated face animations.
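A minimal sketch of the "shrinking" mechanic from the list above: scaling the whole player rig down makes the room appear to grow around the player. The rig reference and the speed values are assumptions for illustration:

using UnityEngine;

// Sketch: smoothly scale the player rig towards a target scale.
// Scaling the rig root also scales the camera height, which is what
// sells the "Alice in Wonderland" effect.
public class ShrinkPlayer : MonoBehaviour
{
    public Transform rigRoot;        // the XR Origin / player rig (assumed)
    public float targetScale = 0.1f; // one tenth of normal size
    public float shrinkSpeed = 1f;   // scale units per second

    private bool shrinking;

    public void StartShrink() // call this from a trigger or interaction
    {
        shrinking = true;
    }

    void Update()
    {
        if (!shrinking) return;
        float s = Mathf.MoveTowards(rigRoot.localScale.x, targetScale,
                                    shrinkSpeed * Time.deltaTime);
        rigRoot.localScale = new Vector3(s, s, s);
        if (Mathf.Approximately(s, targetScale)) shrinking = false;
    }
}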
A list of mechanic functions was drawn up after giving that basic picture to the sound art students. For my two scenes it basically consists of password input and puzzle recognition: the password will be in the painting scene, and the puzzle pattern in the music box scene.
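A tiny sketch of the password idea: compare the digits the player has entered against the combination hidden in the painting scene. The combination, the event wiring, and the UnityEvent usage are all illustrative assumptions, not the shipped implementation:

using UnityEngine;
using UnityEngine.Events;

// Sketch: collect digits from in-world buttons and fire an event
// when the entered code matches the hidden combination.
public class PasswordPad : MonoBehaviour
{
    public string combination = "0451"; // placeholder code
    public UnityEvent onUnlocked;       // e.g. open a door, start a transition

    private string entered = "";

    public void PressDigit(int digit) // wire each button to this
    {
        entered += digit.ToString();
        if (entered.Length < combination.Length) return;

        if (entered == combination)
            onUnlocked.Invoke();
        entered = ""; // reset after every full attempt
    }
}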
Main Scene
Design & Modeling
This is the place where everything starts: an old attic that might be familiar to everyone. It is a magical space where the player will not know what is going to happen when they touch anything or get close to something; an ordinary rooftop space full of dust and unused stuff, but also, possibly, a doorway to another universe.
The earliest step in a well-thought-out design is to visualise the design in the minds of each member of the group through picture references, gradually and thoroughly understanding each member's ideas, highlighting the parts of the design that could be improved and implemented from the perspective of the actual designer, and putting them together for realisation and adjustment. In this process there is of course no shortage of trial and error, including problems such as poor models, inappropriate materials, and imagery too abstract for modelling purposes. As the designer and operator, I took on the responsibility of communicating with and correcting members: if a member's idea was too difficult to realise, the rest of the group and I would explain the problem and pass the task to another member. I also made sure that the other person's wishes and feelings were taken into account, so that we did not simply reject or deny, but included compliments and acknowledgements in the dialogue as appropriate. This was particularly important for me as the designer, because all the unrealistic and realistic designs would be presented and visualised by me alone, so during the whole project there was no group meeting I could afford to miss.
Billed as a “story exploration game,” Gone Home has users exploring an empty house and piecing together why no one is home.
My imagination was more dominant in the design and realisation of the main scene, so I had to communicate more frequently with the group. The starting room was not originally an old attic, but rather a child's bedroom, designed as a 'cradle of memories'. The story of the other realms would start in a cosy but old room; the idea was to guide the player to explore this strange but familiar space and find objects to interact with, so the area had to be familiar to everyone. However, a boy's room would be mainly blue and grey with personal elements such as basketball games, while a girl's room would generally be pink and white with more personal items such as cute dolls, so I decided to talk to the group about changing the bedroom setting to a loft that every family would have. An attic is a space that is inherently old and stereotypically evocative, and at the same time it can hold any object that seems impractical, such as a discarded computer, an old music box, or even an unused tricycle, all of which seem logical in an attic.
Environment
This particular attic model, named “Grandpa’s Attic,” was obtained from the Unreal Engine assets store. Its pre-existing design saved me valuable time, as I didn’t have to engage in extensive modelling work during the environment-building process. However, finding a suitable asset proved to be a challenge. The images showcased the model in a way that didn’t align with the desired theme for the main room. While the original model featured elements related to “life,” “bedroom,” and “attic rooftop,” its style and environment presentation didn’t precisely reflect the “familiar to everyone” setting we intended to create. Additionally, the assortment of items within the model, such as snacks, unwashed dishes, and tobacco cases, didn’t fit the concept of childlike objects, including abandoned toys, books, and children’s artwork that we were aiming for. Notably, the presence of a motorcycle in the corner further deviated from our vision.
Consequently, we made the decision to change assets, recognizing the need for a more appropriate model. However, finding a detailed model that fulfilled our specific requirements proved to be a challenging task, necessitating extensive Internet research. After selecting a suitable asset from the Unreal Engine asset library and successfully acquiring the package, our focus shifted towards transferring the assets from Unreal Engine to Unity, which presented its own set of difficulties. I had to rely on tutorials to learn how to import the prefab asset folder into Unreal Engine and ensure that each material was appropriately aligned, and ready for export. Working with prefab folders in UE5, while assigning all the necessary materials, proved to be a complex task during that period, as I was still unfamiliar with the process.
Overall, the process of sourcing and implementing the appropriate assets required careful consideration and effort to align with our project’s professional standards.
These are the interactive objects placed in the main scene: a painting, a music box, a computer, etc.
Mechanics
The mechanics research for this scene was quiet at the very start; the main goal was to make the scene transitions feel natural for everyone. The original idea was to have an animation triggered on the music box, so that once the player picked up the music box the animation would play, and to use a stencil shader on the painting to create a lifelike picture on the wall that the player could approach.
Music Box
For the music box trigger, the core is an animation on the box itself, so the very first step was obviously to animate a music box.
The first thing to do, then, was to find a model for the music box and animate it.
The needs:
- An animated music box that can be triggered by the player's hand
- A melody playing at the same time as the "rolling" animation
- A scene transition after the animation has played
// Shared usings for the C# snippets in this post (omitted from the later excerpts):
using System.Collections;
using TMPro;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.SceneManagement;

public class SceneTransition : MonoBehaviour
{
public string SceneName;
public bool SceneHasTriggered = false;
public GameObject ItemToDestory;
public WorldManager WM;
public Animator FadeAnim;
public GameObject WhalecCard;
public Transform Player;
public Transform Target;
public float distance;
public Vector3 dirToTarget;
private void Start()
{
WM = GameObject.Find("WM").GetComponent<WorldManager>();
//check world manager script, is it 1st time visit? or 2nd
//if 2nd time, destroy me
WM.TriggerArea1 = transform.gameObject;
WM.ST_Script = transform.gameObject.GetComponent<SceneTransition>();
if (WM.LevelState == "MB" && WM.LevelPlayed == 2)
{
Destroy(this.gameObject);
WhalecCard.SetActive(true);
}
if (WM.LevelState == "PT" )
{
Destroy(this.gameObject);
WhalecCard.SetActive(true);
}
}
// Load the next scene the first time the player enters the trigger area
void OnTriggerEnter(Collider other)
{
if (!SceneHasTriggered && other.tag == "Player")
{
//FadeAnim.SetBool("SceneChanging", true);
SceneHasTriggered = true;
WM.LevelState = "PT";
WM.LevelPlayed += 1;
SceneManager.LoadScene(SceneName);
Debug.Log("Scene loaded");
//change world manager state, +1 visit
//Destroy(ItemToDestory);
}
}
public void Update()
{
distance = Vector3.Distance(Player.position, Target.position);
if(distance < 2.2f)
{
FadeAnim.SetBool("SceneChanging", true);
}
}
}
public class Musicbox_Animation : MonoBehaviour
{
//public XROrigin XROrigin;
public Animator MusicBoxAnim;
public bool MscBoxhasAnimated = false;
public GameObject ItemToDestroy;
public Animator FAnim;
public GameObject MusicCard;
public WorldManager WM;
void Start()
{
WM = GameObject.Find("WM").GetComponent<WorldManager>();
WM.MusicboxOpening = transform.gameObject;
WM.MA_Script = gameObject.GetComponent<Musicbox_Animation>();
if (WM.LevelState == "MB" || WM.LevelPlayed == 2)
{
Destroy(this.gameObject);
MusicCard.SetActive(true);
}
}
private void OnTriggerEnter(Collider other)
{
if (!MscBoxhasAnimated && other.tag == "Player")
{
MusicBoxAnim.SetBool("BoxOpen", true);
}
}
public void OnSceneFade()
{
FAnim.SetBool("SceneChanging", true);
}
public void OnMusicBoxAnimationEnd()
{
MscBoxhasAnimated = true;
WM.LevelState = "MB";
WM.LevelPlayed += 1;
SceneManager.LoadScene("DanceHall");
Debug.Log("Scene loaded");
}
// Update is called once per frame
//void Update()
//{
// if(MscBoxhasAnimated == true)
// {
// Destroy(ItemToDestroy);
// MusicBoxAnim.SetBool("BoxOpen", false);
// }
//}
}
The animation was made in Maya by simply adding and playing keyframes on the music box cover, with a rolling-melody animation playing after the box is opened. Both were made inside the same animation clip instead of separate ones, because the script controlling the animation's play point depends on the player's triggering action, recognised through OnTriggerEnter on an invisible trigger area. If I used two different animation clips, it would be hard for a single trigger point to know which of them should play; and if I placed two different trigger areas on the same object, it would be hard for the controller to interact with exactly one of them in the correct order, where the cover opening must logically happen before the melody animation.
As a result of this consideration, I chose to put the two animations in one clip, triggered by a single player action, which was much more achievable and made more sense at this point.
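For reference, the end-of-clip callback that the script relies on (OnMusicBoxAnimationEnd) can be wired up either in Unity's Animation window or from code. Below is a minimal sketch of the code route; it is not the project's actual setup, and it assumes the combined open-plus-melody clip is assigned as an AnimationClip reference:
using UnityEngine;

public class MusicBoxEventSetup : MonoBehaviour
{
    public AnimationClip OpeningClip; // assumed reference to the combined clip

    void Awake()
    {
        // Attach an animation event at the very end of the clip that calls
        // OnMusicBoxAnimationEnd() on this GameObject's scripts.
        var evt = new AnimationEvent
        {
            functionName = "OnMusicBoxAnimationEnd",
            time = OpeningClip.length
        };
        OpeningClip.AddEvent(evt);
    }
}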
Several references to the World Manager also had to be added to the Musicbox_Animation script, so that it can recognise whether the player has entered the music box scene before, destroy the music box's physical state, and generate a music-note wooden puzzle the player needs in order to exit the game.
public class MusicBoxPlay : MonoBehaviour
{
public AudioClip Musicbox;
public AudioSource AudioSource;
public AudioClip CollideSound;
// Start is called before the first frame update
void Start()
{
}
private void OnCollisionEnter(Collision collision)
{
AudioSource.PlayOneShot(CollideSound);
}
// Update is called once per frame
void Update()
{
}
// Called via an animation event when the melody should start
void MusicboxSound()
{
AudioSource.PlayOneShot(Musicbox);
}
}
public class InstrumentPlay : MonoBehaviour
{
public AudioSource AudioSource;
public AudioClip[] GuitarClips;
// Play a random guitar sound when the player enters the trigger area
void OnTriggerEnter(Collider other)
{
if (other.tag == "Player")
{
PlayRandomGuitarSound();
}
}
void PlayRandomGuitarSound()
{
int randomIndex = Random.Range(0, GuitarClips.Length); // Generate a random index within the array length
AudioClip randomClip = GuitarClips[randomIndex];
AudioSource.PlayOneShot(randomClip);
}
}
public class CollisionSound : MonoBehaviour
{
public AudioSource AudioSource;
public AudioClip Clip;
// Start is called before the first frame update
void Start()
{
}
// Play the clip whenever another object collides with this one
private void OnCollisionEnter()
{
AudioSource.PlayOneShot(Clip);
}
// Update is called once per frame
void Update()
{
}
}
These small scripts were used to make several interactable sounds in the main scene to immerse the player. The physical interaction is recognised in the same way as the other trigger areas: an invisible area reads the player's contact through OnTriggerEnter, and an AudioSource plays one shot after the interaction. The collision-sound script makes sounds when toys or other interactable objects are dropped on the ground or collide with each other; in this case the collider does not need to be tagged as "Player", since anything physically collidable should trigger it.
public class Follow : MonoBehaviour
{
// Path script (the waypoint circuit to follow)
[SerializeField]
private WaypointCircuit circuit;
// Distance travelled along the route
public float dis;
// Movement speed
public float speed;
public bool ActivateMove; //this activate the cat movement to next point
public string CatPos;
public Animator CatAnim;
public AudioSource AudioSource;
public AudioClip CatCry1;
public AudioClip CatSpeak;
public AudioClip CatPur;
public AudioClip CatStep;
public GameObject ItemToDestroy;
public ActivateCat AC_Script;
public WorldManager WM;
// Use this for initialization
void Start()
{
CatPos = "Cat_At_A";
WM = GameObject.Find("WM").GetComponent<WorldManager>();
WM.cat_model = transform.gameObject;
WM.Cat_Script = gameObject.GetComponent<Follow>();
if (WM.LevelState.Length == 2)
{
ActivateMove = true;
CatAnim.SetBool("Run", true);
}
else {
dis = 0;
}
//speed = 2;
}
void Update()
{
if (ActivateMove == true)
{
CatAnim.SetBool("Run", true);
//AudioSource.PlayOneShot(CatStep);
// accumulate the travelled distance
dis += Time.deltaTime * speed;
// get the position on the route at this distance
transform.position = circuit.GetRoutePoint(dis).position;
// get the facing direction on the route at this distance
transform.rotation = Quaternion.LookRotation(circuit.GetRoutePoint(dis).direction);
speed = 2;
if (WM.LevelState.Length > 0)
{
Destroy(ItemToDestroy);
}
}
}
public void Intro()
{
Debug.Log("Point A triggered");
Destroy(ItemToDestroy);
ActivateMove = false;
CatAnim.SetBool("CatSpeak", true);
//AudioSource.PlayOneShot(CatSpeak);
}
void CatSpeakStart()
{
AudioSource.PlayOneShot(CatSpeak);
}
public void OnCatSpeakEnd()
{
ActivateMove = true;
CatAnim.SetBool("Run", true);
}
void OnTriggerEnter(Collider other)
{
if (other.tag == "PointB") //a Pole loop
{
CatAnim.SetBool("PointB_Stop", true);
ActivateMove = false;
AudioSource.PlayOneShot(CatPur);
print("PointBTriggered");
CatPos = "Cat_At_B";
}
if (other.tag == "Cat_Toy")
{
ActivateMove = true;
CatAnim.SetBool("Ball_Play", true);
}
if (other.tag == "PointD") //a dig
{
ActivateMove = false;
print("PointDTriggered");
CatAnim.SetBool("PointD_Stop", true);
AudioSource.PlayOneShot(CatPur);
CatPos = "Cat_At_D";
}
}
void OnTriggerExit(Collider other)
{
if (other.tag == "PointB")
{
ActivateMove = true;
CatAnim.SetBool("PointB_Stop", false);
CatAnim.SetBool("Ball_Play", false);
CatPos = "Nowhere";
}
if (other.tag == "PointD")
{
ActivateMove = true;
CatAnim.SetBool("PointD_Stop", false);
CatAnim.SetBool("Ball_Play", false);
CatPos = "Nowhere";
}
}
//void OnColliderEnter(Collider other)
//{
// if (CatPos == "Cat_At_B")
// {
// CatAnim.SetBool("Ball_Play", true);
// }
// if (CatPos == "Cat_At_D")
// {
// CatAnim.SetBool("Ball_Play", true);
// }
//}
public void ResetState()
{
ActivateMove = true;
CatAnim.SetBool("Run", true);
//CatAnim.SetBool("PointB_Stop", false);
//CatAnim.SetBool("Ball_Play", false);
CatPos = "Nowhere";
}
}
[System.Serializable]
public class WaypointList
{
public WaypointCircuit circuit;
public Transform[] items = new Transform[0];
}
public struct RoutePoint
{
public Vector3 position;
public Vector3 direction;
public RoutePoint(Vector3 position, Vector3 direction)
{
this.position = position;
this.direction = direction;
}
}
public class WaypointCircuit : MonoBehaviour
{
public WaypointList waypointList = new WaypointList();
[SerializeField] bool smoothRoute = true;
int numPoints;
Vector3[] points;
float[] distances;
public float editorVisualisationSubsteps = 100;
public float Length { get; private set; }
public Transform[] Waypoints { get { return waypointList.items; } }
//this being here will save GC allocs
int p0n;
int p1n;
int p2n;
int p3n;
private float i;
Vector3 P0;
Vector3 P1;
Vector3 P2;
Vector3 P3;
// Use this for initialization
void Awake()
{
if (Waypoints.Length > 1)
{
CachePositionsAndDistances();
}
numPoints = Waypoints.Length;
}
public RoutePoint GetRoutePoint(float dist)
{
// position and direction
Vector3 p1 = GetRoutePosition(dist);
Vector3 p2 = GetRoutePosition(dist + 0.1f);
Vector3 delta = p2 - p1;
return new RoutePoint(p1, delta.normalized);
}
public Vector3 GetRoutePosition(float dist)
{
int point = 0;
if (Length == 0)
{
Length = distances[distances.Length - 1];
}
dist = Mathf.Repeat(dist, Length);
while (distances[point] < dist) { ++point; }
// get nearest two points, ensuring points wrap-around start & end of circuit
p1n = ((point - 1) + numPoints) % numPoints;
p2n = point;
// found point numbers, now find interpolation value between the two middle points
i = Mathf.InverseLerp(distances[p1n], distances[p2n], dist);
if (smoothRoute)
{
// smooth catmull-rom calculation between the two relevant points
// get indices for the surrounding 2 points, because
// four points are required by the catmull-rom function
p0n = ((point - 2) + numPoints) % numPoints;
p3n = (point + 1) % numPoints;
// 2nd point may have been the 'last' point - a dupe of the first,
// (to give a value of max track distance instead of zero)
// but now it must be wrapped back to zero if that was the case.
p2n = p2n % numPoints;
P0 = points[p0n];
P1 = points[p1n];
P2 = points[p2n];
P3 = points[p3n];
return CatmullRom(P0, P1, P2, P3, i);
}
else
{
// simple linear lerp between the two points:
p1n = ((point - 1) + numPoints) % numPoints;
p2n = point;
return Vector3.Lerp(points[p1n], points[p2n], i);
}
}
Vector3 CatmullRom(Vector3 _P0, Vector3 _P1, Vector3 _P2, Vector3 _P3, float _i)
{
// comments are no use here... it's the catmull-rom equation.
// Un-magic this, lord vector!
return 0.5f * ((2 * _P1) + (-_P0 + _P2) * _i + (2 * _P0 - 5 * _P1 + 4 * _P2 - _P3) * _i * _i + (-_P0 + 3 * _P1 - 3 * _P2 + _P3) * _i * _i * _i);
}
void CachePositionsAndDistances()
{
// transfer the position of each point and distances between points to arrays for
// speed of lookup at runtime
points = new Vector3[Waypoints.Length + 1];
distances = new float[Waypoints.Length + 1];
float accumulateDistance = 0;
for (int i = 0; i < points.Length; ++i)
{
var t1 = Waypoints[(i) % Waypoints.Length];
var t2 = Waypoints[(i + 1) % Waypoints.Length];
if (t1 != null && t2 != null)
{
Vector3 p1 = t1.position;
Vector3 p2 = t2.position;
points[i] = Waypoints[i % Waypoints.Length].position;
distances[i] = accumulateDistance;
accumulateDistance += (p1 - p2).magnitude;
}
}
}
void OnDrawGizmos()
{
DrawGizmos(false);
}
void OnDrawGizmosSelected()
{
DrawGizmos(true);
}
void DrawGizmos(bool selected) //this function for DrawingLine Debug
{
waypointList.circuit = this;
if (Waypoints.Length > 1)
{
numPoints = Waypoints.Length;
CachePositionsAndDistances();
Length = distances[distances.Length - 1];
Gizmos.color = selected ? Color.yellow : new Color(1, 1, 0, 0.5f);
Vector3 prev = Waypoints[0].position;
if (smoothRoute)
{
for (float dist = 0; dist < Length; dist += Length / editorVisualisationSubsteps)
{
Vector3 next = GetRoutePosition(dist + 1);
Gizmos.DrawLine(prev, next);
prev = next;
}
Gizmos.DrawLine(prev, Waypoints[0].position);
}
else
{
for (int n = 0; n < Waypoints.Length; ++n)
{
Vector3 next = Waypoints[(n + 1) % Waypoints.Length].position;
Gizmos.DrawLine(prev, next);
prev = next;
}
}
}
}
}
Another interactable thing in the main scene is the cat I used in my last project, which reacts to different objects applied to it to make it "playable". This is achieved by adding two functional scripts to the animated cat model. The first, the waypoint circuit, generates the route of the movement: to move the cat from one point to another, I update its position every frame along the route, producing a smooth movement. During the research process I also found alternative methods of moving an object (a character or the cat). One results in unnatural movement but needs only simple code: editing the position values of the GameObject directly. Another makes the changes through the Animator instead, using several keys at different positions to complete the move; but since I already do a lot of animation on the cat itself, that last method would not be the best choice. The approach I used lets me control the cat's moving path through a WaypointCircuit script and a Follow script, with several GameObjects (cubes) marking the separate points of the path. The Follow script is the control command applier that tells the attached object to follow the path generated by the waypoints: it transitions the object's x, y, z world position along the small points of the path (drawn as a yellow line), and lets the user set exactly the speed and displacement, with ActivateMove in the script controlling when this movement happens.
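For comparison, the simple-but-unnatural alternative mentioned above, editing the GameObject's position directly, might look like the following minimal sketch; the target Transform and speed value here are illustrative, not taken from the project:
using UnityEngine;

public class DirectMove : MonoBehaviour
{
    public Transform Target; // illustrative target point
    public float Speed = 2f;

    void Update()
    {
        // Step straight towards the target each frame; cheap, but the motion
        // is linear and robotic compared to the waypoint-circuit approach.
        transform.position = Vector3.MoveTowards(transform.position, Target.position, Speed * Time.deltaTime);
    }
}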
A special wait-before-start command is also called in this script, where the waiting time can be set to a specific value whenever the game starts. This is done with an IEnumerator coroutine, which works like a timer that begins counting when the game (or some event) starts and makes an action happen after the timing ends; the ToyWalk script later in this post shows the same pattern.
Stencil Shader
The stencil shader is used to make the painting "alive" by turning the 2D image into a 3D model inside a visible mask; the models behind this mask are only visible when the camera looks directly at the front of it.
The stencil shader in Unity URP is a powerful tool that allows developers to create complex rendering effects by selectively rendering specific parts of a scene based on a stencil buffer. The stencil buffer is an additional buffer that stores information about pixel visibility, and it can be used to mark pixels that meet certain criteria defined by the developer. With coding, developers can utilize the stencil shader to achieve various effects such as masking, outlining, and selective rendering.
To use the stencil shader in Unity URP, developers first need to define the stencil buffer operations and comparisons. This is done through coding by setting up stencil states and configuring the desired stencil operations and comparisons. These operations and comparisons determine how the stencil buffer is updated and how pixels are rendered based on the stencil values. For example, a developer can set up the stencil buffer to mark pixels that belong to a specific object or pass a certain depth test.
Once the stencil states are defined, developers can apply the stencil shader to specific materials or renderers in the scene. By attaching the stencil shader to a material, developers can control how the rendered pixels interact with the stencil buffer. For instance, they can choose to render only the pixels that pass a specific stencil test or perform custom operations based on the stencil values. This allows for advanced rendering effects like rendering outlines around objects or applying specific rendering techniques to selected parts of the scene.
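As a concrete illustration, a material's stencil values can be driven from C# when the shader exposes them as properties. This is only a sketch, assuming a hypothetical "Custom/StencilMask"-style shader with _StencilRef and _StencilComp properties; the project's actual shader may name these differently:
using UnityEngine;
using UnityEngine.Rendering;

public class StencilSetup : MonoBehaviour
{
    public Renderer MaskRenderer;   // the mask plane that writes to the stencil buffer
    public Renderer HiddenRenderer; // the 3D content visible only through the mask

    void Start()
    {
        // The mask always writes the reference value 1 into the stencil buffer.
        MaskRenderer.material.SetInt("_StencilRef", 1);
        MaskRenderer.material.SetInt("_StencilComp", (int)CompareFunction.Always);

        // The hidden content renders only where the buffer already equals 1.
        HiddenRenderer.material.SetInt("_StencilRef", 1);
        HiddenRenderer.material.SetInt("_StencilComp", (int)CompareFunction.Equal);
    }
}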
But the trickiest thing to notice before actually starting to make a stencil shader is that Unity has related settings that must be changed for the stencil property to be readable by the engine; otherwise it will not give you any results, even if all your shader settings are right.
Painting Scene
Design & Modeling
In the artistic environment I envision, the predominant color palette revolves around shades of blue, evoking a sense of tranquility, calmness, and sadness. The overall atmosphere carries a subtle warmth, offering a contrasting element to the melancholic tone. As visitors step into this ethereal world, they are transported to an abandoned rounded habitat area that has been ravaged by a catastrophic event, such as a massive flood. The remnants of this disaster become evident through the presence of a lonely door, standing as a solemn reminder of what was once a bustling and thriving community.
The door itself exudes a sense of desolation, its weathered appearance telling tales of a bygone era. It bears the scars of time and the forceful impact of the flood that swept through the area. Its wooden frame is cracked, and the paint has peeled away, revealing layers of history and vulnerability. The door’s forlorn state signifies the passage of time and the abandonment of what was once a vibrant habitat.
Behind the door lies a poignant symbol of the disaster and a call for environmental protection—a whale’s bone. Resting gracefully on the ground, the bone acts as a haunting testament to the power and fragility of nature. Its sheer size and presence remind visitors of the impact humans have on the environment and the need for conservation and stewardship. The juxtaposition of the lonely door and the whale’s bone creates a sense of melancholy and reflection, urging viewers to contemplate the consequences of our actions and the importance of preserving our natural world.
Despite the underlying sadness of the environment, a feeling of warmth permeates the air. Soft lighting casts a gentle glow, hinting at the hope that remains even amidst tragedy. The warmth is represented through warm-toned lighting fixtures scattered throughout the area, casting a soft golden glow that contrasts with the cool blue hues. This combination creates a harmonious balance between melancholy and solace, inviting visitors to embrace the emotions evoked by the scene and find solace within the poignant narrative.
In summary, the artistic environment I envision is a melancholic yet warm space, characterized by shades of blue and a sense of profound sadness. The abandoned rounded habitat area houses a weathered and cracked door, serving as a reminder of a once-thriving community that fell victim to a devastating flood. Behind the door lies a whale’s bone, symbolizing the need for environmental protection. Despite the melancholic atmosphere, the presence of warm lighting offers a glimmer of hope, encouraging visitors to reflect on the consequences of our actions and find solace amidst the beauty of the scene.
The process of using Maya to model the signature piece of the whole painting scene was not as difficult as I first thought; it was quite straightforward, since I have experience cutting faces and objects into different shapes. Another core tool I used is the bend tool, a nonlinear deformation of the selected mesh; with it I can freely create smooth, curved formations on models, and I used it to make the cage round and to bend the tops of the columns.
Maya, a powerful 3D modeling software, can bring the specific environmental design I envisioned to life with its extensive capabilities. Using Maya, I can meticulously craft the details of the abandoned rounded habitat area, capturing the weathered texture of the broken door and the subtle nuances of the surrounding environment. With Maya’s versatile modeling tools, I can recreate the cracked wooden frame of the door, showcasing its aged appearance and conveying the passage of time. The software’s rendering features allow me to experiment with various lighting setups, ensuring the warm and melancholic atmosphere is precisely captured. Additionally, Maya provides me with the flexibility to sculpt the whale’s bone, enabling me to create intricate details that reflect its significance and emphasize the importance of environmental protection. With Maya’s comprehensive toolset, I can seamlessly merge artistry and technicality to bring this evocative environment to fruition.
It was quite hard to match the size of the models to the height settings of the XR Origin and to the scales of the other models at the start of the modelling and environment-building process. As a beginner in modelling buildings, with no scale references given, only several design pictures, it was hard to tell exact heights and widths while also imagining how to build up the base shape under the design sketch. For this specific environment, with a round water plane at the centre, a birdcage-like construction around it, and half-circle-shaped buildings aligned alongside,
imagining the depth of field and making it match the height of each column was extremely time-consuming. I tried my best, but in reality the whole garden ended up somewhat smaller than I had aimed for, and the most obvious part of this scene (the broken construction in the middle) ended up like a shortened version of the one shown in the reference picture. Because I realised this very late, I had no option to change anything, as all the other scaling settings had already been done according to it.
The last model I created for this painting scene is the entrance to the whole park. To match the Baroque style, I chose to make a church window as the fence around the main entrance door, which gives a strange-but-not-jarring visual feeling to this area as the "spawn place" where the player first enters. The modelling process didn't take too long, as I was by then familiar with all the tools in Maya and had a rough design draft in my mind; the core was to use the nonlinear bend tool to reshape all the meshes at once.
Environment
The next step after modelling and matching is to put all of my models together to build up the complete scene from our imaginations. This was not difficult, but plenty of work was needed to find a specific material for each model and surface; not all of them had one at first, so I sometimes had to search the Internet for suitable normal-mapped, realistic materials and put effort into adjusting colours. I adjusted the colour, normal details, and lighting settings in Unity's Universal Render Pipeline (URP) to create a more realistic appearance for the 3D models in my scene. To achieve this, I focused on three key elements: the water shader, the glass material, and the circular Baroque-style buildings.
I fine-tuned the colour properties of the models. I experimented with different colour schemes and adjusted the saturation, brightness, and contrast to evoke a natural and lifelike feel. By carefully selecting appropriate colour palettes for each object, I ensured they harmonized with the overall scene. Next, I paid attention to the normal details of the models. By modifying the normal maps and adjusting the intensity of the surface details, I enhanced the perception of depth and texture. This added a sense of realism to the objects, making them appear more tangible and believable within the environment.
In terms of lighting, I meticulously adjusted the light sources and their properties. I considered the position, intensity, and colour of the lights to create the desired atmosphere. Soft, warm lighting was used to simulate natural sunlight, while subtle shadows were incorporated to add depth and dimension. By carefully balancing the lighting setup, I aimed to recreate the interplay of light and shadow observed in real-world environments. Specific attention was given to the water shader and glass material: I applied appropriate shaders and adjusted their properties to achieve realistic reflections and refractions. The water shader was configured to simulate the movement and distortion of waves, while the glass material was made transparent and refractive to mimic the appearance of real glass surfaces.
Throughout this process, I relied on my artistic judgment and knowledge of real-world materials and lighting. By carefully adjusting the colour, normal details, and lighting settings, I was able to create a scene in Unity URP that portrayed the 3D models as lifelike and convincing, bringing a sense of realism to the virtual environment.
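These adjustments were all made in the Inspector, but the same properties can also be set from script. A small sketch of the equivalent, assuming the materials use URP's standard Lit shader (whose properties include _BaseColor, _Smoothness, and _BumpScale); the values are illustrative:
using UnityEngine;

public class MaterialTweak : MonoBehaviour
{
    public Material WallMaterial; // a material using URP's Lit shader

    void Start()
    {
        WallMaterial.SetColor("_BaseColor", new Color(0.35f, 0.45f, 0.6f)); // cool blue tint
        WallMaterial.SetFloat("_Smoothness", 0.25f); // rougher, weathered look
        WallMaterial.SetFloat("_BumpScale", 1.4f);   // stronger normal-map detail
    }
}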
Final result
Mechanics
Apart from making it beautiful, the interactions in this scene were not decided until the final render was out. Students from Sound Art gave me a wonderful idea to make it more interesting and playable after seeing this specific creation: adding sounds to the bones. This was not expected at the very start, and I believe our project gradually became more professional through these basic processes. For instance, an idea might change while a sound creator is trying to record something for a specific object, and they may suddenly turn up with something interesting and crazy that can only be found when you are actually making things; this is amazing, because we found lots of creative interactions by doing exactly that. The bone-bell sound popped into Maria and Ben's minds, which felt wonderfully fancy to me at the time, and the main mechanics goal gradually emerged: a password system on the bones, with playable music notes.
The keypad system, with code written by Herman, was hard for me to read at first; the only thing I could do was download the entire package, click on literally every game object and script, and try to understand it by breaking it into parts.
This image shows my process of trying to understand how those scripts work.
I tried to divide the system into two rough parts: the physical model and the scripts on it. The two main scripts are the "Keypad Manager" and "Button". The Button script is placed on each bone I want to make active, using OnTriggerEnter to recognise the input and send it to the manager, which checks whether the input is correct. The manager controls the events for a correct password (the whalebone dissolve) and leaves things unchanged if it is incorrect. To compare the actual input with the correct answer, a canvas text showing the entered numbers is read every frame against the answer stored as a string; and of course a reset function is called once the input is longer than the number of characters in the answer.
To adapt these functions to my own keypad, several changes had to be made as well.
public class Keypad : MonoBehaviour
{
// public Text Ans;
public TMP_Text Ans;
public string Answer;
public int Input;
public string K_State;
public DissolveChilds D_script;
public float WaitingTime;
public GameObject ItemToDestory1;
public GameObject ItemToDestory2;
public bool IsDestoried = false;
public AudioSource AudioSource;
public AudioClip WrongSound;
public void Start()
{
K_State = "Lock";
Input = 0;
Ans.text = ""; //reset to nothing
}
public void Number(int number)
{
if (K_State == "Lock")
{
Ans.text += number.ToString();
Input += 1;
}
}
public void Reset()
{
if (K_State == "Lock")
{
Input = 0;
Ans.text = ""; //reset to nothing
}
}
void Update()
{
if (K_State == "Lock")
{
if (Input > 4)
{
AudioSource.PlayOneShot(WrongSound);
Reset();
return;
}
if (Ans.text == Answer)
{
print("Correct");
K_State = "Unlock";
Ans.text = "" + K_State;
StartCoroutine(DissolveCoroutine());
Destroy(ItemToDestory2);
}
}
if (K_State == "Unlock")
{
D_script.Dissolve();
}
}
IEnumerator DissolveCoroutine()
{
//StartCoroutine(FadeOut());
//Print the time of when the function is first called.
Debug.Log("Started Coroutine at timestamp : " + Time.time);
//Yield on a YieldInstruction that waits for WaitingTime seconds.
yield return new WaitForSeconds(WaitingTime);
//After the wait, print the time again.
Debug.Log("Finished Coroutine at timestamp : " + Time.time);
//Destroy the chosen item
Destroy(ItemToDestory1);
IsDestoried = true;
}
}
namespace DissolveExample
{
public class DissolveChilds : MonoBehaviour
{
// Start is called before the first frame update
//List<Material> materials = new List<Material>();
public bool PingPong = false;
public Material DissolveM;
public float value;
public ParticleSystem PS;
public InputAction gripAction;
//AudioSource audioSource;
public AudioSource BoneAS;
public AudioClip BoneDisapearClip;
public int Playonce;
void Start()
{
//audioSource = GetComponent<AudioSource>();
PS.Stop();
}
private void Reset()
{
Start();
DissolveM.SetFloat("_Dissolve", 0);
}
// Update is called once per frame
public void Dissolve()
{
Playonce += 1;
//var value = Mathf.PingPong(Time.time * 0.05f, 1f);
//SetValue(value);
DissolveM.SetFloat("_Dissolve", value);
//AudioSource = this.GetComponent<AudioSource>();
//audioSource.Play();
value += 0.001f;
PS.Play();
}
void Update()
{
if (Playonce > 0 && Playonce < 10)
{
BoneAS.clip = BoneDisapearClip;
BoneAS.Play();
Playonce = 11;
}
}
}
public class Button : MonoBehaviour
{
public string TypeofButton;
public int Num; //my assigned number
public Keypad KP_script;
public void Input()
{
if (TypeofButton == "Number")
{
KP_script.Number(Num);
print("inputted");
}
}
void OnCollisionEnter(Collision collision)
{
print("touched");
if (collision.gameObject.tag == "Stick")
{
Input();
}
}
}
}
This is how it works: when the wooden stick tagged "Stick" collides with a whalebone, the collision inputs a specific number directly to the keypad manager. After several tries, once the number of inputs reaches a set amount (4 in this case), a Reset function is called to clear the input back to 0. If the input shown on the invisible text plate equals the correct answer, the dissolve system with its lighting and sound effects is activated, and a physical key hidden on top of the bone drops: the DissolveCoroutine counts the length of the disappearing effect and destroys the whalebone's physical mesh when it ends, so the key appears.
The script on the key and door that leads to the scene transition was easy to write: a door-open animation is called after the key triggers the door handle, and a scene transition is placed right after the door animation ends, by adding an animation event at the very last key.
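That key-trigger script itself is not listed in this post, so here is a minimal sketch of what it might look like; the "Key" tag and the "DoorOpen" animator parameter are assumed names, not necessarily the project's:
using UnityEngine;

public class DoorKeyTrigger : MonoBehaviour
{
    public Animator DoorAnim; // Animator on the door, with a "DoorOpen" bool (assumed name)

    private bool hasOpened = false;

    void OnTriggerEnter(Collider other)
    {
        // Only react to the key, and only once.
        if (!hasOpened && other.tag == "Key")
        {
            hasOpened = true;
            DoorAnim.SetBool("DoorOpen", true);
            // An animation event on the last keyframe then calls
            // SceneTransition3.OnDoorOpenAnimationEnd() to load the next scene.
        }
    }
}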
The "fade anim" is a completely white plane placed right in front of the camera; it fills the whole visible screen with white, creating the barrier for the scene-transition effect. The material applied should be lit and specular. The screen-fading animation is made by decreasing and increasing the material's transparency with keyframes.
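An alternative to keyframed transparency is driving the fade from a coroutine. A small sketch under the assumption that the plane's material supports transparency and exposes a main colour with an alpha channel:
using System.Collections;
using UnityEngine;

public class ScreenFade : MonoBehaviour
{
    public Renderer FadePlane;     // the white plane in front of the camera
    public float FadeDuration = 1f;

    // Call StartCoroutine(FadeIn()) just before loading the next scene.
    public IEnumerator FadeIn()
    {
        var mat = FadePlane.material;
        Color c = mat.color;
        for (float t = 0; t < FadeDuration; t += Time.deltaTime)
        {
            c.a = t / FadeDuration; // 0 -> 1: the screen turns white
            mat.color = c;
            yield return null;
        }
    }
}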
public class SceneTransition3 : MonoBehaviour
{
public Animator FadeAnim;
public AudioSource AudioSource;
public AudioClip DoorNoise;
public void OnSceneTransiting()
{
FadeAnim.SetBool("SceneChanging", true);
}
// Called by an animation event at the end of the door-open animation
public void OnDoorOpenAnimationEnd()
{
SceneManager.LoadScene("maintest");
}
public void OnDoorAnimationPlay()
{
AudioSource.PlayOneShot(DoorNoise);
}
}
DanceHall Scene
Design & Modeling
The dance hall matches the wood tones of a music box: the music box might play a basic version of the music, and a richer one becomes available in the actual space. The original idea of the world inside the music box was a typical middle-ages dancehall made with Baroque-style columns and lighting; here are the image references for this idea. The sound space should match the exact physical space of the dancehall, to set the ambience volume and reverb-zone settings, but that depends on the finished model, which wasn't easy for me at the time, so what I did for the sound art students was find plenty of reference pictures for them to imagine from.
The style was decided as a splendid musical dancehall, a realm where joyous melodies and graceful movements intertwine. As you step into this enchanting scene, your senses are immediately captivated by a symphony of colors, sounds, and motion. The dancehall is adorned with ornate Baroque-style columns that reach towards the heavens, their intricate designs reminiscent of a bygone era of grandeur and opulence. Soft, warm lighting casts a gentle glow, bathing the space in an inviting ambiance.
Amidst this breathtaking setting, you’ll find an array of whimsical nut toys joyously playing orchestral instruments. From delicate squirrels gracefully strumming violins to charismatic chipmunks skillfully tickling the ivories of grand pianos, the air is filled with a harmonious cacophony of music. Each nut toy is meticulously crafted with attention to detail, their expressions reflecting pure delight as they bring the instruments to life. As you explore the dancehall, the melodies seamlessly blend together, transporting you to a world where music transcends boundaries and touches the very depths of your soul. It’s a magical spectacle that sparks a sense of wonder and invites you to join in the rhythm and movement, creating an immersive experience that celebrates the beauty of music and dance.
Using Maya, I meticulously crafted a nut toy character that embodied a unique blend of creepiness, cuteness, and fanciness. I began by sculpting the character’s body, giving it a round, nut-like shape with subtle textures and details. With a mischievous grin and large, expressive eyes, the nut toy exuded a sense of intrigue and playfulness.
To add an eerie touch, I incorporated subtle details like cracks on the surface, giving the impression of a slightly damaged and aged toy. The nut toy was adorned with intricate, fanciful attire, including a dapper top hat and a frilly collar, creating a whimsical and theatrical vibe.
Animating the nut toy was an exciting endeavor. I brought it to life with a combination of quirky movements and graceful gestures. Its motions were carefully choreographed to complement the music and dancehall setting. The nut toy’s animations combined unsettling yet endearing actions, such as a wobbly walk, sudden jumps, and playful spins. This created an eerie yet captivating presence within the splendid dancehall, adding an unexpected element of surprise and whimsy to the overall experience.
I rigged the model with Mixamo's auto-rigger at first, importing the FBX file on its web page and letting it generate; but the completed FBX I got back was too complicated for this case, where I only needed several slight movements. So I redid it myself with the simple bones and skeleton provided by Maya's humanoid setup, which made the animation key settings much easier.
Environment
The entire model of the dance hall was found in the Unreal Engine package center, and I bought it on a Chinese website that resells it. As with the main attic, transporting UE5 models to Unity was quite harsh, since the two are completely different engines, and just finding the right way to place and light the scene could take me a whole day in Unreal Engine. But thanks to the "prefab" concept in both Unreal Engine and Unity, it was easier to get all the meshes placed.
In Unity URP, I harnessed the power of camera post-processing settings and carefully crafted lighting to elevate the splendor of my scene. I began by fine-tuning the camera’s post-processing effects, such as color grading and bloom. By adjusting the color grading, I enhanced the vibrancy and richness of the colors in the scene, making them more visually appealing and captivating. The bloom effect added a touch of ethereal beauty, creating a soft and radiant glow around the light sources, giving the scene an enchanting and dreamlike atmosphere.
To further enhance the scene’s splendor, I meticulously placed and adjusted the lights. I used a combination of point lights and spotlights strategically positioned to highlight key elements and create depth and dimension. Soft, warm lights were employed to mimic the gentle glow of ambient lighting, while carefully positioned spotlights accentuated important focal points, such as the dance floor or ornate decorations. The interplay of light and shadow added a sense of drama and elegance to the scene, amplifying its visual impact and evoking a feeling of grandeur and magnificence.
By leveraging the camera post-processing settings and fine-tuning the lighting, I transformed my scene into a truly splendid spectacle. The harmonious combination of vibrant colors, ethereal glow, and carefully crafted lighting created a visually captivating experience that immersed the viewers in a world of beauty and grandeur.
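Post-processing overrides such as bloom can also be adjusted from script through URP's Volume system. A sketch, assuming a Volume component whose profile already contains a Bloom override; the values are illustrative:
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class BloomTweak : MonoBehaviour
{
    public Volume SceneVolume; // the global Volume holding the post-processing profile

    void Start()
    {
        // Fetch the Bloom override from the profile, if present, and boost it.
        if (SceneVolume.profile.TryGet<Bloom>(out var bloom))
        {
            bloom.intensity.value = 1.5f; // stronger glow around the lights
            bloom.threshold.value = 0.9f; // only bright pixels bloom
        }
    }
}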
Mechanics
To add an element of playability to the dance hall scene, I came up with the idea of incorporating a music notes puzzle. I strategically placed several missing pieces of the puzzle throughout the dance hall, cleverly hidden amidst the grandeur and intricacies of the environment. These missing pieces would need to be discovered by the players, encouraging them to explore every nook and cranny of the scene while engaging their curiosity and sense of discovery.
Each missing piece of the puzzle would be shaped like a music note, and players would need to find and collect them to complete the puzzle. The challenge lay in locating these hidden pieces within the vast dance hall, as they were cleverly concealed in unexpected places. This interactive puzzle added an engaging and interactive aspect to the scene, encouraging players to actively participate and solve the musical mystery while enjoying the splendid surroundings of the dance hall.
The coding goal I had for this specific functionality was similar to the keypad puzzle, but with a slight difference. In this case, no input numbers were required to determine correctness. Instead, I needed to create empty places in the puzzle where objects could snap into position, accompanied by a snapping effect. The main challenge was detecting when an object had entered the designated area and triggering the appropriate response in the puzzle manager.
To tackle this challenge, I approached it in two steps. First, I created an area with the XR Socket Interactor component applied, which allowed the objects to snap into place when they entered the area. This provided the desired snapping effect and ensured the objects aligned correctly. The second step involved creating a separate area responsible for communicating with the puzzle manager. When an object entered this area, it would trigger a message to the manager, but crucially, it would only send the input once. To achieve this, I programmed the area to destroy itself after sending the message, preventing any further instructions from being processed.
By implementing these two distinct areas—one for the snapping effect and another for triggering the puzzle manager—I successfully solved the challenge of ensuring the correct interaction and input handling in the puzzle. This allowed the player to place the objects in their designated spots, triggering the necessary actions without the risk of duplicate or unnecessary inputs.
public class PuzzleSlot : MonoBehaviour
{
public PuzzleManager PM_script;
public string PuzzleTag;
public bool isFilled = false;
private Collider TriggerCollider;
public GameObject ItemToDestory;
public AudioSource AudioSource;
public AudioClip SnapSound;
private void Start()
{
TriggerCollider = GetComponent<Collider>();
}
private void OnTriggerEnter(Collider other)
{
if(other.tag == PuzzleTag)
{
print("Applied");
isFilled = true;
PM_script.AmountofPuz += 1;
Destroy(ItemToDestory);
AudioSource.PlayOneShot(SnapSound);
}
}
void Update()
{
if(isFilled == true)
{
TriggerCollider.enabled = false;
}
}
}
public class PuzzleManager : MonoBehaviour
{
public int AmountofPuz; //track total amount of puzzle
public bool P_State = false;
public GameObject ExitDoor;
public GameObject ItemToDestroy;
public AudioSource AudioSource;
void Start()
{
AmountofPuz = 0;
ExitDoor.SetActive(false);
}
void Update()
{
if (AmountofPuz == 3 && !P_State) // gate so the unlock fires only once
{
P_State = true;
print("unlock");
ExitDoor.SetActive(true);
AudioSource.Play();
Destroy(ItemToDestroy);
//FadeAnim.SetBool("Scenechanging", true);
//SceneManager.LoadScene("maintest");
}
}
}
Whether a puzzle piece is applied to its own place is determined by adding interaction layers to each correct puzzle piece and its own snapping area; this way the player cannot snap the wrong puzzle, and the invisible area set up to give input to the manager will therefore never send it a wrong message.
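A sketch of that layer matching, assuming the XR Interaction Toolkit (2.x) and a hypothetical interaction layer named "NoteA" defined in the project settings:
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

public class NoteSocketSetup : MonoBehaviour
{
    public XRSocketInteractor Socket;    // the snap area for one specific note
    public XRGrabInteractable NotePiece; // the matching wooden note

    void Start()
    {
        // Give the socket and the piece the same interaction layer, so only
        // this piece can be snapped here; other pieces are ignored.
        var layer = InteractionLayerMask.GetMask("NoteA"); // assumed layer name
        Socket.interactionLayers = layer;
        NotePiece.interactionLayers = layer;
    }
}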
The IEnumerator coroutine is used as an automatic timer for a specific object through the script attached to it: by writing this code, the Unity engine counts seconds against timestamps and then activates something. In this case, the event to trigger is the Animator's "ToyWalk" bool, meaning the toy walking along the path plays after "timeBeforeStart" has elapsed.
The same pattern was used in every one of my scenes for different purposes, such as setting the time before the whalebone is destroyed or the time before the cat finishes its introduction animation.
public class ToyWalk : MonoBehaviour
{
//public bool ActivateMove;
[SerializeField] float timeBeforeStart;
public Animator ToyAnim;
private void Start()
{
ToyAnim.SetBool("ToyWalk", false);
StartCoroutine(MyCoroutine());
}
IEnumerator MyCoroutine()
{
//Print the time of when the function is first called.
Debug.Log("Started Coroutine at timestamp : " + Time.time);
//Yield on a YieldInstruction that waits for timeBeforeStart seconds.
yield return new WaitForSeconds(timeBeforeStart);
//After the wait, print the time again.
Debug.Log("Finished Coroutine at timestamp : " + Time.time);
ToyAnim.SetBool("ToyWalk", true);
//ActivateMove = true;
}
}
Story & World
public class CatWelcome : MonoBehaviour
{
public Animator CatWAnim;
public Animator CatDisplace;
// Start is called before the first frame update
void Start()
{
}
private void OnTriggerEnter(Collider other)
{
if(other.tag == "Player")
{
print("isTriggered");
CatWAnim.SetBool("RunDoor", true);
CatDisplace.SetBool("DisCat", true);
}
}
// Update is called once per frame
void Update()
{
}
}
In this scene where everything starts, we designed a returnable place the player can come back to from the other scenes. The main mechanic needed to achieve this is the "World Manager", aka "Level Manager". This manager is the "remembering system" placed in the very first scene under the condition "don't destroy me" (DontDestroyOnLoad). The reason it must not be destroyed is that the Unity engine loads every scene listed in the build settings from its own beginning, as a brand-new level; this means that every time the player travels back from the other scenes, the main scene would be exactly what it was at the start, with none of the player's level or messages saved. So we have to put in a big manager that can read, or be read by, the scene every time a scene is loaded, so that messages and level information are saved and remembered.
public class WorldManager : MonoBehaviour
{
public string LevelState;
public SceneTransition ST_Script;
public Musicbox_Animation MA_Script; // MusicboxOpening script
public Follow Cat_Script;
public GameObject TriggerArea1;
public GameObject MusicboxOpening;
public GameObject cat_model;
public GameObject BookSlot;
public bool onetime;
public int LevelPlayed;
void Awake()
{
LevelPlayed = 0;
SceneManager.LoadScene("Welcome");
DontDestroyOnLoad(this.gameObject);
}
// Update is called once per frame
void Update()
{
if (LevelState == "PT")
{
TriggerArea1 = null;
ST_Script = null;
}
if(LevelState == "MB")
{
MusicboxOpening = null;
MA_Script = null;
}
}
}
The World Manager script starts with simple public fields that reference every script in each scene we want to communicate with the manager and pass messages to. There are two ways of doing this: let the manager script do everything we want on the different levels, or tell the related scripts to read the manager themselves and run their conditions or events on their own. In my situation, the first method didn't seem workable, as the Unity engine had trouble reaching into scene scripts from the non-destroyable object, so I had only one option left: to let the scene scripts read the manager's orders.
In the overall experience, I strategically placed a book towards the end to serve as a guide for the player. Upon entering one of the worlds and successfully completing it, the player would encounter a sign with the words “put puzzle here.” This sign was designed to draw their attention to a specific interactable object where they could place the wooden puzzle they had acquired during their journey. It served as a visual cue for the player to recognize that they needed to place the puzzle in its original location.
To add a sense of challenge and condition for exiting the experience, I implemented a script to check if the player had applied the right puzzle to the right place. The script would verify if the player had correctly placed more than two puzzles in their respective locations. This condition ensured that the player had engaged with the puzzles throughout the experience and had successfully solved them. Only when this condition was fulfilled would the player be able to exit the experience and progress to the next stage.
By incorporating the book as a visual guide and implementing the script to check the placement of the puzzles, I created a clear objective for the player and added a layer of complexity to the overall puzzle-solving experience. This encouraged the player to pay attention to details, complete multiple puzzles, and ultimately unlock the path towards advancing in the game.
public class PieceSlot : MonoBehaviour
{
public GameObject ExitHint;
public GameObject EndingText;
public int PieceNum;
public AudioSource BookAS;
public AudioClip EndingM;
public WorldManager WM;
// Start is called before the first frame update
void Start()
{
WM = GameObject.Find("WM").GetComponent<WorldManager>();
WM.BookSlot = transform.gameObject;
if(WM.LevelState == "MB" || WM.LevelPlayed == 2 || WM.LevelState == "PT")
{
ExitHint.SetActive(true);
}
}
//Update is called once per frame
void Update()
{
if(PieceNum == 2)
{
PieceNum = 3; // advance the state so the ending fires only once
EndingText.SetActive(true);
BookAS.PlayOneShot(EndingM);
}
}
}
public class PieceSnap : MonoBehaviour
{
public bool isSnapping = false;
public GameObject Piece;
public PieceSlot Slot_Scipt;
// Start is called before the first frame update
void Start()
{
isSnapping = false;
}
void OnTriggerEnter(Collider other)
{
if(other.tag == "WhalePiece")
{
isSnapping = true;
Slot_Scipt.PieceNum += 1;
Destroy(this.gameObject);
}
}
// Update is called once per frame
//void Update()
//{
//}
}
The same approach as the music puzzle: there is a snap area and a detection area, and the ending event happens once all the conditions are fulfilled.
Final presentation
Pitching!
Step into a world where imagination knows no bounds and dreams come alive in a stunning virtual reality experience. In this captivating adventure, players will be transported to an attic filled with immersive objects that hold the keys to surreal dreamscapes, waiting to be unlocked.
Imagine standing in the centre of a room surrounded by a diverse array of fascinating objects—a breathtaking painting that seems to pulsate with life, an ancient book whispering forgotten tales, and a mysterious computer glowing with untold secrets. Each object holds a portal to a unique and mesmerising dream world, where reality bends and dreams become tangible. As the player explores, they will have the freedom to choose any object that captivates their curiosity. Once they make their selection, they will be instantly transported, diving headfirst into an awe-inspiring, surreal dreamscape crafted to their chosen object’s essence.
The attic will gradually transform as you immerse yourself in the dreamscapes. The once unfamiliar space becomes a haven, a sanctuary where you can return between your adventures, forming a deeper connection with your surroundings. It becomes a place of solace, a true home within the virtual realm.
The beauty of this experience lies not only in its captivating visuals and immersive environments but also in its ability to evoke profound emotional responses. From the wonder and awe of exploring extraordinary landscapes to the contemplation of life’s mysteries, players will be deeply engaged and connected with each dream world they encounter.
By fusing cutting-edge virtual reality technology with surreal dreamscapes, we open a gateway to an entirely new form of experiential storytelling. It transports players beyond the confines of reality, offering an escape into worlds limited only by their imagination.
Prepare to embark on a transformative journey, where dreams manifest as reality, and the boundaries of your imagination blur. We invite you to experience the magic, wonder, and limitless possibilities of the human mind. Are you ready to explore the surreal depths of your dreams?
Critical Reflection
Introduction:
In the realm of virtual reality (VR), the game designer’s role is to transport players into immersive and captivating experiences. As a college student specializing in VR game design, I had the opportunity to work on a project called “Whimsy Attic.” This critical reflection delves into my step-by-step process, overcoming coding and modeling challenges, and the importance of effective collaboration with both my fellow game design partner and students from the BA Sound Art course.
Crafting Immersive Dreamscapes:
The first step in creating “Dreamscapes” involved meticulously designing the virtual environments that would serve as portals to surreal dream worlds. Drawing inspiration from various sources such as paintings, books, and mysterious objects, I aimed to evoke a sense of wonder and captivation. Through careful selection of color palettes, attention to detail in modeling objects, and realistic lighting, I sought to imbue the dreamscapes with a tangible sense of realism. By utilizing my artistic judgment and knowledge of real-world materials, we aimed to blur the boundaries between reality and imagination.
Overcoming Technical Challenges:
In any game development project, coding and modeling hurdles are bound to arise. As a college student, I faced my fair share of challenges. One obstacle I encountered was the creation of the “World Manager” or “Level Manager.” This component was crucial in ensuring seamless transitions between scenes and saving player progress. Initially, I attempted to develop a single manager that could handle all required tasks. However, due to Unity Engine limitations, I had to explore an alternative approach. By allowing individual scripts to read the manager’s orders and execute the necessary conditions or events, I managed to overcome this obstacle. Beyond this, the scripts all followed the same logic, cooperating through OnTriggerEnter with various actions but circling around three main functional patterns: the puzzle manager, the keypad manager, and the sound-playing triggers.
Collaboration: Bridging Design and Sound Art:
Collaboration played a vital role in bringing “Dreamscapes” to life. Communication with my game design partner was key to aligning our visions and merging our expertise effectively. The in-VR design communication was actually harder than the cross-course collaboration with the sound students; it was not easy to find an effective way to optimize the VR experience and enhance the narrative journey. But luckily, collaboration with students from the BA Sound Art course added another layer of depth to the project. By understanding the physical space of the dance hall and its corresponding ambience volume and reverb-zone settings, and not only imagining the breathtaking environment inside the painting, we could synchronize the sound design with the virtual environment, elevating the immersive experience to new heights.
Effective Communication and Problem-Solving:
Throughout the development process, effective communication was crucial in overcoming obstacles and fostering a collaborative environment. Regular meetings with my sound art partners allowed us to address challenges, share progress, and provide valuable feedback to improve each other’s work. Additionally, maintaining open lines of communication with each other enabled us to align our visions, synchronize sound elements, and ensure a cohesive and immersive audiovisual experience. The team-working format of distributing the art-creation sections into roles helped us understand effectively what we each needed to do in future work and, most importantly, avoided the potential controversy of imbalanced task distribution.
Conclusion:
Designing “Whimsy Attic” as a college student VR game designer was a transformative journey that required a multidisciplinary approach. By meticulously crafting immersive dreamscapes, overcoming technical challenges through problem-solving, and fostering effective collaboration with both my game design partner and students from the BA Sound Art course, I was able to create an experience that transcended the boundaries of reality and imagination. This project not only honed my technical skills but also emphasized the importance of communication, adaptability, and collaboration in the realm of virtual reality game design. As I embark on future endeavors, I will carry the valuable lessons learned from “Dreamscapes” and apply them to create even more captivating and immersive experiences, pushing the boundaries of virtual reality and storytelling. The limitless possibilities of the human mind await, and I am ready to continue exploring the surreal depths of dreams through the transformative power of VR.
This will be an art game in which players experience life with a cat, offering feelings of warmth, loveliness, and healing. The shining points of this game are the lovely cat and the beautiful environment settlements, with chill music and stress-releasing sounds.
The initial positioning of this project is a pet-interaction experience game that can heal people's hearts, so during conception, before the plan was implemented, I decided not to add a checkpoint mechanism or player-level mechanism to this game. Instead, I worked on scene production and aesthetic conception, with more designs. In the eyes of the game designer himself, the satisfaction of the player's visual experience after entering the game is the first goal to achieve. Following the meaning of the game and its expression of emotion, a colour palette biased towards pink and full of girlishness became the base colour, and this also determined a modelling style that is cute and can capture the player's heart. The models can be unrealistic; that unreality reflects the warmth of the game, so the modelling style is low-pixel, low-poly, with multiple rounded corners.
Playing with cats
Like all mammals, cats play when they are young and continue to play even as adults. Play is a complex learning activity that helps kittens develop social bonds and hone physical and mental skills; besides, cats simply enjoy playing, which is why adult cats remain keen to play. Watching cats play is one of the most enjoyable pastimes for cat owners, and you can spend many happy hours doing it. There are three forms of play in cats, although it can sometimes be difficult to tell them apart.
Cats that are kept indoors a portion of the time are less likely to be relinquished than cats that are kept outdoors (Patronek et al., 1996). In an effort to increase life spans, minimize predation by cats on birds, and decrease shelter relinquishment, animal professionals advise that domestic cats be kept indoors (AVMA’s Animal Welfare Position, 2014 and Humane Society of the United States Safe Cats Program, 2005). It has been reported that 35% of owners keep their cats indoors all the time (American Bird Conservancy Cats Indoors!, 1997), 56% keep their cats inside at least part of the day (American Pet Products Manufacturers Association 2003-2004 National Pet Owners Survey), and 2 of every 3 veterinarians encouraged owners to keep their cats indoors (Humane Society of the United States, 2005). Although disease risks have been evaluated for indoor cats (Buffington, 2002), evaluation of the behavioral wellness of an indoor cat has not been adequately addressed. In comparison with the outdoor environment, an indoor environment is often predictable and unchanging, which may result in stress and inactivity in the indoor cat (Rochlitz, 2005). An important element in the assessment of behavioral wellness of indoor cats is information on what owners currently provide as enrichment for their indoor cats and an evaluation of its benefit.
As the area of enrichment for indoor domestic cats continues to be explored, this assessment of toys and activities used by owners in the home provides information regarding several aspects of the environment of the indoor domestic cat in this sample population. The results obtained indicate that many cat owners may not know what toys and activities are available for enrichment of feline environments and may not know that advice and assistance is available for behavioral issues from their veterinarians.
Strickler, B. L., & Shull, E. A. (2014). An owner survey of toys, activities, and behavior problems in indoor cats. Journal of Veterinary Behavior, 9(5), 207-214. https://doi.org/10.1016/j.jveb.2014.06.005
Picking, hitting and tossing small objects is how kittens learn to deal with prey. Through such play they develop the survival skills they would need to fend for themselves. You may see your kitten stomping on toys, turning them over and circling around them as soon as they hit the ground, a behaviour that mimics the natural instinct of subduing prey during a hunt. Object play teaches a cat how to perceive the world and the things in it, telling it what is animate and what is inanimate. It might jump back from a toy as if startled by a poisonous ray emanating from it, and bask in the joy of discovering something new.
A simple basic theory of play follows from this: cats kept indoors especially need company and play. However, once the owner who plays with the cat in daily life goes out or is not at home, the cat’s sense of security inevitably declines. For a human living in modern society, work and social interaction are undoubtedly essential to livelihood and life, so it can be concluded that the theme of this game is the fact that cats feel lonely and need companionship.
Game Style
Pictures based on healing and cuteness were undoubtedly an important source of inspiration before I started production. This inspiration feeds not only model making and environment design but also the ideas behind them.
These are the images I found online that helped me form the whole image of my game and start the basic design. All of them are creamy, rounded and simple, made with 2D-styled modelling techniques; but in a 3D game it would be quite awkward for VR players to see everything around them in 2D, because of the common discomfort of motion sickness. So they provide ideas, not direct applications.
Existing references
Following the main concept decided above, I researched existing games based on player-cat interaction or cat-behaviour logic. Here are the references:
A game for pure cat people
A seek-and-hide-based cat game
The most popular cat game currently
These games are all cute or experiential games for cat lovers. The first example has strong ornamental value: players mainly experience the cuteness of cats through rich pictures, paintings and some simple animation work. The second centres on interaction, with the player searching for cats hidden in complex buildings. The third turns the player into a stray cat, experiencing the secrets and emotions of a cyber city that cannot be observed from a human perspective.
Conclusion
Based on all the research and thinking above, a preliminary model of the game formed in my mind. It will be an interactive game with a cat as the main body, in which the player adapts to the cat. The cat in the game will be aloof but still cute, waiting for the correct feedback from the player before giving a corresponding reply. Throughout the game, the player needs to try to guess the cat’s psychological activity and assimilate with the cat’s emotions.
Why VR?
As concluded in the previous section, this game based on the cat’s emotional expression will not demand too strong a sense of presence from the player, and it does not require much thinking in the virtual world, which reduces the influence of motion sickness. Under the current technical performance of virtual reality and the range of public acceptance, it is well suited to players who dip in and out of VR games. That is its special feature and the biggest advantage of a concept built on relaxation and environmental experience. Considering that most people will not easily adapt to a virtual reality environment, a simple game that does not require too much thinking can arouse players’ interest, and polishing it as much as possible also makes it a great tool for promoting VR games to the public. Compared with the discomfort caused by moving in a virtual world, a game that puts more effort into environmental design appears friendlier.
Therefore, the sense of experience becomes particularly important, and the most decisive factor is the design and rendering of the modelling. A modelling style and system is required that is not too close to reality, so as to reflect the cuteness and bring the player into the cat’s emotions. At the same time, only the virtual world offers the possibility of coming close to a real experience: compared with a 2D screen, the visual impact of a 360-degree surround is what I need from a game platform. If the game were only displayed in a flat dimension, it would greatly limit my main purpose of expressing the cat’s emotions.
There is nothing cute and loving that touches your heart more than what is real and close to you
Ronger Huang
Design & Prototyping
Apartment structure research & design.
A loft is a building’s upper storey or elevated area in a room directly under the roof (American usage), or just an attic: a storage space under the roof usually accessed by a ladder (primarily British usage). A loft apartment refers to a large adaptable open space, often converted for residential use (a converted loft) from some other use, often light industrial. Adding to the confusion, some converted lofts include upper open loft areas.
In British usage, lofts are usually just a roof space accessed via a hatch and loft ladder, while attics tend to be rooms immediately under the roof accessed via a staircase. Lofts may have a specific purpose, e.g. an “organ loft” in a church, or to sleep in (sleeping loft). In barns a hayloft is often larger than the ground floor as it would contain a year’s worth of hay.
*Modelling process & outcome*
The project started with modelling a simple two-storey apartment. Several pink, clean colours were chosen to represent the cute style of the game, applied to both the walls and the furniture.
The preliminary construction of this loft-style apartment strictly refers to the basic settings and interior design of modern flats to achieve a standard close to reality. At the same time, the low poly 3D model reflects the sense of the game and a cute and lively design style emerges.
The cat’s model was taken from the sketchfab.com website with animation applied; the interior-design models were partially taken from free model websites and partially made by my own hand.
Colour used
Prototyping
*Function follows Form*
The design of my room space basically follows a hand-drawn draft of a popular loft apartment layout. In modern housing, this kind of space is used by many young property owners to create their dream home, and in response to this trend I designed the space with the feeling of a modern, dreamy, warm and lovely home.
Some details were changed during the modelling process so that the game reads clearly as a cat-interaction simulator: an over-decorated apartment might distract the player from the cat’s movement, so I tried hard to keep it as simple as possible, with low-poly furniture and very few colours applied. In this way the functions basically follow the forms.
Game logic & Storytelling
After completing all the modelling and layout, I tried to present the running logic of the game with a simple mind map. A hand-drawn mind map lets the author think and reconsider as much as possible while writing; at the same time, it clarifies the order of events and how to design the plot in a way that is easy for players to understand yet leaves a deep impression. It also records the thinking process and the state of writing at that time: a primitive, simple and rough expression, yet full of delicate thoughts and soft brushstrokes.
The pictures above are essentially theoretical presentations, strongly logical and in textual form. For the storytelling, I again used hand-painting to draw out the cat image and behaviour that had initially formed in my mind. As a cat lover with many years of experience in raising cats, I have gradually learned to understand cats’ psychological activities by observing their living habits over the years we have spent together; this is the starting point and source of inspiration for the entire project. In order to closely bind the cat’s psychological feelings to the player, I designed a complete storyline.
In narrating the story, the behaviour of the cats is undoubtedly an important factor in the authenticity of the game. To come closer to the behaviour of real cats, I specifically referred to the usual behaviours of the two kittens raised in my own home. One is two years old and the other only one year old, and their breeds differ: one is a normal-sized blue-and-white British Shorthair, the other a Munchkin short-legged orange-and-white cat. Their personalities are also different, which improves the variety and authenticity of the reference.
The videos and pictures above are records of my daily interactions with my cats; the source of my inspiration and my creative foundation comes from these two cats. Through interacting and communicating with them, I have come to understand that emotional communication between humans and animals is simple, crude and largely based on food, and cats are often more straightforward about food than about their interest in people. Yet in the process of living together day and night, I can also feel their anxiety and loneliness and feel sorry for them. So the emotion I want to spread through this game is this unique and charming relationship between people and cats. I hope that through this game, people with no experience of raising cats can experience the fun of interacting with cats, and that people who do have that experience will pay more attention to the feelings of their cats.
Movement design
This game will be an experiential game with animation as its main form, so the difficulty settings and the interaction design do not need to be too complicated.
Following the route designed for the cat to run, the running action should correctly follow the drawn lines, using Unity’s character-movement functions. The movement should also be synchronised with the cat’s moving speed and the length of its footsteps.
And following the tutorials, I needed to create cubes at the right positions, corresponding to the picture I drew, for each path separately.
To move the cat from one point to another, I update several position values in Update() to make a smooth movement. During the research process I also found several alternative methods for moving an object (a character or the cat). One produces rather unnatural movement but needs only simple code: editing the position values of the GameObject directly. Another does the move through the Animator instead, using several keyframes at different positions; but since I am already doing a lot of animation on the cat itself, that last method would not be the best choice.
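For illustration, a minimal sketch of that simplest approach, stepping the position directly in Update(); the names SimpleMove, target and moveSpeed are my own placeholders, not the project’s:

```csharp
using UnityEngine;

// Simplest approach: nudge the object towards a target every frame.
// "target" and "moveSpeed" are illustrative names, not the project's.
public class SimpleMove : MonoBehaviour
{
    public Transform target;        // where the cat should walk to
    public float moveSpeed = 1.5f;  // metres per second

    void Update()
    {
        // MoveTowards clamps the step, so the cat never overshoots;
        // scaling by Time.deltaTime keeps the speed frame-rate independent.
        transform.position = Vector3.MoveTowards(
            transform.position, target.position, moveSpeed * Time.deltaTime);
    }
}
```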
Although the code used in the processes above is quite simple and easy to apply, making the cat’s moving path more complicated, and so closer to reality, called for something better; a well-coded sample found in the Unity demos suits this case. The original finder of this piece of code wrote: ‘This is coded and presented by the Unity official website in their coding demos.’ However, most of the scripts provided officially by Unity are not friendly to beginners new to C#, so the blogger shared the most important part and posted it for easy use.
This code lets me control the cat’s moving path through a Waypoints script and a Follow script, with several GameObjects (cubes) marking the separate points of the path design.
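Below is my own hedged reconstruction of how such a waypoint follower can be arranged; it is not the Unity demo code itself, and all names are illustrative:

```csharp
using UnityEngine;

// Reconstruction of the Waypoints/Follow idea: invisible cubes mark the
// path, and the follower walks from one to the next in order.
// All names here are my own placeholders.
public class WaypointFollower : MonoBehaviour
{
    public Transform[] waypoints;   // the invisible cubes, in path order
    public float speed = 1.5f;
    int current = 0;                // index of the cube we are heading for

    void Update()
    {
        if (current >= waypoints.Length) return;  // path finished

        Vector3 target = waypoints[current].position;
        transform.position = Vector3.MoveTowards(
            transform.position, target, speed * Time.deltaTime);
        transform.LookAt(target);   // face the direction of travel

        // Close enough to this cube? Head for the next one.
        if (Vector3.Distance(transform.position, target) < 0.05f)
            current++;
    }
}
```

Because the path is just an ordered array of Transforms, making the route denser or more irregular only means dropping in more cubes, which is exactly the flexibility described below.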
With this script, the complexity of my game can also grow, because it lets me add as many invisible cubes as needed to represent a dense movement route. Compared with the limitations of the first, hand-written method, this function can give the cat, the main body of the game, more flexible and vivid movement, closer to real cat behaviour, and so improve the player’s experience. Of course, a complex and flexible movement route also brings corresponding difficulties in the later production stages.
Cat’s Animation
As the mind map shows, the cat’s animation needs five different statuses that change with the player’s actions —- “Stay”, “Run”, “Play”, “Eat” and “Run fast”. I had planned to make these animations in Maya, but my Autodesk account no longer works. I therefore found a cat animation package online with movements such as ‘smelling’, ‘digging’, ‘jumping’, ‘running’, ‘eating’, ‘idel’, ‘punch’ and so on. A careful preview of these animations was needed, since some of them do not suit certain occasions, so I spent several hours reviewing and adjusting the details for further use. The changes I made were mostly in the Animation window, adjusting the rigged body to make the movements look smooth and to make the transitions easier to manage.
As shown in the diagram above, the yellow sections are preset animations following the game’s logic, so the transitions between each movement need to be declared and adjusted to proper speeds and timelines. The basic interaction process is —- the cat runs and stops in front of the cat pole — poles — jumps as the right ball is attached — punches at the ball — runs away.
After collecting and integrating this series of cat behaviour animations, and following my design concept and the description of the cat’s movement trajectory, the cat should stop moving at the climbing frame and interact correctly with the player. Code that can later resume the movement is therefore needed.
Analysing the underlying logic, the two requirements, “stop moving at the designated place” and “resume movement after the corresponding item touches”, can both be handled by the invisible cubes already used to mark the cat’s moving track; they are a good tool for putting the idea into practice. By assigning tags to the different marker cubes, they can be referenced in code as the special places the cat will pass.
Take “Point B” as an example. When the cat passes through Point B, I want it to stop following the route, stop the animation called “run”, and start the animation called “pole”; Point B sits in front of the cat climbing frame, so the cat’s “pole” action also interacts with that object in the environment. The logic “when the cat triggers the point, stop the movement” is therefore built on OnTriggerEnter and OnTriggerExit, with the cube as the place where the collision happens. For the animation change, SetBool is used to set the conditions that control which of the cat’s animations plays: if the bool is true, the corresponding animation plays afterwards, defined by animation transitions and adjusted in speed and length. If I want the cat activated only by a specific object, an additional condition is added to the if() statement with && and other.tag, and giving that object a unique tag completes the condition. A “ResetEvent” bool is also needed to reset the original running status, setting the corresponding SetBool back to false.
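A minimal sketch of that trigger logic, using the PointB_Stop and ResetEvent bools described above; the “Cat” tag and the class name are my assumptions:

```csharp
using UnityEngine;

// Sketch of the "Point B" marker cube: stop the run cycle when the cat
// enters, resume when it leaves. The tag "Cat" and the class name are my
// assumptions; "PointB_Stop" and "ResetEvent" are the Animator bools
// described in the text above.
public class PointBMarker : MonoBehaviour
{
    public Animator catAnimator;

    void OnTriggerEnter(Collider other)
    {
        // The && pattern mentioned above: react only to the tagged cat,
        // not to the player or any other object passing through.
        if (other.CompareTag("Cat"))
        {
            catAnimator.SetBool("PointB_Stop", true);  // run -> pole
            catAnimator.SetBool("ResetEvent", false);
        }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Cat"))
        {
            catAnimator.SetBool("PointB_Stop", false);
            catAnimator.SetBool("ResetEvent", true);   // back to run
        }
    }
}
```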
This first and most basic step, integrating the cat’s action animations, ends here; the next step is connecting the more expressive and more difficult animations. This step is particularly important, because if there is a gap between animations, or the transition speeds are not well matched, the cat’s movements will not be coherent and the authenticity of the experience will drop. It is a very difficult and important link.
This transition is tagged with the bool “PointB_Stop”, which lets the cat stop running and start poling. As shown in the picture, the “A_run” animation loops until the cat triggers Point B, and the “A_pole_loop” animation sits inside that loop, which requires correct timing of the entry action to avoid a stiff or fake-looking result; I repeated the work of finding the right position for the pole animation to blend in, and settled on this specific transition length in the end.
This picture shows the transition on the bool “ResetEvent”, which lets the cat continue to run. No animation clips are attached to it, which means the speed applied to this process (two transition arrows and one empty state) should be extremely fast to avoid a blank area in the animation, so I set the speed of “ResetEvent” to 15 and the transition times to less than 0.001.
Another challenge in animation is the movement representing the cat waiting for its owner to come home, which requires sitting and crying animations without any positional movement from the “Waypoint.cs” and “follow.cs” scripts. In this case, the cat’s animation loops between the two clips “B_idel” and “B_cry”; once the cat’s mesh collider is triggered by the Player tag, the bool “ResetEvent” activates and pushes the animation to the next stage. Since we had not been taught the use of Blend Trees in Unity animation, I could not build this inside a blend-tree system; creating simple transitions from each clip to the main clip of the next stage (“A_run”) was the best solution to make sure the animation changes no matter which clip is playing when the player triggers it.
Player interaction
Sound Play
Also, to improve the authenticity of the game, I added cat-sound feedback to the player interaction settings. I collected sound sources of cat calls in different moods, including “No patience”, “Want to play”, “Thank you”, “Angry” and other elements; these audio clips play under specific interaction results, such as the player choosing the wrong toy, the player choosing the correct toy, the player approaching, and so on.
catcry script
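The “catcry script” above is shown as a screenshot; as a rough, self-contained sketch of the same idea (the clip fields and the OnToyOffered hook are my own placeholders, not the real script’s API):

```csharp
using UnityEngine;

// Sketch of the cat-call feedback: play a different recorded call
// depending on the interaction result. The clip fields and the
// OnToyOffered hook are illustrative, not the real script's API.
public class CatCry : MonoBehaviour
{
    public AudioSource source;    // AudioSource component on the cat
    public AudioClip wantToPlay;  // "Want to play"
    public AudioClip noPatience;  // "No patience"
    public AudioClip thankYou;    // "Thank you"

    // Called by the toy-interaction logic elsewhere in the game.
    public void OnToyOffered(bool correctToy)
    {
        source.PlayOneShot(correctToy ? thankYou : noPatience);
    }

    void OnTriggerEnter(Collider other)
    {
        // The player approaching also earns a call.
        if (other.CompareTag("Player"))
            source.PlayOneShot(wantToPlay);
    }
}
```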
In my game, the player also has a simple interaction with the door of the apartment: after the player enters a certain range around the door, a pre-set door-opening animation plays; when the player moves out of that range, the animation plays in reverse to represent closing the door.
Trigger zone
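A simple sketch of such a trigger zone follows, assuming the door has an Animator with a bool parameter Open whose true transition plays the opening clip and whose false transition plays the closing motion (for example a reversed copy of the clip):

```csharp
using UnityEngine;

// Sketch of the door's trigger zone. I assume the door has an Animator
// with a bool parameter "Open": the true transition plays the opening
// clip, the false transition plays the closing motion.
public class DoorZone : MonoBehaviour
{
    public Animator doorAnimator;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            doorAnimator.SetBool("Open", true);   // door opens
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            doorAnimator.SetBool("Open", false);  // door closes
    }
}
```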
Putting this idea into practice following the examples from previous classes is not difficult, but it should be noted that every step is indispensable: even a small placement action can determine whether the final script really runs. So it is very important to check for bugs and verify the corresponding operational details after each new script is written; even the steps I think are finished should be checked meticulously.
Transition between scenes
This is my last scene, the other space that represents the cat’s inner world, beautiful and warm. This scene is less realistic than the previous environment scene precisely because it represents the cat’s inner world, so I found a dreamier scene model with its own lighting and environment rendering effects, to better reflect the unreal nature of the scene.
The scene-switching script follows the latest SceneManager instructions on the official Unity website. Before this API was introduced, switching scenes and attaching trigger conditions to the switch was a cumbersome, complicated process, but it solves the problem very well: a game producer only needs a few lines of code to realize the switch.
But it should be noted that every scene entered in the scene-switching script must have the corresponding name and be placed in the Build Settings in advance for the script to use. I once forgot to drag a scene into the settings, and as a result the script did not work.
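For reference, a sketch of this kind of switch using Unity’s SceneManager; the scene name InnerWorld is a placeholder, and as noted the scene must be listed in Build Settings:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the scene switch: walking into the trigger loads the next
// scene. "InnerWorld" is a placeholder name; the scene must be added to
// File > Build Settings first, exactly as noted above.
public class SceneSwitch : MonoBehaviour
{
    public string nextScene = "InnerWorld";

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            SceneManager.LoadScene(nextScene);
    }
}
```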
Atmosphere
Additional Environment design
Following the apartment design above, I set a huge window in the living-room area, matching the loft-style layout, so a detailed environmental setting for the view outside the window became important. To let the player see the background environment, I made several lightly transparent glass materials to keep it looking real, and an HDRI skybox package is needed in this case. I found HDRI pictures in a 2D painting style, plus several city scenes, to match the time setting of the game: since the player (the cat’s owner) comes home between 17:00 and 20:30 in the evening, all the selected HDRI pictures should be night scenes with city lights and dark clouds. To make the game more immersive for all players, two different weather backgrounds, a rainy one with rain sounds and a clouded night, serve this requirement well.
This also calls for matching background music, with rain sounds when the environment is raining. I chose a few lofi-styled pure instrumental tracks as background music and a pack of rain sounds for the rainy day. Here are the original YouTube videos.
Background music 1
Background music 2
Background music for the last scene
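As a sketch of how the two weather variants described above (skybox plus matching ambience) could be wired together at load time; every field is a placeholder for the assets in question:

```csharp
using UnityEngine;

// Sketch of the two-weather setup: pick rainy or clear when the scene
// starts, then assign the matching night HDRI skybox and ambience.
// Every field is a placeholder for the assets described above.
public class WeatherSetup : MonoBehaviour
{
    public Material rainyNightSkybox;  // clouded night HDRI
    public Material clearNightSkybox;  // city-lights night HDRI
    public AudioSource ambience;       // looping background source
    public AudioClip rainLoop;         // rain sounds
    public AudioClip lofiLoop;         // lofi instrumental track

    void Start()
    {
        bool rainy = Random.value < 0.5f;  // 50/50 each session

        RenderSettings.skybox = rainy ? rainyNightSkybox : clearNightSkybox;
        ambience.clip = rainy ? rainLoop : lofiLoop;
        ambience.loop = true;
        ambience.Play();
    }
}
```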
Playtest
play test inside
playtest outside
This test went much more smoothly than I expected, but it also helped me spot a problem with scene switching, which was effectively solved later by modifying the Directional Light settings.
Critical reflection
Across the whole process of game production, from project establishment to brainstorming, to actual operation, to the running test and finally the end, the journey tests a beginner’s basic ability and accomplishment in game design. It is not just a process of transforming game concepts into lines of code, but a test of how to choose a method, how to pay attention to feasibility, and how to make it run successfully during implementation. None of these steps can be called simple or easy; even a simple copy-and-paste can cause unpredictable problems at runtime, and what we have to do is avoid them where possible and solve them well where not.
From the initial model design, I did not clearly recognise at the beginning the problem of controlling model size and matching it to the size of the player model in the later XR toolkit. Although the process of modelling and placement was neither difficult nor too cumbersome, this step cost me more than two weeks of time and energy as I forgot to save again and again or Unity suddenly crashed. Even so, the final model did not fully meet my expectations; the choice of colour is one example. Although the deviation in the initial colour selection does not look so inconsistent after the superimposed rendering effects of lighting and ambient light are added later, the overall effect, compared with the original reference pictures, still lacks some warm yellow elements and materials with a strong sense of wood. Perhaps if rendering tests had been performed while the model was first built, this problem would have been avoided. But some things in finding and using models did satisfy me, such as the selection of the cat model. Although the model I use is not the closest to a real cat, its “unreal” characteristics fit my scene: the modelling styles are consistent, neither making one element too prominent nor suppressing another; on the contrary, they complement each other with an unexpected effect. The cat model also came with multiple animation templates. They are of various types and their names are not very clear, so I had to preview and screen them extensively, even labelling them one by one with easy-to-understand labels before use, but they cover all my assumptions about cat behaviour and satisfy my animation design needs to the greatest extent.
I really did a lot of mental preparation before starting to write scripts, because this is an area I had never touched before; much of my effort went into overcoming my inner fears. I began by reviewing the content already covered in class, but unpredictably, without precise goals my revision was not very efficient, so even after reviewing all the course content as best I could, what I actually absorbed was less than I had imagined. I finally settled on looking for specific tutorials according to specific needs. During the research I also browsed many complex and interesting ways of writing logic online, including the WayPoint script I finally adopted: the essence extracted by a netizen from Unity’s official code examples, which can accurately realize the irregular path I want the cat to travel. However, the subsequent script that makes the cat follow the path was the most difficult part of the whole production process for me. Problems I encountered included:
— Avoiding the player entering the stop point before the cat and stopping the cat by accident.
— How to use the SetBool tool correctly, without bugs?
— How to let the cat sit in front of the door before the player enters?
— How to attach sounds to the cat at different waypoints? etc.
Because I am a beginner with zero foundation, I had more problems understanding script logic, but after reading over and over again, trying to understand, and trying again after each failure, I gradually figured out the way. What I did not expect was that in the later stage of the project I could already achieve most of the effects I wanted by modifying basic code templates. Although my usage is still rusty and my logic not very clear, and even though it is a lengthy and unsavvy approach, it is also testimony to my effort and progress, and the biggest truth I gained from the whole production process. In post-production, although I felt I had learned a little, I still made unexpected mistakes. For example, when making the door opening and closing animation, my script had no problem at all, yet testing revealed the fatal error that the animation could not play at all, which frustrated me a lot; after carefully re-reading and comparing the tutorials, it dawned on me that I had written the script but not applied it to the model correctly, and my struggles and frustrations were solved just that simply and idiotically.
As mentioned before, the further into the project, the more proficient I became in the principles of operation, and the more I found decisive settings and changes I had failed to make in the early stage; with that proficiency and understanding from the start, the project might have matched expectations more closely. If I were to develop the game at a deeper level, I would choose more complex cat movement routes and more diverse, realistic animations, and set up more items in the room for players to interact with, to increase the fun of playing with the cat. On the existing foundation, the interactive feedback I set for the cat only determines whether it continues to move depending on whether the toy the player chooses is correct; done better, different toys would trigger different sounds and animation effects, until the correct toy is applied to the cat and the movement script is triggered. I might also make the environment more realistic, with many cabinets, doors and windows that can be opened interactively for the cat to hide in, to increase the fun and playfulness. At the same time, I would reuse the different moving routes designed for different scenes, changing some hidden trigger conditions so that different scenes play differently, increasing the player’s thinking during the game.
My research started by assuming that full-dive virtual reality technology is becoming the future of media communication. Therefore, the virtual communication factors used (e.g. user experience design, user interaction design, graphic design) are closely linked to user engagement and experience and, to some degree, influence users’ actions and thoughts.
<Reading1>
The story raises many questions, still relevant today, about the impact of digital media and related technology on our brains. This issue of Dialogues in Clinical Neuroscience explores in a multifaceted manner how, by what means, and with what possible effects digital media use affects brain function—for the good, the bad, and the ugly sides of human existence.
What remains to be determined is whether the increasing frequency of all users moving toward being knowledge distributors themselves might become a great threat to the acquisition of solid knowledge and the need that each has to develop their own thoughts and to be creative. Or will these new technologies build the perfect bridge to ever more sophisticated forms of cognition and imagination, enabling us to explore new knowledge frontiers that we cannot at the moment even imagine? Will we develop completely different brain circuit arrangements, like we did when humans started to learn to read? Taken together, even if much research is still needed to judge and evaluate possible effects of digital media on human well-being, neuroscience can be of tremendous help to distinguish causal effects from mere correlations.
Martin’s research clearly shows how digital languages can have positive or negative effects on the human brain, and how the brain’s absorption of knowledge relates to how that knowledge is conveyed; thus digital design is taking an essential role in shaping human brains.
His research in this part is mainly based on the digital media industry, such as social media (SNS) and reading; the image shown is his figure comparing human brain activity when reading through different media, e.g. physical books, screens, e-books and web pages.
As a side note, the evidence that violent games have a profound effect on human behaviour is better defined. A meta-analysis of current papers shows that exposure to violent video games is a highly significant risk factor for increased aggressive behaviour, a decrease in empathy, and lower levels of prosocial behaviour.
Although his citation asserts that the negative effects of visual games on the human brain cannot be ignored, his research methods and theory give my claims a good direction in which to develop. He convincingly showed that people’s exposure to and use of linguistic and visual information-transmission media can directly affect how the media is received, and that this influence grows with the degree of realism of the media and design concepts. People’s reactions to two-dimensional design, to some extent, predict their future responses to a three-dimensional world.
Therefore, my research moved to the next stage: how the difference between 2-dimensional and 3-dimensional environments plays out when they act as working, studying and living tools for human life.
<Reading2>
Video game technology is changing from 2D to 3D and virtual reality (VR) graphics. In this research, we analyze how an identical video game that is either played in a 2D, stereoscopic 3D or Head-Mounted-Display (HMD) VR version is experienced by the players, and how brands that are placed in the video game are affected. The game related variables, which are analyzed, are presence, attitude towards the video game and arousal while playing the video game. Brand placement related variables are attitude towards the placed brands and memory (recall and recognition) for the placed brands. 237 players took part in the main study and played a jump’n’run game consisting of three levels. Results indicate that presence was higher in the HMD VR than in the stereoscopic 3D than in the 2D video game, but neither arousal nor attitude towards the video game differed. Memory for the placed brands was lower in the HMD VR than in the stereoscopic 3D than in the 2D video game, whereas attitudes towards the brands were not affected. A post hoc study (n = 53) shows that cognitive load was highest in the VR game, and lowest in the 3D game. Subjects reported higher levels of dizziness and motion-sickness in the VR game than in the 3D and in the 2D game. Limitations are addressed and implications for researchers, marketers and video game developers are outlined.
The findings of our study are relevant for game developers, marketers and researchers. This study contributes to research and theory in various ways, since it empirically examines the evaluation of video games and the brand placements within the games, by directly comparing players’ reactions to a 2D, stereoscopic 3D and Head-Mounted Display VR video game. Our results indicate that 3D and VR lead to higher presence, i.e. to a pronounced feeling of “being in the game”, but game evaluation did not differ between the 2D, 3D, and VR version. This is an important finding, because it shows that an enhancement in technology to 3D or VR does not necessarily lead to a better game evaluation. The additional depth perception in the 3D environments and the increased presence lead to a higher cognitive load and also come along with negative aspects such as dizziness and eye fatigue that probably impair video game evaluation. Subjects who played the VR game in particular reported higher levels of dizziness and motion-sickness while playing the game. Hence, the fact that video game evaluation was not worse in the 2D condition as compared to the 3D and VR condition indicates that game developers can still be quite successful by continuing to offer “traditional” 2D video games. Game developers of 3D or VR video games need to be aware that the 3D and VR experiences can come along with negative feelings that could possibly harm game evaluation, so they need to develop video games in which the advantages of 3D or VR use clearly outweigh these associated disadvantages.
Roettl, J., & Terlutter, R. (2018). The same video game in 2D, 3D or virtual reality – How does technology impact game evaluation and brand placements? PLOS ONE, 13(7), e0200724. https://doi.org/10.1371/journal.pone.0200724
How does technology impact game evaluation and brand placements?
According to the abstract and the hypotheses of this research, Roettl and Terlutter found that, for video games and visual environments, virtual reality produces the strongest sense of presence of the three conditions, although it also brings the highest cognitive load, more dizziness and weaker brand memory. This supports my assumption that the human visual sensory system is engaged ever more acutely as experiences become more immersive.
<Reading3>
Artistic practice is an indispensable tool for strengthening imaginative consciousness and developing creativity, awareness, understanding, and visual knowledge. However, Winner et al. (2013) concluded that the claim that integrating the arts improves academic performance and makes children more innovative thinkers has not yet been proven. Winner and Cooper’s (2010) findings failed to support the view that creativity causes academic achievement. Although it has been speculated that creativity and innovation cannot be translated into better general academic achievement based on scores on the kind of tests that children now take in school, stress has been placed on the educational value of learning as a process, “to know and understand”; such improvement is due to the visual art experience the students received. Undoubtedly, technology integration would give a vital means of reaching students in and through the arts as investigative methods.
Stavridi, S. (Special Libraries Directorate, Bibliotheca Alexandrina, Alexandria, Egypt) (2015). The Role of Interactive Visual Art Learning in Development of Young Children’s Creativity. Vol.06 No.21 (2015), Article ID 61827, 9 pages. https://www.scirp.org/html/5-6302786_61827.htm?pagespeed=noscript
This research article mainly describes the results of a study on the development of visual arts and children’s creativity and process. For children and students at the most receptive stage, visual design can help teachers support children’s future development and achieve effective results at the educational level.
After considerable debate on creativity, contemporary approach to creativity research has adopted a definition that creativity is the human process leading to novel ideas (Mishra & Singh, 2010) , whereas creative thinking “innovative” encompasses the acts: to inquire, explore, imagine the outcome, take risks, reflect, and innovate and focuses upon the nature of the interaction between the human and medium rather than upon outcomes (Ross, 1985; Erik et al., 2011 ). Hence, artful educator and teacher should pay great regard on the thinking process and how to engage younger students emotionally, intellectually, and not to settle with one perspective to nourish their natural creativity and adaptability. Ross (1985) identified flexibility as a creative process, and the feature of a creative act is the absence of any rules so children can constantly explore utilizing a particular range of artistic methods, and move freely between making and receiving (maker & receiver). This in turn enables him to visualize a range of further possibilities within a work in term of another subject area. Abbs believes that when a child feels comfortable making mistakes, and is encouraged to go beyond the context of the formal approaches he has learned, will be genuinely empowered to aesthetically recognize vital links and connections in each aspect. In other words, a child is more likely to learn to notice the discovery in science or simple geometrical nature in mathematics, and to be creative in making a discovery and shaping meaning into expressive and comprehensible forms, when he enjoys the learning process, and is thrilled to know and understand such a thing cannot be a formal thing with rules. In conclusion, creativity is whenever imagination and divergent thinking come first, the ability to reconstruct reality (Wilson, 2014; Wilson & Myhill, 2013) , a process that vitally includes creating originality, not just a copy of the original, but more like mentally envisioning the formation of images which can then guide actions and problem solving.
This article puts an important punctuation mark on my conception: it summarizes the content and results of the previous two articles well and integrates them into a piece of evidence I can use in my argument. An experiment only on young children, who are at the most suitable age for learning and absorbing knowledge, cannot fully reflect the response of everyone I imagined to visual stimulation and learning, and a single experimental group is a clear limitation. Still, I believe that children at the educational stage will undoubtedly be the decisive factor when looking at future development from the perspective of creation and innovation.
<Reading4>
A book published by the Salzburg Global Seminar, found in the National Endowment for the Arts records, collects the best research on how people innovate and create something new with our brains; it perfectly supports my project and pushes it to the next stage.
What are the Sources of Creativity and Innovation?
The Edward T. Cone Foundation
“This was a very forward-looking and experimental session for Salzburg Global Seminar. The session was poised at the frontier of the research that is happening at the nexus of neuroscience and the arts.”
For more than a decade, Arne Dietrich, a psychology professor at the American University of Beirut, has worked to demolish existing ideas about creativity. According to Dietrich, we still have no understanding of how the brain generates new ideas, despite a tidal wave of neuroscientific research. Nevertheless, he loves to study creativity, which might be the most distinctive feature of the human species. Dietrich is calling for a new start in the search for creativity, and he is hunting for mechanisms – as opposed to locations – in the brain. He criticized recent fMRI results, arguing that scientists have confused location for mechanism, which he compared to phrenology. He suspects that creativity is a distributed network throughout different areas of the brain – a sort of “brain vaudeville” with scenes happening at many locations at the same time. Furthermore, there is no real demarcation between creativity and non-creativity, which prevents there from being a true experimental control group. For a neurocognitive framework, Dietrich offered a distinction between two systems in the brain: the explicit and the implicit. In his opinion, the implicit system, unconscious, experiential, and “not verbalizable,” is responsible for what is called the “flow state”, colloquially known as “the zone.” He laid out a theory of transient hypofrontality, meaning the lessening of activity in the explicit system, which is associated with higher cognitive functions.
Award-winning filmmaker Noah Hutton is the founder of The Beautiful Brain, an online magazine dedicated to art and neuroscience. He has been active in the field of neuroaesthetics, speaking at a Venice Biennale symposium and curating an exhibition at the Human Brain Mapping Conference. Last year, Hutton won a commission from the Times Square Art Alliance for a creative project involving brain images. For the month of November, every night at midnight, many advertising screens around one of the busiest places in the world showed a video of virtual brain maps, created by four leading international research teams. Hutton sees a mimetic relationship between the brain and the external world, but he has noticed that art is not really represented in neuroaesthetics books, which seem to cite artworks only as the means to an end, explaining brain processing. Hutton called for a bold new theoretical approach, which he called “the Apollo 13 Theory,” after the 1995 American docudrama starring Tom Hanks, Bill Paxton, Kevin Bacon, Gary Sinise, and Ed Harris, directed by Ron Howard. The Apollo 13 Theory encourages intellectual travelers to fuel up on art, gathering aesthetic and visual examples, heading toward the moon of the brain to learn about neuroscience mechanisms, but sling-shotting around the moon, ending up back home in art, only now with the knowledge from the journey. Neuroscience is not necessarily the end domain; artistic creation can be the mission control as well.
Reading through the book, I found that Arne Dietrich and Noah Hutton conveyed important information and research results:
#Concept#
The concept of virtuality and simulation is actually a perfect tool to guide the further development of human beings. This concept extends from the underlying inactive part of the human brain; one source of this inactivity may be the limitations of the real world, which cause a lack of imagination.
In this way, the arts applied in visual languages, which carry significant weight in communicating with humans and, to some degree, influence their worldview, will doubtless become a unique and most valid way of expression.
This is not going to be a discussion of how visuals and arts educate (things discovered by ancestors that people need to know) but of the unknown field that cannot be imagined with existing knowledge, something beyond reality that takes cognition to the next level. This will be supported by studies on the “activated and deactivated functional brain areas” and even the “metabolic underpinnings of activated and deactivated cortical areas in the human brain”, corresponding to the application of visual expression to stimulating hidden potential.
On this topic, the discussion could become more experimental: whether full-dive technology, the ‘perfect’ virtual reality environment, will be invented in the near future, and how it would affect the world in terms of finance, the arts, and people’s day-to-day lives. This is a field worth exploring and a vision of the future that attracts much attention, because it involves not only research on the configuration of hardware and computing facilities but also people’s inner desire to explore and create the unknown.
#Structure#
Based on the research shown before and the concept decided on, I created an essay structure in the form of a mind map to clarify the development of the whole essay and of the discussion of my opinion.
First brainstorm map
This is the first version of my essay-structure mind map. It is divided simply into three parts researching the background and one big part discussing my assumption and how it develops from those three research parts. The essay will start with an introduction and end with a conclusion that includes the results of the research and of the assumption process.
Not being able to personally design experiments and conduct demonstrations is the biggest shortcoming of my article, but the existing articles and experimental conclusions are enough to help me discuss the possibilities and limitations of my point of view; the most important thing I have done is to push the argumentation to a satisfying conclusion by reading and integrating resources. The three research areas I chose are all well tested, including explorations and hypotheses about the human brain in the field of biology.
To better present and organize the factual viewpoints demonstrated in the previous articles, before writing the pros-and-cons discussion I simply recorded my thoughts after divergent thinking, together with the diverse perspectives on the facts that I had discussed with friends, so that the subsequent integration notes could be written.
Crow: The Legend VR | 360 Animated Movie [HD] | John Legend, Oprah, Liza Koshy scene 4
Analysis of shapes, lines & prop use
In the fourth scene of “CROW”, the main character travels to the centre of the universe to ask for help. The presence of lighted lines marks a mysterious place where the curator of the universe lives, while naturalised patterns represent the Big Bang in a fantastic piece of art performance.
9:38 entering the mystery place
After the protagonist enters the mysterious realm, the bottom-up filming angle presents the viewer with an extremely sacred and tall palace scene, with sharp pillars on both sides that underline the majesty of a place not easily trodden.
Considering that the viewer group mainly consists of young children, the colours chosen are rarely dark or monochrome but mostly varied and much lighter than those usually used to present the resplendent and magnificent. This way of composing colour carries the message that dreams come true and the impossible becomes possible: the most important transition of the whole storyline.
The gap/contrast between ideal and reality
The story resumes after the protagonist flies into the very centre of the palace. Only a glass cage sits in the middle of the golden area, and, dramatically, a tiny insect lives inside it with very rudimentary furniture.
10:26 meeting the curator
At the second stage of this scene, the protagonist finally finds the curator who may be able to save all his friends’ lives. The audience’s emotion is lifted by the strong contrast between how they imagined the curator would be and the reality.
The shooting angle zooms in and changes to a small visual square at just the right moment, right before the climax, to expose what the curator is like in front of the audience, and thereby pushes the whole story to the next stage.
Sword Art Online (Anime & light novel) by Kawahara Reki
The light novel series spans several virtual reality worlds, beginning with the game, Sword Art Online (SAO), which is set in a world known as Aincrad. Each world is built on a game engine called the World Seed, which was initially developed specifically for SAO by Akihiko Kayaba, but was later duplicated for Alfheim Online (ALO), and later willed to Kirito, who had it leaked online with the successful intention of reviving the virtual reality industry. A third world known as Gun Gale Online (GGO) appears in the third arc and is stylized as a FPS game instead of an RPG, and is the main setting of Alternative Gun Gale Online. It was created using the World Seed by an American company. A fourth world appears in the fourth arc known as the Underworld (UW). The world itself was created using the World Seed as a base, but it is as realistic as the real world due to using many powerful government resources to keep it running.
SAO Movie trailer
AR (Augmented Reality) information terminal “Augma”: a next-generation wearable multi-device that looks like a small headphone, far more compact than VR machines. Instead of full-dive functionality, it maximises AR (augmented reality) capabilities, sending visual, auditory and tactile information to people in a waking state. More and more users enjoy fitness and health management with it as if they were playing games.
AR, VR, AI, IoT and blockchain are advancing fast, and their penetration into our daily lives is increasing rapidly. They are being used across healthcare, manufacturing, fintech, hospitality, customer service, and many more industries. Looking at how things stand, these technologies can significantly enhance the services offered in these sectors. For example, the future of fintech could be transformed completely by financial institutions using VR to conduct virtual meetings and by enabling customers to pay for products virtually, forgoing the need to step out of their homes.
However, one application of these technologies that has everyone excited is the metaverse. Metaverse promises a universe beyond our real world. It’s a place where our real world, augmented reality, and virtual reality intersect. The intersection gives birth to an interactive, immersive, and collaborative shared virtual 3D environment.
Exploring
But because it’s still just an idea, there’s no single agreed definition of the metaverse.
Apparently, it’s the next big thing. What is the metaverse?
However, there is a huge amount of excitement about the metaverse among wealthy investors and big tech firms, and no-one wants to be left behind if it turns out to be the future of the internet.
There’s also a feeling that for the first time, the technology is nearly there, with advancements in VR gaming and connectivity coming close to what might be needed.
VR has come a long way in recent years, with high-end headsets which can trick the human eye into seeing in 3D as the player moves around a virtual world. It has become more mainstream, too – the Oculus Quest 2 VR gaming headset was a popular Christmas gift in 2020.
And more advanced digital worlds will need better, more consistent, and more mobile connectivity – something that might be solved with the rollout of 5G.
For now, though, everything is in the early stages. The evolution of the metaverse – if it happens at all – will be fought among tech giants for the next decade, or maybe even longer.
Philosophising with VR as a medium of the metaverse
With VR, what we experience, and thus who we are, or choose to be, is up to us. The ultimate realization of VR will allow us to, at will, have any kind of conceivable experience. For this reason the technology of Virtual Reality could be a fruitful addition to the philosopher’s toolkit. It is the perfect aid to exploring hypothetical scenarios. VR acts as a catalyser for thought in many ways. It instantly re-forges and actualizes philosophical themes.
Some of the questions that the technology of VR poses to us can be deemed existential. The existential philosopher Søren Kierkegaard (1813-55) famously described anxiety (angst) as the dizziness of freedom, the result of constantly having to make choices and decisions. The technology of Virtual Reality poses an existential problem to us in exactly this way: VR extends the reach of our freedom, and therefore also our existential responsibility, and along with it, our anxiety.
Or consider for instance René Descartes’ Meditations (1641), in which he presents the idea of the Evil Deceiver – a demon that can alter his experiences at will. This has an obvious VR application, one well exploited in films such as The Matrix and eXistenZ. Hilary Putnam’s ‘brain in a vat’ thought experiment in Reason, Truth and History (1981) likewise illustrates the possibility that our entire universe is a simulation created by some powerful technological civilization. The idea keeps extending, and modern-day philosophers such as Nick Bostrom actively discuss the possibility that we are living in a simulation. As the prime example of VR acting as a catalyst for thought, I will finish this article with the simulation hypothesis.
This idea is quite simple. As Bostrom has argued, there are essentially three possible scenarios regarding the simulation, the ultimate VR. The first is that humans, or any other beings, never achieve the technological capability for fully convincing, immersive VR – for simulated worlds, such as recreations of our own history or of alternative histories – most likely because we go extinct first. The second is that we or some other species do reach that technological maturity, but are unlikely to run such simulations because of who we (or they) are as a species: for moral reasons, perhaps, or simply for lack of interest. The third is that we are almost certainly already living in a computer simulation.
How may that be? If it can happen to us in the future, it might already have happened in the past. If ultimate VR is possible, then our own world is most likely just one amongst myriads created by technologically advanced species. Only one world is the biological, physical, original one; there will be an unfathomable number of simulations created by advanced species. The likelihood that ours should be the original one is therefore very small, Bostrom argues. At that point we are no longer a species with a future and a past in our universe: we must instead consider ourselves as ever-duplicating across a myriad of possible simulated worlds.
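A rough way to see why that likelihood is small (a deliberate simplification of Bostrom’s argument, which properly reasons over numbers of observers rather than worlds): suppose the one original world eventually gives rise to N simulated worlds that are indistinguishable from the inside, and we have no evidence favouring any particular one. Then

\[ P(\text{ours is the original world}) = \frac{1}{N+1}, \]

which tends to zero as N grows. The force of the argument rests entirely on the claim that a technologically mature civilization would make N very large.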
Whether one thinks it likely that we are living in a simulation or not, the potential implications of VR technology still loom over us. One of the most interesting thought experiments, after all, is to ask ourselves: if we could do anything we liked, what would we do?
Joakim Vindenes is a PhD Candidate researching VR at the University of Bergen, Norway. He also runs the VR philosophy blog Matrise, accessible at http://matrise.no, as well as the VR & Philosophy Podcast.
Future of work and life in the metaverse
With the metaverse having a whole, independent economy of its own, cryptocurrency and digital currency will likely become the key transactional method. The most widely known cryptocurrencies today are Bitcoin and Ethereum, but the list of digital currencies will likely grow more diverse as new ones are continuously introduced. Either way, such currencies will be key to trading across the real world and the digital world, supported and distributed by technologies such as blockchain.
Among the first movers in this new economy are gaming companies – most notably Epic, whose game Fortnite has been transforming virtual games as players know them by creating a convincing, realistic world around the game itself. Among the game’s biggest metaverse-sparking initiatives were the concerts it hosted, including shows with Travis Scott and, more recently, Ariana Grande. Other industries will be swift to follow as the lines between reality and the digital world get blurrier.
Mark Zuckerberg’s future ambitions
Earlier this year, Mark Zuckerberg announced a rather ambitious plan for Facebook’s future: to become a provider of next-level digital and virtual experiences rather than a family of interconnected social apps. In a January 2020 essay, venture capitalist Matthew Ball also took a stab at an explicit definition of the metaverse. His characteristics included encompassing both the physical and virtual worlds; having its own economy; and a new element, interoperability, meaning that users would be able to carry their characters and objects from one platform or metaverse to another.
Tony Fadell’s concerns about the metaverse
Tony Fadell – the creator of the Apple iPod and lead developer of the first three generations of the iPhone – has expressed concern about the metaverse and its implications for social connection.
The metaverse describes a digital platform that will combine gaming, social media, augmented reality, and cryptocurrency for an integrated user experience.
Fadell said that the virtual world will remove the ability “to look into the other person’s face.”
“If you put technology between that human connection that’s when the toxicity happens,” he added.
Meta CEO Mark Zuckerberg has long been vocal about his plans for the metaverse, and most of the world’s tech giants are following suit, including Google, Nvidia, and Microsoft.
“Welcome to Meta”
“Our aim has always been to empower as many people as possible through our products, and we’re taking the same approach in the metaverse.”
“The metaverse will also make it possible for more people to see the world without having to travel. And we’re building new technology like Horizon Workrooms to help remote employees feel more connected to their team.”