22/23 BA IMMR & LCDS Year 2
THE OTHER REALMS
Background & Context
Concept
This project was made with a group of professional dancers from the London Contemporary Dance School (LCDS). The brief was to create a project in any form of creative media, which gave us very broad options that could vary with the creators' wishes: creative media includes virtual reality, live concerts, 3D or 2D performance, and any form able to express human body movement, which can be challenging for us. In this case, our groupmates from LCDS were most interested in virtual reality and impressive 2D animation, so we finally chose to build a connection between the two.
The project is named "The Other Realms", referring to a multiverse concept that we built to connect virtual reality with 2D animation. The VR environment is the main stage where the dance performance takes place. It is a fully immersive 3D space where the audience can feel as if they are part of the performance. The dancers interact with the environment and the audience in "real space", creating a dynamic and engaging experience.

The 2D animation is a complementary element to the VR environment. It is a flat, two-dimensional representation of the performance, but with its own unique look and feel. The animation can be projected onto a screen or displayed on a monitor, allowing the audience to see a different perspective of the performance.

The multiverse concept connects these two elements by creating parallel versions of the same performance. In one universe, the performance takes place entirely in the VR environment; in another, it is represented through 2D animation. These universes coexist and interact with each other, creating a dynamic and engaging experience for the audience. The dancers and the animation can interact in different ways: the dancers can move from the VR environment into the 2D animation and vice versa, and they can also interact with objects and characters in the animation, or even change the environment itself. This interaction creates a sense of continuity between the two universes and enhances the overall experience for the audience.

Motion & Choreography
The whole storyline is composed of a 2D line-and-circle animation, a VR water scene with a clear sky, and a cyberpunk-themed scene built around a neon light concept. The piece "The Other Realms" expresses a parallel universe through contemporary dance and visual art in the form of 2D and 3D animation, creating a multiverse-travelling experience. The characters are represented by two- or three-dimensional models of female and androgynous bodies that sit closer to the theme of surrealism.
We chose a purely instrumental vaporwave track: having no lyrics gives more variety in the choreography, and its light rhythm offers many possibilities for picturing. Vaporwave music, characterized by its dreamy and nostalgic soundscapes, works well with contemporary dance choreography by providing a unique and atmospheric backdrop for the dancers' movements. Its ethereal, otherworldly quality lends itself to more abstract and interpretive styles of contemporary dance, allowing the dancers to explore different moods and emotions through their movements. The smooth, flowing nature of the music also provides a sense of continuity and flow to the performance, creating unity between the dancers' movements and the soundscape they inhabit. Overall, vaporwave music is a powerful tool for enhancing the visual and emotional impact of contemporary dance choreography, allowing for a more immersive and engaging experience for the audience.
Choreographies & concepts:
01. Geometric: technical and angular movement, accentuating the lines of the body.
02. Water: smooth and fluid movement; exploration of body and water.
03. Cyberpunk: personification of neon lights; embodiment of coloured light.
Creative Process
Creating a VR project with contemporary dance involves a variety of processes, including modelling a character, rigging & animation, motion capture, environment building & light rendering, interactive water shader & coding, and model shading with shader graph. The first step is to create a highly detailed 3D model of a human character that will be used in the project. This model should include features such as facial expressions, body movements, and clothing. Once the character model is complete, it needs to be rigged with a skeletal structure that will allow for realistic movements. Animations, such as dance movements and gestures, can then be created for the character using motion capture technology to make them as realistic as possible.
Model & Rigging


In order to show the dynamism of the dance, and for the integrity of the movements, I chose a model with female body features but without individual identifying marks, so it can carry all the animated copies.
The model has well-balanced proportions, and the muscles of the legs and abdomen, as well as the arms and back, were drawn out by the original modeller to show the smoothness of the model's lines and to draw attention onto the movements.
This ties in well with the special shader graph I had planned, which gives the model a flowing galaxy surface, and it gave me more creative ideas. The decision to carry complex, gorgeous graphics on a simple, fluid silhouette is in itself a balanced piece of visual art and allows for more exciting frames.
The final method I used to apply a skeleton to my model was Mixamo's AI recognition and auto-rigging function. Auto-rigging was convenient because I needed a highly adjustable model where each joint has its own stretching strength according to the movement.
The reason I didn't rig with Maya's Bind Skin system is that it requires a huge amount of rigging and weight-mapping work, which would be inefficient given that we were working with advanced motion capture technology. The Rokoko motion capture system is designed for full-body mocap recordings using the Smartsuit Pro, Smartgloves and Face Capture in sync. The suit captures body, finger and facial animation in a single performance, where every dynamic feature of the movement is detected directly without further manual work, so I think the Mixamo auto-rigging system is the best match for the Rokoko mocap suit in terms of efficiency and productivity.

Motion capture
There are two methods we could choose for the motion capture work, both based on Rokoko software and/or hardware. Rokoko is an established company in the motion capture and animation field, and it offers free motion capture tools for users to create their own work.
1. Rokoko AI (video-based)
Rokoko has launched Rokoko Video, a free AI-trained browser-based tool that extracts motion data from video footage and retargets it to 3D characters. The animation can be cleaned up in Rokoko's Studio software, which can also be used for free, and exported in FBX or BVH format for use in 3D apps like Blender or Maya, or game engines like Unity and Unreal Engine. Starter accounts, however, don't include active customer support or Rokoko's integration plugins for streaming data to DCC software.
Cooperating with contemporary dance students to do motion capture for their choreographies can be a fascinating experience. To begin with, it is important to understand the choreography, including its theme, mood, and the intended emotional impact on the audience. This can be achieved by attending rehearsals and discussing the choreography with the dancers and the choreographer.
Once the choreography has been understood, the next step is to prepare for the motion capture session. This involves setting up motion capture equipment, which typically includes specialized sensors that can capture the movements of the dancers in real-time. The dancers will then need to wear these sensors during the motion capture session.
During the motion capture session, it is important to create a safe and supportive environment for the dancers. This can involve providing them with clear instructions, guiding them through the choreography, and ensuring that they are comfortable with the equipment and the setup. The motion capture session can be recorded from multiple angles to ensure that all the movements are captured accurately.
The results of the AI video recognition technique were not as complete as expected: the 3D position of the characters and the details of the arm and leg movements were not well recognised. We used a plain white background as much as possible in this scene and brought the video quality up to the required level, but it was clear that the AI could not recognise the more detailed dance movements, not to mention that the choreography for the water scene I was responsible for was based on the interaction between the human body and a body of water, where the dancers' joint movements were incredibly complex and difficult to recognise.
For these reasons, the final animation rendered by Rokoko AI shows the character's arms frequently displaced or passing behind the body, and the movement of the joints is unnatural and stiff. After communicating with the dancers and giving them advice, we all agreed to meet again and try the motion capture suit, since their work was not represented well and plenty of animation and rigging clean-up would otherwise have been required to make it as polished as possible. As the VR student responsible for this part of the project I was willing to do that, but honestly it felt like a waste of time when a more efficient method was available.
2. Mocap studio (mocap suit)
The Smartsuit Pro is an inertial wireless, all-in-one motion capture solution that is intuitive to use and affordable for anyone. Faster to set up than any other system on the market, robust and durable to withstand close contact and active use, and a high-touch support team is always available to ensure the best user experience. With the Smartsuit Pro, character animation is definitively democratized.
Rokoko Smartgloves can capture the full spectrum of an actor’s hand performance, giving VFX, VR, game and digital artists a faster way to create character animations.
Every detail of the dance movement was captured far more directly this time with the motion capture suits on. Instead of relying on Rokoko AI, a skeleton connection is made inside the Rokoko system, which made the work more accessible: the skeleton is automatically bound on top of the suit where the movements happen, and the capture points placed on each part of the suit worked perfectly once the space, size and humanoid prefab were set.
The advantage of this technology was obviously efficiency: it didn't take our whole group long to finish all three motion capture sessions, compared with the three-hour preparation and filming process required for the Rokoko AI video capture. Most importantly, our dance school groupmates were more motivated this time with a sci-fi device on; it fulfilled their hope of collaborating with us to have a remarkable experience with the technology.
I didn't even take a picture of us working at the motion capture studio, we finished that quickly.

Motion capture results
We finally ended up with a wonderful skeletal animation result under the guidance of the professional mocap tutors, and we learned different working practices across several toolkits produced by big companies. For example, the skeleton export type can be changed inside Rokoko Studio to match different modelling software such as Maya, Blender or Cinema 4D; the format chosen at this very first step can influence the whole project, so it needs to match the pipeline's constraints.
After exporting from Rokoko Studio we get an FBX file in the format selected earlier, with all the animation attached to Maya's HumanIK system and the character preset in a T-pose, which makes our work much easier as we only need to bind the skeleton to the model mesh. We could of course adjust the length of every single movement inside the Rokoko Studio editor (YouTube video shown on the right-hand side), where the displacement and positioning changes are all recorded on separate skeleton animation layers, so we can delete any parts that would be redundant clutter in the finished visual performance. However, as students borrowing the mocap studio for just a few hours we didn't have enough time to use this feature, so we could only adjust the clips ourselves in modelling software; that is still powerful enough for our current level, since we are not doing professional, detail-seeking capture at this point. The first processed result is shown in the video on the left-hand side: the animation is quite natural compared to the previous attempts, but I still needed to manually retarget the important body parts against the reference video provided by the dance school students, especially the hands and knees for the choreography in my scene, which interacts with the water surface.
During this process we didn't meet many issues because we had prepared before actually using the mocap studio, so problems such as communication, role distribution or managing capture turns while working simultaneously with the other groups were not too disruptive, and the dance school students all kept to their roles, so we finished in a very organised way with all 20+ members working at the same time. The only thing worth noting is that the file saved after every single shoot matters: we were not the only group using the studio at the time, so we needed to name our projects carefully in order to get the right files back.
Rigging & Animating
When I got to the step of binding the model and animating it, I found that things were far from as simple as I had thought. By the time I got the hard drive with the motion capture results I had already decided on the style of model I wanted to use and had downloaded it, ready to be combined with the motion animation. My first idea was to manually add static bones to the model using the HumanIK tool that comes with Maya, bind the bones to the model's body parts with the Bind Skin tool after aligning them, and then select the animation preset in the HumanIK toolbar. However, after actually trying this, I found that the model I had chosen was not in the initial pose expected by the motion capture skeleton (I had picked an A-pose static model while the motion capture default was a T-pose model), and I struggled with this dilemma for days.
This fatal problem was never solved directly, so I moved on to studying Unity material selection and rendering in advance, and, after being inspired by my motion capture teacher, I found the YouTube tutorial below. The method in this video goes directly through Mixamo, a motion capture and skeleton creation site, where I can upload any model I want in any pose and ask the site to automatically bind a skeletal system to my model with its own AI algorithm. The downside of this method is that the AI algorithm will not work when the bound object is not human, but it ultimately saved me a lot of tedious time on boring work. Once the AI algorithm had built the human skeleton, all I had to do was download it and match the animation up.
Having learnt from the previous experience, I found that the YouTube tutorial mentioned above was followed by a detailed animation binding method, so I decided to abandon the original approach (which was to rationalise the movement by adjusting bone position and rotation parameters to solve the mesh clipping problem) and instead added a control rig to the bones: a simplified version of the skeleton system that the creator can actually drive, where the bones connected to the joints are adjusted by the algorithm to follow the joints' movements, not too much and not too little. This made my job much easier.






After the character is rigged and animated, it is time to build the virtual environment where the performance will take place. This involves creating a 3D set with various props and setting up lighting to create the desired mood and atmosphere. An interactive water feature can also be added to the environment by using a water shader and coding it to respond to the movements of the dancers. This can create a dynamic and engaging visual element to the performance.
Building Environment
Finally, a shader graph can be used to create custom shading for the character model. This involves creating custom textures and materials, as well as adjusting the lighting and other visual effects to create the desired look. By following these processes, a VR project with contemporary dance can be created that is immersive, engaging, and visually stunning. The combination of realistic character animations, detailed environments, and interactive visual elements can create a truly unique and memorable experience for the audience.
Unity URP pipeline shader graphs
Shader Graph is a tool that enables you to build shaders visually. Instead of writing code, you create and connect nodes in a graph framework. Shader Graph gives instant feedback that reflects your changes, and it's simple enough for users who are new to shader creation. Shader Graph is available through the Package Manager window in supported versions of the Unity Editor. If you install a Scriptable Render Pipeline (SRP) such as the Universal Render Pipeline (URP) or the High Definition Render Pipeline (HDRP), Unity automatically installs Shader Graph in your project.
Shader Graph is only compatible with the Scriptable Render Pipelines, namely the High Definition Render Pipeline (HDRP) and the Universal Render Pipeline (URP, formerly the Lightweight Render Pipeline). These SRPs are available in Unity 2018.1 and later. The legacy built-in render pipeline does not support Shader Graph.
Water Shader
The shader graphs I'm going to use are a Unity URP interactive real-time water shader and a moving, colour-shifting galaxy shader graph. The real-time water shader lets a player with a VR headset view the water and move their head and controllers to interact with the environment. This will be the highlight of the part of the project I'm responsible for: to stun the viewer when they first step into a scene filled with water.

The interactive water consists of four main effectors in the URP pipeline that make the little ball interact with the water: two particle systems, one water displacement module, and one physical ball carrying Rigidbody and Sphere Collider components so that it is affected by Unity's gravity. Other elements such as a reflection probe, a global volume and directional lights are needed to make the movement of the water surface visible.
This is the preview of the interactive water surface, with a ball falling under Unity's gravity (with damping) through the surface. The water surface simulator, a plane, has a natural floating property that pushes the ball back upwards once it drops to the point of maximum buoyancy inside the water plane, and waves are generated according to the ball's speed and size; the code applied to the water surface then calculates how the waves bounce and how far they travel out from the ball's centre. The other important element is the pair of particle systems reacting to the ball's entry into and exit from the surface plane: one shoots upwards and spreads twice per movement to simulate the splash, and one follows the ball's upward movement to create a water drop effect. All of these properties can be changed along with the size of the ball to keep the gameplay space reasonable.
The Sphere Collider and Rigidbody applied to the little ball simulate its physical shape and mass so that the script on the water surface can interact with it.
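As a rough illustration of how the ball side of this is wired up, here is a minimal sketch (not the asset's actual code) that gives the ball a Rigidbody and Sphere Collider and plays splash and droplet particle systems when it crosses a trigger on the water plane; the particle references and the "Water" tag are assumptions.

```csharp
using UnityEngine;

// Minimal sketch, not the water asset's actual script: the ball carries a
// Rigidbody + SphereCollider, and splash/droplet particles play when it
// enters or leaves a trigger volume on the water plane (assumed tag "Water").
[RequireComponent(typeof(Rigidbody), typeof(SphereCollider))]
public class WaterBall : MonoBehaviour
{
    [SerializeField] ParticleSystem splash;   // upward splash burst (assumed reference)
    [SerializeField] ParticleSystem droplets; // droplets trailing the ball upwards (assumed)

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Water"))
            splash.Play();                    // splash when the ball hits the surface
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Water"))
            droplets.Play();                  // droplets as the ball rises back out
    }
}
```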

Galaxy Shader

Pack of space and nebula materials that can be used on any mesh. Multiple textures and gradients are included in this package to help you customize the effects. Materials have many parameters, you can tweak rim, distortions, gradient texture, scroll speed of background nebulas, and much more. Also supported on mobiles.
The Built-in Render Pipeline is Unity’s default render pipeline. It is a general-purpose render pipeline that has limited options for customization. The Universal Render Pipeline (URP) is a Scriptable Render Pipeline that is quick and easy to customize and lets you create optimized graphics across a wide range of platforms. The High Definition Render Pipeline (HDRP) is a Scriptable Render Pipeline that lets you create cutting-edge, high-fidelity graphics on high-end platforms.
In this case, with the shader graph provided in the package, I only need to adjust the base colour slightly to fit my skybox colour set-up. To do that I need to read the shader graph inside the package (image shown below). The node groups that build the shader step by step are clearly marked with their different roles: the coloured background is generated by the nodes inside the frame called "Main Background", the section related to the material's normal is "Normal", and the specular particles and shining, moving stars are generated by the section called "Rim". The design logic is therefore cleanly separated into these parts, and I can easily change any of them manually according to my own design requirements.


In response to the rotation and displacement defined in the shader graph, applying this material makes the character's surface shine and come alive with moving clouds and particles while the colour gradient drifts. Better still, these colour changes are also reflected on the water surface, since the water shader has a reflection property that mirrors every colour above it.
The material is controlled by the shader, which keeps its own response to environment lighting, light bouncing and shadowing, but with extra creative details that producers can play with.



The colour rendered by the shader can be changed easily by finding the nodes in the shader graph that relate to "base color" or "Color". In this case each colour node is connected to a "cube" section of a reflected cubemap; together they decide which three colours are rendered across every surface using this material. The galaxy shader has three of them to specify the gradient colours and the position of each colour, while the displacement and movement are already defined on the left-hand side of the graph, so a lifelike galaxy can have its colours changed and its speed adjusted easily.
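For this project I edited those colours directly in the graph, but if the colour nodes are exposed as shader graph properties they could also be tweaked from a script. This is only a sketch, and the property names below are hypothetical.

```csharp
using UnityEngine;

// Sketch only: assumes the galaxy shader graph exposes three colour
// properties named "_ColorA", "_ColorB" and "_ColorC" (hypothetical names).
public class GalaxyTint : MonoBehaviour
{
    [SerializeField] Renderer body;   // the character mesh using the galaxy material

    void Start()
    {
        Material mat = body.material; // per-instance copy, so other users of the shader are unaffected
        mat.SetColor("_ColorA", new Color(0.25f, 0.10f, 0.60f));
        mat.SetColor("_ColorB", new Color(0.80f, 0.30f, 0.90f));
        mat.SetColor("_ColorC", Color.cyan);
    }
}
```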
Post-processing & Detail setting
To make the body parts interactive with the water surface, I simply placed the little ball prefab on each body part that touches the water. The character's hands are an obvious example, so I found the rig reference and placed the ball under the hand bones that carry the animation. This way we don't have to animate the ball separately to make it follow the hands; we just drag and drop the ball prefab under the animated skeleton.


Each character mesh has a control rig reference and a skeleton reference that tell the control rig what rotation or displacement to apply to bring an animation alive. In this case the interactive little ball shouldn't be placed under the control rig but as a child of the specific body part inside the "mixamorig:Hips" game object, because the control rig is not the object with the animation attached. Before doing this, the Rigidbody component on the ball mentioned earlier should be set to kinematic rather than "Use Gravity", otherwise the ball will drop straight down the moment you hit the play button.
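We set this up by drag-and-drop in the editor, but the same idea can be expressed in a small script, shown below as a sketch; the Mixamo bone name and the recursive search are assumptions about the rig.

```csharp
using UnityEngine;

// Sketch of the editor setup done in code: parent the interactive ball under an
// animated bone (Mixamo naming assumed) and make its Rigidbody kinematic so the
// animation, not physics, drives it.
public class AttachWaterProbe : MonoBehaviour
{
    [SerializeField] GameObject ballPrefab;                   // the interactive water ball prefab
    [SerializeField] string boneName = "mixamorig:RightHand"; // assumed Mixamo bone name

    void Start()
    {
        Transform bone = FindDeep(transform, boneName);   // search the animated skeleton, not the control rig
        if (bone == null) { Debug.LogWarning("Bone not found: " + boneName); return; }

        GameObject ball = Instantiate(ballPrefab, bone);  // parent so the ball follows the hand animation
        ball.transform.localPosition = Vector3.zero;

        Rigidbody rb = ball.GetComponent<Rigidbody>();
        rb.useGravity = false;                            // otherwise it drops as soon as you press Play
        rb.isKinematic = true;                            // driven by the animation, not by physics forces
    }

    static Transform FindDeep(Transform root, string name)
    {
        foreach (Transform child in root.GetComponentsInChildren<Transform>(true))
            if (child.name == name) return child;
        return null;
    }
}
```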
The water surface asset can generate waves from user interaction and is available for Unity's Universal RP and HDRP. It uses lightweight wave calculation in a compute shader and a water refraction effect that does not break down under VR binocular vision. In addition, when tiles are placed on the surface of the water, the waves propagate across them, which can be used to express elongated shapes and wide lake surfaces. By drawing wave obstacle information onto a mask texture, you can create a water surface that reflects waves in any shape. The package also includes a buoyancy sample of objects floating on the surface, which is often used together with the water. As an extension of the tiling idea, the author added a six-sided sphere mesh object with a sample of waves propagating on the sphere, as well as samples of gravity and buoyancy towards the centre of the sphere. The surface of the water can be coloured with any colour, and there is also a refracting coloured glass material, although that is not a water surface. There is also a sample that moves a Humanoid in VR and interacts with the water surface using the grip and trigger of the controller. Finally, the height of the water surface can be obtained in a script by querying a position in real time.

In general it is strongly recommended to run it on a PC. Waves are simulated by GPU compute shaders, so it runs lightly in an environment with plenty of GPU compute. You can also build for mobile devices, but if the device's GPU performance is limited it will not reach the high frame rates of a PC. WebGL will not work at all, because the compute shaders used are not supported by WebGL.
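The asset's exact API is not reproduced here, so the height-query call in the sketch below is only a placeholder name; the point is simply to show how a per-frame surface-height query could drive basic buoyancy on an object.

```csharp
using UnityEngine;

// Sketch only: the commented line stands in for whatever real-time
// height-query method the water asset actually exposes (placeholder name).
[RequireComponent(typeof(Rigidbody))]
public class SimpleFloat : MonoBehaviour
{
    [SerializeField] float waterLevel = 0f;   // flat stand-in for the queried surface height
    [SerializeField] float buoyancy = 12f;    // upward acceleration per metre of submersion (tuning value)

    Rigidbody rb;

    void Awake() { rb = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        // waterLevel = waterAsset.GetWaterHeight(transform.position); // placeholder API name
        float depth = waterLevel - transform.position.y;
        if (depth > 0f)                       // only push up while the object is below the surface
            rb.AddForce(Vector3.up * buoyancy * depth, ForceMode.Acceleration);
    }
}
```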
Post-processing volume & lights
Post-processing is a generic term for full-screen image processing effects that occur after the camera draws the scene but before the scene is rendered to the screen. Post-processing can drastically improve the visuals of your product with little setup time. It relies on both the camera settings and a Volume set to global with post-processing overrides attached, and the camera rendering the scene must have its "Post Processing" option enabled.

Camera settings

Volumes can contain different combinations of Volume overrides that you can blend between. For example, one Volume can hold a Physically Based Sky Volume override, while another Volume holds an Exponential Fog Volume override.
The Post-process Volume component allows you to control the priority and blending of each local and global volume. You can also create a set of effect overrides to automatically blend post-processing settings in your scene.
The Depth of Field effect blurs the background of your image while the objects in the foreground stay in focus. This simulates the focal properties of a real-world camera lens. A real world camera can focus sharply on an object at a specific distance. Objects nearer or farther from the camera’s focal point appear slightly out of focus or blurred. This blurring gives a visual cue about an object’s distance and introduces “bokeh” which refers to visual artefacts that appear around bright areas of the image as they fall out of focus.
Bloom is an effect used to reproduce an imaging artifact of real-world cameras. The effect produces fringes of light extending from the borders of bright areas in an image, contributing to the illusion of an extremely bright light overwhelming the camera or eye capturing the scene.
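As a sketch of how this can be wired up on the URP side (assuming a global Volume whose profile already contains Bloom and Depth of Field overrides), the script below enables post-processing on the camera and nudges the override values; the numbers are placeholders, not the settings used in the project.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// URP sketch: enable post-processing on the camera and adjust Bloom /
// Depth of Field overrides on a global Volume. Values are placeholders.
public class PostFxSetup : MonoBehaviour
{
    [SerializeField] Camera cam;      // the VR rig's rendering camera
    [SerializeField] Volume volume;   // global Volume holding the post-processing profile

    void Start()
    {
        // Equivalent to ticking the "Post Processing" box on the camera.
        cam.GetUniversalAdditionalCameraData().renderPostProcessing = true;

        if (volume.profile.TryGet(out Bloom bloom))
        {
            bloom.intensity.value = 1.2f;    // placeholder intensity
            bloom.threshold.value = 0.9f;    // placeholder threshold
        }

        if (volume.profile.TryGet(out DepthOfField dof))
            dof.mode.value = DepthOfFieldMode.Bokeh;  // blur out-of-focus areas with bokeh
    }
}
```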

SkyDome

To make the whole environment feel round, I used Maya to model a rounded sky dome mesh with its normals inverted, so the sky texture renders only on the inside of the dome; the teleportation area is enclosed by a cylinder-shaped ground with a collider on it. With this method the world is made round and the game space design is complete.
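I inverted the dome in Maya, but the same inside-out effect can also be produced at runtime by flipping the mesh in a script; the sketch below is an alternative I did not use in the project, included only to illustrate the idea.

```csharp
using UnityEngine;

// Alternative to inverting the dome in Maya: flip triangle winding and normals
// at runtime so the sky material is only visible from inside the dome.
[RequireComponent(typeof(MeshFilter))]
public class InsideOutDome : MonoBehaviour
{
    void Awake()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;  // instance copy of the dome mesh

        // Reverse the winding order of every triangle so faces point inwards.
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
            (tris[i], tris[i + 2]) = (tris[i + 2], tris[i]);
        mesh.triangles = tris;

        // Flip the normals to match the new winding so lighting stays correct.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;
    }
}
```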
RENDERING
Render Test
Water Shader Rendering

Body Shader Rendering
The shader used has a high polygon and instruction cost that could crash the VR environment, so several rendering tests were required during the build process to make sure each added element did not exceed the maximum load the VR headset can handle.







Connection between scenes
In our project, "The Other Realms," we aimed to create a seamless connection between our 2D animations and two VR experiences using a glowing flying particle. This particle acted as a visual element that passed through the entire storyline, serving as a transitional device and establishing connections between scenes. By introducing a consistent visual element that travelled through the narrative, we enhanced the overall coherence and engagement of the performance. The particle not only facilitated smooth transitions between scenes but also added intrigue and anticipation for the audience. Its movement and behaviour were synchronized with the accompanying music, evoking emotional responses and heightening the impact of the performance.
As group mates, we collaborated closely to ensure the successful implementation of the lighting flying particle throughout the project. We allocated responsibilities and tasks among ourselves, with one member focusing on animating the particle, another integrating it into the VR experiences, and the third synchronizing its movement with the music. Regular meetings and constant communication allowed us to exchange feedback, refine our approaches, and ensure a cohesive execution. Throughout the development of our project, we encountered challenges and made adjustments to enhance the effectiveness of the lighting flying particle. We experimented with different particle effects settings and fine-tuned the synchronization with the music, ensuring that the particle’s movements complemented the rhythm and mood of each scene.
In conclusion, the lighting flying particle played a significant role in “The Other Realms” project by connecting our 2D animations and VR experiences. It served as a transitional element, seamlessly guiding the audience through different scenes, and symbolising the unity and interconnectedness of the various realms explored in our performance. Through collaboration, experimentation, and attention to detail, we successfully integrated the particle into our project, enhancing its visual appeal, coherence, and immersive qualities. The lighting flying particle truly brought our project to life, creating a captivating and fluid experience for our audience.
Documentation
The tutor from LCDS organised a documentation filming session on the day we gave the final presentation with our dancers. It records our whole presentation on camera, showing how the work was done, along with interviews with tutors from both UAL and LCDS.
Link to the LinkedIn blog: Educational Collabs – Antoine Marc

Thrilled to have led the first collaboration between UAL and LCDS, in this new BA tackling the use of creative technologies in dance and virtual production. Supported by case studies and hands-on explorations, spent the past few weeks covering:
– Cross-field collaborations.
– Differences in Mocap technologies.
– Incorporating AI tools.
– UX in Hybrid Performance.
Thank you to Omari Carter and Manos Kanellos for making this happen. Congratulations to both UAL and LCDS students for collaborating through dance and VR production.
Check out the short video below:
Critical Reflection
Critical Reflection on “The Other Realms” VR Dance Project
Introduction:
As a student who actively participated in the creation of the “The Other Realms” project, which aimed to connect virtual reality (VR) with 2D animations through contemporary dance, I would like to critically reflect on my experience. This project presented numerous challenges and opportunities, allowing me to explore the intersection of creative media, dance, and technology. In this reflection, I will discuss the strengths and weaknesses of our project, the collaborative process, the integration of motion capture technology, and the overall learning outcomes.
Strengths and Weaknesses:
“The Other Realms” project offered a unique and innovative approach to contemporary dance by combining VR, 2D animations, and motion capture. The use of VR as the main stage for the dance performance provided an immersive and engaging experience for the audience, enabling them to feel as if they were part of the performance. The integration of 2D animations added a complementary visual element, allowing the audience to view the performance from different perspectives.
One of the strengths of our project was the choice of music. The selection of vaporwave music added an atmospheric and dreamy backdrop to the dance movements, enhancing the emotional impact of the choreography. The music provided a sense of continuity and flow to the performance, creating a unified experience. Another strength was the collaboration between the dancers and the animation team. The dancers’ input and understanding of the choreography were crucial in creating meaningful interactions between the virtual and animated worlds. The seamless transitions between the VR environment and the 2D animations added depth and dynamism to the performance.
However, there were also some weaknesses that we encountered during the project. One notable challenge was the limitations of the motion capture technology we initially used. The AI video recognition technique did not accurately capture the intricate dance movements, resulting in unnatural and stiff animations. This required us to pivot and explore alternative solutions, leading us to incorporate the Rokoko motion capture suit. Although this introduced efficiency and improved results, it would have been ideal to have used this technology from the beginning to avoid wasting time and effort. Another weakness was the initial struggle with rigging and animating the 3D character model. The process of manually adding static bones and binding them to the model proved to be time-consuming and complex. However, we eventually found a more efficient solution through Mixamo, which allowed us to automatically bind the skeletal system to the model using AI algorithms. This saved us valuable time and enabled us to focus on refining the animations.
Collaborative Process:
The collaborative process was a vital aspect of our project’s success. Working with a group of professional dancers from the London Contemporary Dance School provided valuable insights into choreography, movement, and the expressive potential of the human body. Their artistic input and expertise were crucial in creating choreographies that effectively utilized the capabilities of VR and animation. Communication and coordination were key factors in the collaborative process. Regular meetings, attending rehearsals, and discussing the choreography with the dancers and choreographer helped us align our creative visions. It was essential to understand the intended theme, mood, and emotional impact of the choreography to effectively translate it into the VR and animation elements.
Integrating Motion Capture Technology:
The integration of motion capture technology played a significant role in our project. The initial use of Rokoko’s AI video recognition technique proved to be inadequate for capturing complex dance movements. However, after transitioning to the Rokoko motion capture suit, the process became more efficient and accurate. The ability to capture the nuances of the dancers’ movements in real time greatly enhanced the authenticity and realism of the animations.
The use of motion capture technology also facilitated a more engaging and motivating experience for the dancers. Wearing the motion capture suit allowed them to see their movements