After over a year of research and development, Sensosis is now available to download from SideQuest. Here is a quick video teaser of the experience:
For now it is only available as a download for headset owners; however, I’m currently working with a local gallery to make the experience available to the public.
Apart from sharing the news that Sensosis is now live, I thought I’d use this post to share some insights from the final weeks of the project.
As you can see from my previous posts, I had a very hands-on experience throughout the project. I might have mentioned somewhere that a large proportion of my learning was learning the Unity engine (a separate post on that is coming). Unity was my daily bread and butter in the final weeks of the project. However, not everything went smoothly. The main issue I came across was with one of the materials: in Unity everything worked perfectly, but that wasn’t the case when viewed in the Oculus headset. Despite countless hours of searching online for a solution, as well as asking around, there was no easy way to fix it other than remaking the asset from scratch. Thankfully I had a great team of creatives working with me on fixing this issue, big shout out to Taran3D: https://taran3d.com/ and Ryan Garry: https://www.unlimitedmotion.ltd/ One takeaway from this is to always test your build in the Oculus headset, as it might look very different from what you see in Unity!
Once the issues were resolved and the final touches to the 3D environment were completed, the VR experience was ready to be shown to an audience. Due to Covid, the initial plan to display the piece in a gallery changed to online deployment. I chose the SideQuest platform because it is the easiest for indie developers to publish on and is widely used by headset owners. I won’t lie: as it was my first ever app deployment, it didn’t go without a few hiccups. However, SideQuest’s support is excellent and helped me resolve the issues, so I can definitely recommend the platform to anyone looking to deploy their VR apps and games.
With the app ready to be viewed, I’m working on getting it out there: submitting it to XR festivals, sorting out the gallery exhibition and spreading the word ;)
Another thing I’ve been considering from the start is bringing the whole experience into VRChat, so watch this space as you might get to meet the alien in real time ;)
Achieving high-fidelity motion capture is not easy, and getting to a perfect result requires clean-up. Having said that, I found there are ways to work around it. A perfect example is the VR performance of Tempest by Tender Claws. I was very intrigued by it, as it was advertised as immersive theatre with live interaction with an actor. The way the show got around the motion capture issue is that it never intended to use high-fidelity movement. Instead, the actors used the tracking built into the headset: the headset detects movement in space and hand movement, and the voice can be streamed directly through the headset’s microphone. The avatars were simple but worked well with the rest of the environment. The audience had even simpler, shadow-like avatars with hands, which allowed for interaction with the environment. The actor navigated the audience through the story and asked for participation at key points, where the audience either collected objects or enacted scenes with simple physical gestures. The bottom line is that this sort of interaction worked: it was a real-time performance without full motion capture, and sometimes the simplest solutions work best, which the show proved. I’ve heard that similarly simplified avatars are used to design performances in Mozilla Hubs and VRChat. I’m currently exploring these, so more to come on live events on VR social platforms.
One thing that I haven't mentioned so far is environment creation. This post is going to be longer than the others, but believe me, I tried to keep it simple… Whether you are creating a virtual reality experience or a game, it requires worlds and objects for the audience to explore and experience. The environments can be complex and expansive, or the total opposite.
It is really exciting to know that anything can be created, whether it is an environment as we know it, like a forest or a city, or an imaginary planet. However, as tempting as it can be to let the imagination loose, it is useful to learn a bit more about how 3D assets are created.
My first environment creation experience was with Tilt Brush, a program used within a headset to create 3D paintings and sculptures. There are also a few other programs working on a similar premise, like Microsoft Maquette or Gravity Sketch. The benefit of using these types of programs is that the creator wears a headset while painting, so they can experience the sketch in 3D. They can move around, see it from different perspectives and decide which angle they would like the audience to see the sketch from.
One thing to mention at this stage is that even though these tools are great for 3D asset creation, once assets are imported into the game engine they might not have the same look and feel as they did in the painting software. This is particularly apparent with Tilt Brush: brushes have distinctive colours and often animations, which might not look the same in the game engine. Also, when painting you set certain lighting, and the lighting will be different in the game engine. However, there is a way around it: it is possible to import the Tilt Brush SDK into Unity and adjust the settings, after which the majority of brushes keep the same look and feel. Something else to consider is creating a base sculpture and applying the textures (the colour and surface of the sculpture) separately. If you decide to go down this route, you might want to consider using Substance Painter. This software lets you choose from many existing textures, colours and materials, but you can also paint over the sculpture to decide on its look. You can also download free textures from 3D texture websites.
I should also mention that 3D asset creation can be done without a headset. There are many programs for creating 3D assets and meshes, among them SketchUp, Blender, Maya and Autodesk 3ds Max. It might take some time to upskill, as it is really a discipline in itself, but programs like Tilt Brush are relatively easy to use.
However, if the idea of 3D painting or using any sort of 3D modelling software scares you but you would still like to create your own world, don’t worry: there are many free and affordable 3D assets available online. These can be imported directly into the game engine from Poly, Sketchfab or TurboSquid. There are many assets available, from environment-building ones like furniture or trees, to spaceships and Mars craters. Before getting carried away with making crazy worlds, it is worthwhile to think about how the audience will explore them. Will they be static or able to walk around? Will they be able to interact with any of the objects? What will happen if they walk into an object? Deciding on what you want your audience to see, and how they will be able to explore the environment, will help you decide on the contents of the world you want to create. After all, there is no point in making an entire galaxy if the audience is inside a windowless spaceship. I would definitely advise playing around with free assets in the game engine to understand how meshes and objects are brought together; there are many YouTube videos explaining how to do this. I would also recommend learning about skyboxes, which are basically the sky that the audience will see; again, YouTube is a great resource for that. This should give you the basic ability to create big or small worlds!
Since the beginning of lockdown I’ve been worried about the motion capture of my performance. With no facilities open and people unwilling to travel, it’s been hard to figure out an alternative that I could use.
First, I started by testing home-made solutions using a Kinect. It’s not very hard to set up and it’s quite a cheap technology: a second-hand Kinect together with the appropriate cable comes to about £50. To make the Kinect work with a computer, the Kinect SDK has to be downloaded; once the Kinect is connected, the SDK’s Kinect Configuration tool has to be run to activate it, and when the Kinect is recognised by the computer its red light will come on. With the Kinect connected, we then need software that actually recognises we are capturing motion. The Kinect is equipped with a depth camera, which means it captures the surface of everything in front of it. That includes the entire room, so it needs to be directed at what it is supposed to be capturing. To do that I used Brekel’s software.
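To give a feel for why the Kinect has to be "directed" at the performer, here is a small sketch (not Brekel's actual code, just an illustration): a depth camera returns a grid of distances covering the whole room, and one simple way to isolate the performer is to keep only the samples inside a chosen capture volume. The distance values and band are made-up example numbers.

```python
# Illustrative sketch: a depth frame is a grid of distances in millimetres.
# We keep only samples inside an assumed capture band and blank the rest,
# so the background (walls, furniture) doesn't end up in the capture.

NEAR_MM = 800    # assumed closest useful distance to the camera
FAR_MM = 3500    # assumed farthest; the performer must stay inside this band

def crop_capture_volume(depth_frame, near=NEAR_MM, far=FAR_MM):
    """Zero out every depth sample outside the [near, far] band."""
    return [
        [d if near <= d <= far else 0 for d in row]
        for row in depth_frame
    ]

# Toy 2x4 frame: a wall at 4200 mm behind a performer at roughly 1500 mm.
frame = [
    [4200, 1500, 1520, 4200],
    [4200, 1480, 1510, 4200],
]
print(crop_capture_volume(frame))  # background becomes 0, performer survives
```

Real tools do far more than this (background subtraction, skeleton fitting), but the principle is the same: the camera sees everything, and the software decides what counts as the subject.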
For the Kinect to work well, the performer has to keep a certain distance from it and cannot move out of the camera’s field of view. More complicated movements also won’t be captured: actions should be simple and defined. Even though I kept the motion to a minimum, I found that the captured animation was a bit messy; one of the legs moved in a completely different way than intended. I made quite a few tests and came to the conclusion that when using a Kinect, post-processing has to be applied to fix the animation. However, the amount of post-processing can be so extensive that it is better to capture the motion with another solution, especially as the post-processing is done using Maya or MotionBuilder, which are quite costly.
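As a hedged sketch of what that post-processing clean-up involves (Maya and MotionBuilder offer far more, such as curve editing and proper filters), here is the simplest possible version: a moving average that damps single-frame spikes in one joint's captured positions. The numbers are invented example data.

```python
# Minimal clean-up sketch: smooth a noisy joint trajectory with a moving
# average so a single glitched frame is pulled back towards its neighbours.

def smooth(track, window=3):
    """Average each sample with its neighbours (clamped at the ends)."""
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        out.append(sum(track[lo:hi]) / (hi - lo))
    return out

# A leg joint's x-position with one glitched frame at index 2.
raw = [0.0, 0.1, 0.9, 0.3, 0.4]
print(smooth(raw))  # the 0.9 spike is noticeably reduced
```

In practice a glitch like my runaway leg needs much more than averaging (re-keying or deleting whole sections), which is exactly why the clean-up can outweigh the savings of a cheap capture setup.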
I decided to go back to the initial tests I did with inertial mocap suits: Neuron and Rokoko. With lockdown easing, I started to think about how I could get my hands on either of them. Luckily, some time ago I met Ryan Garry from Unlimited Motion. He owns a Rokoko suit and also a Leap Motion tracker for capturing hands. He also has experience with facial motion capture, which I’ll mention in a separate entry. I managed to book some time with Ryan and we decided to capture the performance at my home. That’s the beauty of inertial mocap solutions: they can be used anywhere, as long as there isn’t much interference, which comes from metal. We spent a few hours capturing the performance and discussing post-processing and how the hands, body and face will be put together. There has also been a bit of a learning curve for me in terms of performing in a mocap suit on camera, but that’s an essay on its own. Now that the performance is captured, the different animations have to be put together: body, face and hands. This will take some time, but I’m looking forward to seeing the first draft ;)
There are quite a few options for creating avatars: some do it in Blender, those with a Maya licence use Maya, and there are also MakeHuman and Fuse, among others. Adobe has currently been offering Creative Cloud for free, so I decided to use Fuse, which is their product. Another factor is Fuse’s integration with Mixamo, a website that rigs the avatar. Rigging attaches a skeleton to the character, which can then be animated; without a skeleton, an animation has nothing to reference and simply wouldn’t know what to move. Not all characters have standard humanoid skeletons, for example monsters or animals. Non-humanoid avatars require bespoke rigging that outlines the structure of their skeleton.
To create a rig, it is necessary to connect the bones of the skeleton, mirroring all the limbs. The skeleton has to mirror the structure of the body: for example, if a monster has six legs, then the skeleton should too. Also, all the parts of the body that we wish to animate have to have bones attached to them; for example, if we wish to move fingers, then each finger needs all of its bones. To mirror a human-like hand, that would be three bones per finger and five bones in the palm. As you can imagine, creating a rig as complex as a human skeleton, which has 206 bones, can be quite time-consuming. Hence there are solutions that simplify this process. Mixamo is a website where you can upload an avatar and a simple rig will be added to it. For experimenting with animation and learning about mocap, this will be enough.
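Under the hood a rig is just a parent-child hierarchy of named bones, and the hand counts above add up quickly. Here is a small sketch building one hand's hierarchy from those numbers; all the bone names are made up for illustration, not any standard naming scheme.

```python
# Sketch: a rig as a dict mapping each bone to its parent. Using the counts
# from the text (three bones per finger, five palm bones), one hand alone
# already needs 20 bones before you reach the arm.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
BONES_PER_FINGER = 3   # three bones per finger, as above
# one palm bone per finger = five palm bones, as above

hand_rig = {}
for finger in FINGERS:
    palm_bone = f"{finger}_palm"
    hand_rig[palm_bone] = "wrist"          # palm bones hang off the wrist
    for i in range(BONES_PER_FINGER):
        bone = f"{finger}_{i + 1}"
        # first finger bone attaches to the palm, the rest chain together
        hand_rig[bone] = palm_bone if i == 0 else f"{finger}_{i}"

print(len(hand_rig))  # 20
```

Multiply that by two hands, then add the spine, limbs and head, and it becomes obvious why an auto-rigger like Mixamo saves so much time.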
Once a character is uploaded to Mixamo, the rig has to be attached. Markers for the key points of the skeleton, such as the legs, arms and head, have to be mapped by dragging them onto the correct spots on the avatar. Once this is done, the character can be downloaded and it is ready to use.
Using Fuse and Mixamo is probably the easiest workflow available, and probably the most popular amongst indie game developers. However, there are YouTube tutorials on character rigging in Blender or Maya; these solutions might be better for less standard skeletons, but they are likely more time-consuming. One thing to mention is that this kind of rigging doesn’t cover facial movements; it only covers the body, including the head. Working out how to animate faces is my next task, but before that I’ll have a play around with animations in Unity, making my new avatar move :)
So by now I’ve tried a few options for motion capture: optical ‘high-end’ solutions, and the Rokoko and Neuron suits. I’ve also tried homemade mocap solutions using Kinect and Blender. Initially, I was interested in how good each solution is and how they compare on cost. From the technologies I’ve tried, Neuron and Rokoko seem to be the winners for now.
Since gaining a better understanding of the technology used for motion capture, I have moved on to learning how to apply motion capture data to an avatar. When the data is initially captured, it pretty much looks like an animated stickman.
This animation then has to be applied to the avatar (a character in a story or game), and this can be quite difficult to accomplish. Firstly, the avatar has its own skeleton structure, which is a simplified humanoid skeleton. To apply the motion capture, the skeletons of the stickman and the avatar have to be matched. From the research I have done so far, I can see there are many different ways of doing this, depending on what software you decide to use. A lot of the time, motion capture will first be imported into Maya, a software package specialising in animation, where it will be matched to an avatar. As good as that sounds, the price of the software puts it out of reach for indie developers. Hence, many other tools and solutions have been developed to achieve the same purpose. It can also be accomplished in Unity, a game engine that enables the creation of 3D environments and the design of interactions with objects and characters. For now, I’ve managed to match a trial avatar that I downloaded with Neuron’s motion capture in Unity. However, it was quite tricky, since the bone structures differed between the avatar and the motion capture, resulting in quite a few glitching movements along the way. Consequently, I’ve realised that before I invest more time in creating motion capture, it will be better to finalise the avatar I want to use. This is simply to avoid retargeting the motion capture data from the stickman onto the avatar skeleton each time. As with anything, there are many different ways to go about creating an avatar, so that’s a topic for my next post ;)
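A minimal sketch of what "matching the skeletons" means in practice: the mocap stickman and the avatar usually name their bones differently, so a bone map translates each captured bone before its data is applied. This is only a conceptual illustration, every bone name below is invented, and real retargeting also converts rotations between differing rest poses, which is where my glitches came from.

```python
# Sketch: translate one frame of mocap data (bone name -> rotation tuple)
# from the stickman's bone names to the avatar's bone names.

BONE_MAP = {
    "Hips": "pelvis",
    "LeftUpLeg": "thigh_L",
    "RightUpLeg": "thigh_R",
    "Spine": "spine_01",
}

def retarget(frame, bone_map=BONE_MAP):
    """Rename each captured bone to the avatar's bone, dropping any bone
    the avatar doesn't have -- exactly where unmapped limbs start to glitch."""
    return {
        bone_map[src]: rotation
        for src, rotation in frame.items()
        if src in bone_map
    }

mocap_frame = {"Hips": (0, 15, 0), "LeftUpLeg": (30, 0, 0), "Neck": (0, 5, 0)}
print(retarget(mocap_frame))  # "Neck" is silently dropped: no matching bone
```

Unity's Humanoid avatar system does essentially this mapping for you, which is why finalising the avatar first saves redoing the match for every capture.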
Recently I tested the inertial motion capture solutions Rokoko and Neuron. In all honesty, I didn’t really see any difference between the two in terms of the quality of the motion captured. If anything, I was quite impressed by how well they performed; I would say they get to maybe 80% of a high-end optical studio’s performance. Having said that, there are some serious drawbacks to consider. Firstly, set-up time: Rokoko comes in the form of a suit, so you just put it on like a jumpsuit, contrary to Neuron, which is made from individual sensors. The sensors sit on separate straps that you have to get into the right position on your body. Then the suit has to be calibrated so the software can recognise where the trackers are: a series of poses has to be performed in a particular sequence for the program to recognise the skeleton. This calibration has to be repeated after some time, because the trackers gradually lose their positioning and some limbs might start to jitter; re-calibration fixes this. No matter which system is used, even an optical one, re-calibration will be required at some point during a full day of work.
The need for re-calibration is related to interference: each system, whether optical or inertial, can be affected by different types of interference. One thing that causes sensor-based systems to lose tracking is metal, so you have to think carefully about where you are doing the capture, or you might have to re-calibrate far too often. Things to watch out for are any sort of metal rigs on the ceiling, chairs, cutlery, literally anything with metal. Due to the ease of calibrating the Rokoko suit, it might seem like a better option than Neuron; however, there is one thing that currently makes Neuron far superior to Rokoko: hand-tracking functionality. You wouldn’t think that hands could make such a difference, but they do. I was surprised by how much hand gestures can add to expression.
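As a rough sketch of how you might spot that drift before it ruins a take: watch the frame-to-frame movement of a joint that should be still (say, a planted foot) and flag the suit for re-calibration once the jitter exceeds a threshold. The threshold and position values are arbitrary example numbers, not anything from Rokoko's or Neuron's software.

```python
# Sketch: detect when a supposedly still joint starts jittering, which is
# the symptom described above that means it's time to re-calibrate.

JITTER_THRESHOLD = 0.02  # metres per frame; arbitrary example value

def needs_recalibration(positions, threshold=JITTER_THRESHOLD):
    """True if any consecutive samples of a still joint jump further than
    the threshold, i.e. the sensor appears to be drifting."""
    steps = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return any(step > threshold for step in steps)

steady_foot = [1.000, 1.001, 1.000, 1.002]     # normal sensor noise
drifting_foot = [1.000, 1.001, 1.060, 1.110]   # a limb starting to jitter

print(needs_recalibration(steady_foot))    # False
print(needs_recalibration(drifting_foot))  # True
```

The real systems handle this internally, but the idea of comparing a joint's reported motion against what it should be doing is the same one behind the calibration poses.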
At the moment there doesn’t seem to be a perfect low-budget solution that captures all body parts. I’ll probably stick with Neuron due to its hand functionality; however, I have another test to do, streaming mocap into Unity, so let’s see how it performs then ;)
I've been extremely busy since I got the green light to go ahead with the project. So far, I've spent the majority of my time upskilling for the project and, truth be told, I know that this year I will continue doing the same.
So, what is it that I've been learning? First on my list was motion capture. If you are a newbie to animation, here are a few things I've learned so far, so keep on reading.
I've been attending workshops and training sessions to learn about different solutions, starting from high-end, expensive optical systems (you know, the ones where actors wear Lycra suits with dots around their bodies), to the cheaper options that use mechanical or inertial sensor systems. The substantial difference between the differently priced systems is that the former requires a studio with multiple cameras, while the latter only needs the sensors worn on the body. Now, you might wonder why bother with the studio, it's expensive, right? Truth be told, having seen both solutions in action, the optical solution is currently far superior. I was amazed at how detailed the captured movement was: every little movement we might not even be aware of, like shifts in weight or small twitches, is captured. That is contrary to the sensor-based systems, which record skeleton movement without that level of detail. Having said that, it all depends on your budget and how you intend to use the technology. If you just want to create simple animations, there is no need for a high-end solution, unless you have some spare cash to burn, and you'll need a lot of it. Also, the number of alternative, cheaper solutions keeps increasing, and it's hard to even call them all out. For now, I've not made up my mind; however, in the next couple of weeks I'll be testing the Rokoko suit and Notch sensors, so I should be able to make a final decision.