At VMware we’re working on technology to support Spatial Computing in the enterprise. If you’re not familiar with Spatial Computing, please check out my blog here. Spatial Computing is the convergence of emerging technologies such as Augmented Reality (AR), Virtual Reality (VR), computer vision, depth sensing and more.
My role, as Director of AR/VR at VMware, is to lead our product strategy around spatial computing with a focus on research and development of these technologies. In the build-up to VMworld 2019 we wanted to put together a few demos to showcase what our team had been working on (Project VXR). Given the short timelines, it was all hands to the pump and with a (short!) background in software engineering I was happy to get hands-on and contribute. It’s not often I get to stretch my technical legs, so this was certainly a fun and challenging project for me.
In this blog I’m going to cover our experiences building immersive training scenarios for mobile or standalone virtual reality headsets. Specifically, I’m going to dive into a scene simplification technology from Google called Seurat. Unfortunately, Seurat is no longer officially supported by Google, but it is an open source project that anyone can use and update. The first part of this blog introduced immersive training and scene simplification; this second part is a technical how-to on Google Seurat.
Using Google Seurat in your immersive training scenario
For this blog I’m going to show two examples using two game engines – Unity and Unreal. Unity from Unity Technologies is one of the leading platforms for developing AR and VR applications. Unreal Engine from Epic Games is another leading engine to use for AR/VR application development. At VMware we’re working to accelerate your development of enterprise applications using Unity and Unreal.
Example immersive environment created with Google Seurat with full parallax when a user moves position
Before we start with Seurat, you should be aware that Google has not updated the Seurat code since May 4th, 2018, and the project is likely no longer maintained. However, with this guide you will be able to use Google Seurat effectively and, where necessary, update it to support the latest versions of Unity and Unreal.
Google Seurat can be used with a variety of render/game engines including Unity and Unreal as well as common modelling software such as Autodesk Maya. Typically, Seurat “captures” are done in a render or game engine while Seurat output is imported into a game engine such as Unity or Unreal.
For this example, I’m going to use both Unity and Unreal to capture environments with Seurat and then show you how to import those captures back into Unity and Unreal. You need basic Unity and Unreal experience to follow along.
Google Seurat process
There is a three-part process to using Google Seurat – Capture, Process, Import.
To start, you will need a source model or scene to simplify. This might be a building CAD model imported into Unity or Unreal, an existing high-poly scene created in either engine, or a prefabricated building/scene available via a 3D model marketplace.
Firstly, Seurat is used to capture images and depth information from a scene within a defined “headbox”. This is usually done using a Seurat plugin for your chosen scene renderer (Unity, Unreal or Autodesk Maya). The headbox represents the total area in which the user may move their head/body in VR; this is usually a 1-4 m² area extending from the floor to around 2 m in height.
Secondly, those images and depth data are processed by Seurat to generate a new 3D model of the environment with associated textures. Seurat provides two applications for processing the images and outputting the 3D model.
Finally, there is a process to import the Seurat model and texture into Unity (or Unreal) and render it using custom Seurat shaders.
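To make the processing step concrete, here is a sketch of the pipeline invocation. The executable and flag names below follow the Seurat GitHub README for the prebuilt Windows binaries; the manifest and output paths are hypothetical placeholders for your own capture:

```shell
# Process a Seurat capture into a simplified mesh and texture.
# manifest.json is generated by the capture plugin in Unity/Unreal;
# the paths below are placeholders for your own project layout.
seurat-pipeline-msvc2017-x64.exe \
  -input_path=captures/room01/manifest.json \
  -output_path=output/room01
```

The pipeline writes the simplified geometry and texture alongside the output path; those files are what you import back into Unity or Unreal in the steps that follow.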
If you want to know how Google Seurat works under the hood, there is a wonderful research paper by the authors here.
Getting Started with Google Seurat
Before we start, I want to highlight some great resources online for using Google Seurat, certainly I wouldn’t have been able to use Seurat without them.
Let’s start with the Google Seurat GitHub which can be found here – https://github.com/googlevr/seurat
The GitHub pages offer some information on how to use Seurat, but they are incomplete and can be confusing (which is what prompted this blog!). The main Seurat GitHub project is focused on processing Seurat captures and includes source code for two applications:
- Butterfly – a viewer for Seurat captures.
- Pipeline – generates the 3D model(s) and textures that can be imported into your game engine.
In order to process Seurat output, you will need to either compile these applications from source code (out of scope for this blog) or, more conveniently, download the binaries from here. However, before you can do any processing, you will need to create some Seurat captures. To do that (and to render the Seurat output) in Unity or Unreal, you will need the Seurat plugins for those engines. These plugins can be found here:
For Unity, the Seurat GitHub page is here – https://github.com/googlevr/seurat-unity-plugin
For Unreal, the Seurat GitHub page is here – https://github.com/googlevr/seurat-unreal-plugin
WARNING! For Unreal users, the current Seurat Unreal plugin does not work with Unreal Engine 4.16.3 and above. Fortunately, as part of this blog, I will detail how you can build the Seurat plugin for Unreal so that it’s compatible with the latest Unreal version.
If you are looking for guidance on Google Seurat with Unreal Engine, skip down to the Unreal section.
Let’s get started using Google Seurat with Unity!
If you want a good video tutorial on using Google Seurat in Unity, this video provides a step-by-step guide. There are some updates to this procedure that I will include in this blog to help you get the best output from Seurat.
I am going to assume that you have Unity installed (I am using Unity 2019.1.8f1) and that you already have a project set up in Unity with an environment or CAD model that you want to capture. Let’s get started!
See more – https://octo.vmware.com/immersive-training-environments-virtual-reality-using-google-seurat-part-2/