Google's detailed 3D flyover of Hong Kong in Google Earth highlights a developing trend, one that will accelerate as more reality capture takes place and we begin to fill up a digital world that mirrors the real one. See Google's post, A bird's eye view of Hong Kong. The 3D virtualization of the real world is going to make our collective understanding of the world simultaneously smaller and larger. That comes with some really cool upsides, and with one extraordinary cause for concern worth reflecting on.
I can't wait to watch football through an immersive 360-degree VR headset. NextVR of Laguna Beach, California, says that it has developed a custom "lens-to-lens system for capturing and delivering live and on-demand virtual reality experiences in true broadcast quality." How great will it be to watch a play unfold while virtually standing in the pocket, seeing everything the quarterback sees? If you are defense-oriented, you will want to stand where the linebackers line up and see the game from their side of the ball.
Live-action streaming VR cameras will be set up in a calibrated array around the stadium. Among the many technical challenges that must be overcome to make a VR broadcast happen, including stitching together the many cameras' images in real time, there is one challenge I have not seen addressed yet: the problem known as occlusion.
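To make occlusion concrete, here is a toy sketch of my own (not anything NextVR has published): a spot on the field that one camera sees cleanly can be blocked by a player from another camera's vantage point, so the stitching pipeline has to work out, viewpoint by viewpoint, which cameras actually have a clear line of sight.

```python
import numpy as np

def visible(camera, point, occluders):
    """Return True if the line of sight from camera to point misses every
    occluding sphere (a crude stand-in for players and equipment)."""
    d = point - camera
    length = np.linalg.norm(d)
    d = d / length
    for center, radius in occluders:
        # Project the occluder center onto the camera-to-point ray.
        t = np.dot(center - camera, d)
        if 0 < t < length:
            closest = camera + t * d
            if np.linalg.norm(center - closest) < radius:
                return False  # the sphere blocks the view
    return True

# Two cameras aimed at the same spot on the field; a "player"
# (a sphere of radius 0.5 m) stands between it and camera A.
target = np.array([0.0, 0.0, 0.0])
cam_a = np.array([10.0, 0.0, 2.0])
cam_b = np.array([0.0, 10.0, 2.0])
player = (np.array([5.0, 0.0, 1.0]), 0.5)

print(visible(cam_a, target, [player]))  # False: the player occludes camera A
print(visible(cam_b, target, [player]))  # True: camera B still has the shot
```

Now scale that check up to dozens of cameras and every moving player on the field, at broadcast frame rates, and you can see why this is the challenge nobody seems to be talking about yet.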
I'm always looking for ways to get things done more easily. I turned that desire onto a project that needs a photorealistic 3D environment. I figured that, with all of the scanners and high-resolution cameras out there, there must be a straightforward way to pull a real environment (think a room, a set of buildings, or a beautiful mountain range) into a 3D game engine such as Unreal or Unity. Turns out I was wrong.
I looked at over 30 separate hardware-based approaches [see my recent post entitled Putting the Reality in Virtual Reality], and none of them would do exactly what I wanted. In the interest of moving the discussion forward on this topic (how can you pull reality into a 3D game engine?), I am offering my list of the elements of an ideal reality capture system. I welcome your comments.
The Ideal Reality Capture System (according to me)
There are at least six major types of reality capture system out there now: 2D panoramic, 2D spherical, photogrammetry, 3D stereoscopic, light field array, and LIDAR mesh with an HDR photo overlay.
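To give a feel for what the last of those pipelines involves, here is a minimal sketch of turning a raw LIDAR point cloud into a triangle mesh using the open-source Open3D library. The file names are placeholders, and a real pipeline would also need scan registration and the HDR photo projection step itself.

```python
import open3d as o3d

# Load a raw LIDAR scan (placeholder file name).
pcd = o3d.io.read_point_cloud("scan.ply")

# Poisson surface reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Turn the point cloud into a triangle mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("scan_mesh.obj", mesh)
```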
Questions and observations I have...
So my friend Mike and I were talking about the best way to bring the real world into 3D game engines like Unity and Unreal. Mike has been a game developer for a long time and founded GAME-U, a terrific after-school program designed to teach kids how to design and build their own video games. Wanting to know more, I researched the available options, and it turns out there are a whole lot of choices. I have pulled together a list of those options. They span the range from simple fish-eye camera lenses to a rig packed with 14 RED cameras and a LIDAR sensor.
From my perspective, they all come with some form of compromise. Some are 2D only, others are stereoscopic 3D, and few deliver what I would call real 3D. Even the ones that do either demand serious post-processing or produce geometry so unwieldy that game developers conclude it may still be easier to draw the environment by hand, the way they always have. This feels like a category of hardware/software systems on the verge of true greatness and utility in light of the impending VR tsunami, but it is not quite there yet.
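To show what that post-processing compromise looks like in practice, here is a hedged sketch (again using Open3D, with hypothetical file names) of the cleanup and decimation pass a scanned mesh typically needs before a game engine like Unity or Unreal will handle it comfortably:

```python
import open3d as o3d

# A raw scan mesh can run to tens of millions of triangles.
mesh = o3d.io.read_triangle_mesh("raw_scan.obj")

# Strip the defects scanners commonly leave behind.
mesh.remove_degenerate_triangles()
mesh.remove_duplicated_vertices()
mesh.remove_non_manifold_edges()

# Decimate toward a game-engine-friendly budget (~100k triangles here).
lod = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
lod.compute_vertex_normals()
o3d.io.write_triangle_mesh("scan_lod.obj", lod)
```

Even after a pass like this, UV unwrapping, texture baking, and collision geometry still stand between the scan and a playable level; none of it is automatic today.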
In a future post I will offer my views on the elements of the ideal reality capture system. For now, please enjoy the following (pretty close to) comprehensive list of reality capture systems: