Monthly Archives: February 2014

London Oculus Rift/VR Developer Meetup (Feb 2014)

Last week Virtual Architectures headed over to East London to check out the goodies on display at the most recent London Oculus Rift / VR Developer Meetup hosted by creative production company Inition. It was a great evening with plenty of new technology on display, so we thought we’d share.

First off was the European debut of the Avegant Glyph headset which is currently doing massively well on Kickstarter:

During the annual CES trade show held in Las Vegas this January the Glyph was being touted by some as the main rival to the Oculus Rift on the basis of the high resolution achieved by its novel display technology. After Avegant CTO Allan Evans gave us a demonstration it was clear that the display really is quite special.

The clarity of the image was really impressive compared with that of the current Oculus Rift development kit (DK1). However, unlike the Rift, which is designed to immerse the user by expanding the displayed image to fill their full field of vision, the Glyph displays an image which appears to float a few centimetres in front of the user, leaving the periphery around the visor free. Rather than focusing on providing an immersive VR experience, the Glyph is much more targeted at the high-end consumer seeking a no-fuss, plug-and-play media device. In this regard it is really promising.

Unfortunately we didn’t get the opportunity to experience any VR applications with the device at the meetup, although Allan assured us it would be VR ready upon release. No doubt it will be, but for the time being the chances are that those applications will be developed using the Oculus Rift or something like it. In the meantime we’re crossing our fingers for Oculus VR to announce a new high resolution development kit soon.

The second item to catch our attention at the meetup was an Oculus Rift integration with SoftKinetic’s ‘DepthSense’ gestural camera. Ordinarily the camera would be placed on top of a computer monitor or laptop screen, enabling the user to interact with the machine using bodily gestures such as a wave of the hand, much like the larger Xbox Kinect.

In the case of the DepthSense camera it is small enough to mount on the Oculus Rift and capture the position of the user’s hands so that they can be projected into the virtual scene, as this test video by developer Gilles Pinault demonstrates:

The demo we tried was slightly different as it used a gridded referencing system to help users place virtual building blocks more accurately. It was fascinating to interact with a virtual scene in such an intuitive way for the first time. Usually our experience of VR is oddly disembodied and awkwardly mediated by a mouse, keyboard or hand controller. It was a real pleasure to simply pick things up, put them down and feel part of the scene.
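For the curious, the gridded referencing presumably amounts to snapping: wherever the tracked hand releases a block, the block’s position is rounded to the nearest grid cell. Here is a minimal sketch of that idea; it is our own guess at the approach rather than SoftKinetic’s code, and the 10 cm cell size is purely illustrative:

```python
def snap_to_grid(position, cell_size=0.1):
    """Round a 3D position (in metres) to the centre of the nearest grid cell."""
    return tuple(round(coord / cell_size) * cell_size for coord in position)

# A block released at an arbitrary hand position lands neatly on the grid.
print(snap_to_grid((0.43, 1.27, -0.08)))  # roughly (0.4, 1.3, -0.1)
```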

That’s not to say the experience was perfect though. The main limitation was that the camera’s field of view did not match that of the Oculus Rift display. As a result the user’s hands had to be held up directly in front of them, or else they would disappear behind an invisible frame a quarter of the way in from the edge of the Oculus display. When that happened the system would also lose the virtual block the user was holding.
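The mismatch is easy to picture as an angular test: a hand can still be well inside the Rift’s wide display while already outside the camera’s narrower cone, at which point tracking is lost. A rough sketch of that check, using made-up field-of-view figures rather than the real DepthSense or DK1 values:

```python
import math

# Illustrative half-angles only; the real camera and display values differ.
CAMERA_HALF_FOV_DEG = 30.0
DISPLAY_HALF_FOV_DEG = 45.0

def in_fov(x, y, z, half_fov_deg):
    """Is a point (in the sensor's own frame, z pointing forward) inside this cone?"""
    angle = math.degrees(math.atan2(math.hypot(x, y), z))
    return angle <= half_fov_deg

hand = (0.25, 0.0, 0.4)  # a hand held out towards the edge of view, in metres
print(in_fov(*hand, DISPLAY_HALF_FOV_DEG))  # True: still visible on the display
print(in_fov(*hand, CAMERA_HALF_FOV_DEG))   # False: the camera has lost it
```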

Despite the limitations of this particular demo it points the way to a range of applications in manufacture, construction and other specialist fields. We’re really looking forward to seeing what’s next.

Many thanks to the folks at Inition for having us. Looking forward to the next!


Panopticon Project Phase 1 – Technical Testing & Teaser

Having received the Oculus Rift development kit last month, Virtual Architectures has spent the time exploring existing Oculus-ready games and demos to get a sense of what works in VR and what doesn’t. In the meantime we’ve also completed the crucial first phase of technical testing by creating a scene and exploring it with the Oculus Rift headset, as you can see in the teaser video above. We decided not to take you on the tour inside just yet.

The main aim of testing at this stage is to establish that there won’t be any unexpected problems in the proposed workflow for developing the final Panopticon Project experience:

  1. 3D modelling in SketchUp
  2. Export the model to FBX and import it into Unity
  3. Assemble the scene and integrate the Oculus Rift controller
  4. Optimise the scene
  5. Build the final Unity application

We were really pleased with the result.

The Panopticon model shown in the video was created using the free modelling program SketchUp Make. After a few teething problems, importing the model into Unity worked fine. To speed up the test we reused the terrain, skybox and background sounds from the existing Tuscany demo provided by Oculus VR and Unity. Setting up the Oculus Rift controller was surprisingly easy.

If you are wondering why there are two images side by side in the video, this is because the Oculus Rift headset displays a slightly different image to each eye. The two images of the virtual environment are offset to match the estimated ‘interpupillary distance’ between the user’s eyes. This convinces the brain that each eye is seeing a single unified scene from its own perspective and simulates our ordinary experience of three-dimensional depth. Each image is also rendered with a ‘barrel distortion’ that counteracts the distortion introduced by the lenses in the headset, which are designed to expand the image to cover the user’s full field of view and improve the sense of visual immersion in the virtual scene.
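To make those two ideas concrete, here is a minimal sketch of offsetting a camera for each eye by half the interpupillary distance and applying a radial ‘barrel’ warp. It is an illustration of the geometry only, not the Oculus SDK; the IPD value and distortion coefficients are assumptions chosen for the example.

```python
IPD = 0.064  # metres; a commonly quoted average, used here purely as an example

def eye_positions(head_position):
    """Offset a camera for each eye by half the IPD along the head's x axis."""
    x, y, z = head_position
    half = IPD / 2.0
    return (x - half, y, z), (x + half, y, z)

def barrel_warp(x, y, k1=0.22, k2=0.24):
    """Radially warp a normalised image coordinate (origin at the lens centre).

    The scale factor grows with distance from the centre, the usual polynomial
    form for this kind of lens-correction warp; k1 and k2 are illustrative
    values, not the DK1's calibrated coefficients.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

left, right = eye_positions((0.0, 1.7, 0.0))
print(left, right)            # two viewpoints roughly 6.4 cm apart
print(barrel_warp(0.5, 0.0))  # points further from the centre move more
```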

In the process of taking apart the Tuscany demo to set up the scene we picked up lots of tips for optimisation in Unity. Technical areas requiring further investigation are as follows:

  • Lightmapping and Dynamic Shadows – Determine how shadows are created, with various trade-offs between visual quality and performance
  • Shaders – Control how light interacts with model surfaces to give a realistic feel
  • Occlusion Culling – Skips rendering geometry that is hidden from the user’s viewpoint behind other objects, reducing processing and improving performance (a toy sketch of the idea follows this list)
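As a hypothetical illustration of the occlusion culling idea (Unity bakes this information into precomputed data rather than testing it per frame as we do here), the toy check below skips an object whenever a single spherical ‘occluder’ blocks the straight line from the camera to it:

```python
import math

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a):   return math.sqrt(_dot(a, a))

def occluded(camera, target, occluder_centre, occluder_radius):
    """Is the straight line from the camera to the target blocked by a sphere?"""
    to_target = _sub(target, camera)
    dist = _norm(to_target)
    direction = tuple(c / dist for c in to_target)
    t = _dot(_sub(occluder_centre, camera), direction)  # distance along the ray
    if t <= 0 or t >= dist:
        return False  # the sphere lies behind the camera or beyond the target
    closest = tuple(c + t * d for c, d in zip(camera, direction))
    return _norm(_sub(occluder_centre, closest)) < occluder_radius

camera = (0.0, 1.7, 0.0)
wall = ((0.0, 1.7, 5.0), 2.0)         # a large sphere standing in for a wall
cell_behind_wall = (0.0, 1.7, 10.0)
cell_to_the_side = (6.0, 1.7, 10.0)
print(occluded(camera, cell_behind_wall, *wall))  # True: skip rendering it
print(occluded(camera, cell_to_the_side, *wall))  # False: render it
```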

At this stage work is ready to commence on building the final Panopticon model and creating a suitable landscape environment to provide context. This will be informed by research undertaken using Jeremy Bentham’s own Panopticon Writings along with Janet Semple’s book Bentham’s Prison: A Study of the Panopticon Penitentiary.

On to Phase 2!