The kit is purely aimed at enterprise use cases.
Last month TPCAST, a wireless solutions provider for virtual reality (VR) headsets, began shipping its TPCAST Air for Oculus Go business solution in Europe and Asia. Today, the company expanded those territories to include North America.
TPCAST Air is the company’s second-generation wireless solution, originally launched for the Chinese market in February. The technology is SteamVR-based, providing an ultra-low latency wireless VR experience for 3-DOF applications on the headset. Essentially, it allows customers to easily stream content from their PC to a standalone headset.
Designed for industries such as architecture, engineering, construction, interior design, education and gym spinning, TPCAST Air for Enterprise can in fact support a range of 3-DOF and 6-DOF standalone VR headsets. It supports multi-user applications, so in time location-based entertainment (LBE) venues such as VR arcades could use the technology to stream content rather than having users wear bulky backpack PCs.
As a single-user system, TPCAST Air for Oculus Go will be sold for $499 USD (including the headset), available to order now at www.tpcastvr.com and from US VR solution providers.
TPCAST has also confirmed Air will be coming to Oculus Quest. Able to support the device’s Oculus Touch controllers, TPCAST Air for Oculus Quest is scheduled to arrive in about eight weeks (so presumably the start of August). No price has been revealed for this version.
As companies like Oculus and HTC begin steering users towards standalone devices, TPCAST has expanded its wireless offerings to keep up. You can still purchase the original TPCAST Consumer Editions for Oculus Rift and HTC Vive. VRFocus will continue its coverage of TPCAST, reporting back with further developments as they’re announced.
As part of a funding programme run by the Ministry of Science, Research and the Arts of Baden-Württemberg, innovative teaching and learning scenarios, as well as new approaches to using Virtual Reality (VR) and/or Augmented Reality (AR) in university teaching, are to be researched and made usable in an application-oriented way. Applications can be submitted until 31 July 2019.
In recent years, great technical progress has been made in the field of Virtual Reality (VR) and Augmented Reality (AR). This has helped these technologies evolve from “toy” to “tool” and find increasing use. That is no surprise: no other digital medium currently offers the opportunity to engage with a subject as directly and intensively, with the whole body so to speak, as VR/AR does. In particular, the technologies are credited with high potential for improving teaching and learning and for realising entirely new teaching and learning scenarios.
The overall concept for promoting digitalisation in university teaching includes the four-part package of measures “Teaching4Future digital@bw”. With the fourth component, “Teaching4Future with virtual elements digital@bw” (T4F-virtual), projects addressing the potential of Virtual Reality (VR) and Augmented Reality (AR) for education are now to be funded. A total of 1.6 million euros is available for a project duration of up to 36 months (a maximum of 400,000 euros per project). The application deadline is 31 July 2019.
More information is available on the website of the Ministry of Science, Research and the Arts Baden-Württemberg.
It’s easy to tell why so many VR companies are focused on enterprise solutions.
May ’19 saw the global Vive X accelerator hold a selection of demo days to help promote a variety of startup companies involved in the fourth batch selection. Days were held in Beijing, Tokyo and London (the one in San Francisco was this week), and VRFocus took a trip across the UK capital to see what was on offer.
One of the main reasons VRFocus also wanted to attend was to get some more hands-on time with the HTC Vive Pro Eye – the new flagship device which has just launched – as well as the HTC Vive Focus Plus, the standalone headset which is only going to be made available to enterprise customers – meaning it’s a rare chance to test the sucker out.
There were four companies showcasing their tech with Vive headsets: Kainos, Immersive Factory, Vobling and ZeroLight. Kainos is a British digital solutions company that was using the Vive Pro Eye to demonstrate an AI driving tool that could analyse and collect insights into driver behaviour, essentially a more advanced hazard awareness test. This certainly proved to be one of the more interesting use cases for virtual reality (VR) eye-tracking at the event, as the system could tell with incredible accuracy where a driver was looking at all times, and how quickly and when they noticed a road hazard.
The simulator didn’t require any other input from the user – you didn’t need to actually drive the car, for example; it’s not Need for Speed – all that was required was awareness of the surroundings. This also meant the system logged one of the fundamental faults of most drivers: not looking at mirrors. It’s this type of VR use case that could introduce many more people to the technology, as it not only provides a better environment for hazard perception training but also offers accurate feedback.
Immersive Factory was the only company displaying the HTC Vive Focus Plus for its training software. Reasonably comfy, it’s a far bulkier piece of hardware than Oculus Quest, and it can’t match the consumer headset’s tracking capabilities (only two front-facing cameras!). Screen quality was good (as far as we could tell) from the one short demo, but highly noticeable were the unergonomic 6DoF controllers, which are well below par when compared to rivals.
As for Immersive Factory’s demo, it was a neat little simulation to teach correct health and safety procedures when operating a cherry picker. The goal was to change a bulb, demonstrating how not following safety procedures can lead to accidents while working at heights. Needless to say, VRFocus managed to get to the required height by operating the picker’s levers but forgot to attach a safety harness. So when leaning towards the bulb, the obvious happened: VRFocus went tumbling to the concrete floor.
Vobling was another company in the training realm, showcasing a VR simulator the firm had built for Scandinavian train operator SJ. This combined both the eye-tracking and controllers to help close a door that was stuck. Not a simple process (there was no giving it a boot), the software provided a highly detailed environment where certain locations had to be inspected and a procedure followed to release the door properly.
Testing these sorts of simulators out certainly helps to demonstrate how useful VR really can be for the workplace (it doesn’t solely need to be about zombie headshots), particularly given the visual detail the HTC Vive Pro Eye can offer. And it now means VRFocus can unstick an SJ train door when travelling across Sweden, if need be.
As for ZeroLight, this is a company well versed in VR, having worked with car makers like BMW on a range of projects. The one at Vive X was an oldie but a goldie, highlighting how purchasing a new BMW in the future could be done entirely in VR. The demo is a couple of years old now but it looks great on the Vive Pro Eye, letting you swap alloys around, change the paint colour and more. There was even a physical racing seat provided so that at the right moment you could step inside the car to examine the interior and alter its design as well.
VRFocus is positive regarding the future of consumer VR and only expects it to get better. However, should it all implode and the general public gets bored with strapping high-end tech to their faces, there will always be a place for VR when it comes to enterprise solutions. It’s just way too useful, with too many applications across a number of industries proving that when taken seriously, VR can produce excellent results.
With three new centres in Munich, Würzburg and Nuremberg, the Free State of Bavaria wants to advance the development of so-called Virtual Reality and Augmented Reality. These terms describe various forms of computer-generated, artificially created worlds. In the current two-year budget, 1.5 million euros are available for these technologies, Digital Minister Judith Gerlach (CSU) said on Thursday in the economic committee of the Bavarian state parliament. The goal, she said, is to enable new applications in education, in museums, in health care and in industry. “Virtual Reality and Augmented Reality ‘Made in Bavaria’ should become a brand nationally and internationally,” Gerlach said.
By September, the “XR Hub” user centre is to be established in Munich; it will be supported with 500,000 euros annually. The regional centres for Lower and Middle Franconia will follow in March 2020, and together they will likewise receive half a million euros for the year. “However, we need permanent funding for this and ask for your support as well,” Gerlach said, addressing the members of parliament.
She also announced that 300 women had applied for the first talent programme aimed at bringing more women into digital professions. Each year, 50 young women between 18 and 30 are to take part. They will be supported by mentors from the industry, including Sabine Bendiek, head of Microsoft Germany. (dpa)
Photo: JUSTIN SULLIVAN / AFP
Brandon Hall Group’s Learning Strategy research found that among high-consequence industries, about 45 percent consider virtual reality simulations either important or critical to achieving their business goals over the next 18 to 24 months.
By David Wentworth, Principal Learning Analyst, Brandon Hall Group
Despite being around for more than 30 years, virtual reality (VR) is just now beginning to catch on in both the commercial and enterprise environments. Advances in computing power and the advent of devices such as the Oculus Rift have brought virtual reality into a practical place in our lives.
From a workplace perspective, virtual reality has vastly expanded the possibilities of how we train workers. These platforms make it possible to put employees in almost any location or situation imaginable, interacting with items that would otherwise be difficult or impossible to have in a training environment.
According to the 70:20:10 framework, 10 percent of what people learn comes from formal learning events such as courses and classes; 20 percent comes from informal, peer-to-peer learning; and 70 percent is experiential. This means that the clear majority of what people learn comes from on-the-job training, trial and error, and simply learning by doing.
Virtual reality gives organizations the ability to create scenarios in which employees are actually learning by doing without any consequences. It allows for mistake-driven learning where employees can safely make mistakes and learn along the way. In most enterprise learning environments, experiential learning is often the most difficult to deliver, yet it often has the biggest impact. Virtual reality lets organizations capitalize upon that. Learners can see how they’ll react in stressful situations and identify performance gaps that are standing in their way.
In essence, they can gain valuable online training experience and prepare for every eventuality before they enter the workplace.
Virtual reality is changing the way businesses train their employees through these advantages:
- Improved safety in on-the-job training
- Low-cost, comprehensive education for new employees
- Increased productivity
- Time and money saved through remote learning
- Support for various learning styles
- Training that is enjoyable and engaging
Research and Trends
When VR first became popular, it was associated with gaming and other esoteric or recreational uses, and the corporate/industrial world didn’t immediately take it seriously. Most major advancements in technology start out being viewed as toys or fantasies; this was true of the telephone, the internal combustion engine, and probably every other truly revolutionary invention. It’s only when their usefulness becomes widespread, familiar, and undeniable that they are treated as the serious disruptions they really are. Even though it still generates plenty of gaming-related publicity, VR is clearly emerging as a disruptor to pay attention to in almost every business sector.
Trainees can enjoy learning in many different scenarios, simulating different conditions, without going anywhere. A police officer can train on how to handle robberies, hostage situations, or natural disaster interventions, all from a single place.
Virtual reality is still an emerging tool for enterprise learning. Not all organizations have a suitable need for the technology. However, Brandon Hall Group’s research has found that interest and use is growing among “high-consequence” industries. These are industries in which organizations face a high level of regulatory and compliance requirements. These include aerospace, chemicals, health care, manufacturing, energy, and investment/finance.
Interest in using game-based technology and simulations has increased significantly among these companies, as more than 30 percent identify these tools as a top learning priority for the next 12 to 24 months. This is a 66 percent increase from 2016.
When we look specifically at the use of VR in the context of compliance training, Brandon Hall Group’s latest Compliance Training Survey found that 9 percent of high-performing companies currently are deploying virtual reality, compared to 6 percent of other organizations. Additionally, 15 percent of high performers say their use of virtual reality will increase. High performers are those organizations with annually improving key performance indicators (KPIs) such as revenue, market share, and customer retention. While this data does not indicate causation, there is a correlation between being a high-performing organization and being more likely to use virtual reality.
Additionally, Brandon Hall Group’s Learning Strategy research found that among high-consequence industries, about 45 percent consider these types of simulations either important or critical to achieving their business goals over the next 18 to 24 months.
However, most are not ready to execute just yet. Only 18 percent of companies in high-consequence industries are ready to take action, according to research, while 29 percent are not at all prepared and 30 percent are only somewhat prepared.
The difference between virtual reality scenarios and traditional simulations is the sense of immersion. The virtual environments also make possible training with hazardous materials or dangerous situations without being in real danger. How else can an airplane pilot train for an engine-failure situation or nuclear plant personnel train for a leak involving nuclear waste?
Learners’ lack of experience can be a risk in training for some industrial occupations, for instance those involving work at height or handling hazardous materials. VR-based simulations offer a safe environment where novices are able to train. The safety factor alone can be enough to justify an investment in VR training, because the cost of accidents, both in training and eventually on the job, can be astronomical. According to the Bureau of Labor Statistics, there were 722 workplace deaths in 2015 that resulted from contact with objects or equipment. There were also 424 deaths resulting from exposure to harmful substances or environments, an increase of 34 events from 2014.
Training that can better prepare workers for the types of hazardous environments they will encounter can help mitigate this risk.
The value of virtual reality for workplace safety initiatives lies in its ability to overcome classic learning problems, especially at the large scale involved in a manufacturing setting. The most obvious of these problems is the inherent limitation of written or video 2-D material in preparing workers for real-life situations. Up until now, participating in such training programs involved a worker passively watching a video or reading a printed safety procedure, and then perhaps being asked to answer questions about the material. This lack of realism in traditional teaching procedures shows up as a glaring defect when compared with the immersive experience of VR training. In virtual safety training, the worker directly experiences many of the actual sights and sounds of a real-life emergency and learns to respond appropriately to such a situation regardless of his or her emotional reaction.
As referenced earlier, the majority of what people learn comes from experiential learning, or “learning by doing.” Virtual reality provides an avenue to allow learners to get hands-on without actually executing on the job. Not only does this represent the majority of how we learn, but it also represents a modality with high retention rates.
According to Dr. Narendra Kini, CEO at Miami Children’s Health System, the retention level a year after a VR training session can be as much as 80 percent, compared to 20 percent retention after a week with traditional training. “The level of understanding through VR is great because humans are primarily visual, and VR is a visual format,” Kini says. “We believe there are numerous opportunities where repetitive training and skill set maintenance are critical for outcomes. Since there are not enough patients in many cases to maintain these skill sets, virtual reality is a real addition to the arsenal. Imagine also scenarios where we need to practice for accreditation and or compliance. In these situations, virtual reality is a god-send.”
As industrial and manufacturing positions increasingly fill with younger workers, it’s appropriate to make use of the technology that’s familiar to this generation. Accustomed to extensive and sophisticated gaming platforms, younger workers find it easier to make the transition from virtual reality safety training to real-world practices. According to a recent survey, safety and manufacturing skills training was the second-most popular application of VR and augmented reality among U.S. manufacturers. The ways in which those manufacturers are applying VR and AR to their everyday dealings continue to evolve, as well. The survey noted improvements in materials handling, remote maintenance, augmented assembly, and improved inspection.
Not only can simulations be safer and less expensive than real-life exercises, they can be more cost effective than a standard e-learning course.
It would be reasonable for such innovative training methods to come with a hefty price tag, but the cost of VR equipment is starting to stabilize as more providers enter the market. VR programs are a one-time expense that can prove cost-effective in the long run. Users can take part in as many training modules as required, tracking performance over a range of sessions to give visual and statistical feedback during ongoing assessments. This flexibility and convenience means operators don’t have to spend money on materials and resources used for repeat training and practice. Instead, trainees can engage with lifelike scenarios as often as necessary without additional costs to their employers.
Training in virtual environments reduces the large expenditure associated with real-life simulations. Let’s think, for example, about the resources used in training firefighters or police officers; they have to close roads, involve many people as actors, simulate dangerous scenarios, etc.
VR is still at the very beginning of realizing its substantial potential to save money in the industrial sector. The operation of entire manufacturing plants, including all machines and processes, will be transformed in the coming years.
Looking just at the benefits offered by virtual safety training, the savings arise from a range of sources. When workers are better trained, there are fewer accidents. In addition to fewer injury-related costs and production delays, a better safety record translates into less risk and lower insurance costs. As long ago as 2007, VR was being proposed as a cost-saving, on-demand resource for workers. Though very conceptual, that 2007 paper likely helped set the stage for the many innovations making headlines today. VR can help overcome the lack of on-demand access to training because that virtual world can be summoned any time the VR headset is needed. This dedicated resource makes safety knowledge always accessible.
David Wentworth is principal Learning analyst at Brandon Hall Group, an independent research/analyst firm in the Human Capital Management market. The firm’s vision is to inspire a better workplace experience, and its mission is to empower excellence in organizations around the world through its research and tools. Brandon Hall Group has five HCM practices, produces the Brandon Hall Group Excellence Awards and the annual HCM Excellence Conference, held Jan. 30-Feb. 2, 2018, in Palm Beach Gardens, FL.
Online multiplayer could revolutionize how we interact in AR.
While VR makes you forget about the outside world, AR instead enhances reality by blending virtual elements with your real-world environment. It unites both realities in the most seamless and exciting way possible, allowing users to access one-of-a-kind immersive experiences using existing smartphone technology. You’re almost certainly aware of augmented reality and its application in games such as Pokémon Go, Ingress, and The Walking Dead: Our World, titles which, over the past three years, have contributed heavily to the rise of location-based AR gaming. Almost halfway into 2019, we’re beginning to see the growth of a new feature in AR-based mobile gaming that further impacts how we interact in AR: remote multiplayer functionality.
Gaming, business, communication, entertainment – remote multiplayer AR has the potential to breathe new life into a multitude of activities. Imagine a father away from home on a business trip being able to build an AR Lego spaceship with his son, who is miles away at home; or a distributed team of engineers working on the schematics for an actual spaceship in an augmented meeting room. Two friends in separate towns could even battle as virtual cyber raccoons in a deathmatch-style competition. The challenge with remote multiplayer AR is sharing the same AR experience across two different physical environments simultaneously in real time. Thanks to continued advancements to both Google’s ARCore and Apple’s ARKit platforms, however, users with compatible smartphone devices could be seeing a lot more AR experiences with remote multiplayer functionality in the near future.
The introduction of the remote online multiuser feature is what makes AR so fascinating to experience and so complex to develop. The technical side is a bit more challenging than in VR where online multiplayer is nothing new. Indeed, in VR there is no such difficulty with tracking the surrounding space: all the scenes are predefined which makes VR multiplayer very similar to conventional PC and mobile-based multiplayer.
In AR you have to build the scene while taking into account the parameters of each player’s physical environment (one playing on a football field, another in a living room). This can make it difficult to match users, as you must first make sure their scenes can be built and merged correctly to allow for accurate interactions. The more users playing with each other simultaneously, the more complex it gets.
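To make the scene-merging idea concrete, here is a deliberately simplified 2-D sketch (all type and variable names are hypothetical; real systems work with full 3-D transforms): each device records the pose of a shared anchor in its own local frame, then maps shared-space coordinates through that pose, so both players agree on where a virtual object sits even though their rooms differ.

```swift
import Foundation
import simd

// Hypothetical helper: where one device sees the shared anchor in its
// own local 2-D frame, and how to map shared coordinates through it.
struct AnchorFrame {
    var position: SIMD2<Double>   // anchor position in this device's frame
    var heading: Double           // anchor rotation in this frame (radians)

    func toLocal(_ shared: SIMD2<Double>) -> SIMD2<Double> {
        let c = cos(heading), s = sin(heading)
        // Rotate by the anchor's heading, then translate to its position.
        return SIMD2(position.x + c * shared.x - s * shared.y,
                     position.y + s * shared.x + c * shared.y)
    }
}

// Player A sees the anchor at (1, 0), unrotated; player B at (5, 5), turned 90°.
let playerA = AnchorFrame(position: [1, 0], heading: 0)
let playerB = AnchorFrame(position: [5, 5], heading: .pi / 2)

// A ball 2 m "in front of" the shared anchor renders at different local
// coordinates for each player, yet occupies the same shared-space spot.
let ball: SIMD2<Double> = [2, 0]
print(playerA.toLocal(ball))   // ≈ (3, 0)
print(playerB.toLocal(ball))   // ≈ (5, 7)
```

The hard part in practice is establishing that shared anchor in the first place; platform features such as ARCore’s Cloud Anchors and ARKit’s collaborative sessions exist for exactly that step.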
Despite these inherent difficulties, more developers than ever have begun experimenting with the implementation of this functionality in their personal applications. In terms of universal compatibility between devices, WATTY Technology is leading the way with their WATTY Remote / WATTY Ghost Mode technology. This London-based multiuser AR technology company has already announced its upcoming AR-based ping-pong application, AR Ping Pong—in which players from around the globe compete with one another in real-time table tennis matches— as well as Dance Battle, a music-based experience that has players duking it out for the honor of “Best Twerk.”
If you want to get into early access, leave your email below in the comments to receive an invitation.
Vlad Vodolazov, Co-Founder and CTO of WATTY explains why ping-pong is the best way to demonstrate the potential and interactivity of remote online multiuser AR: “Table tennis is a fast-paced game and it brings out different sides of this technology like matchmaking and lag compensation techniques. From a gaming perspective, this is a timeless classic reimagined as an immersive uninterrupted AR experience — anyone can recall playing ping-pong at some point of their lives”.
With AR Ping Pong, WATTY gives the remote online multiuser concept a fiery start. The company is working on business cases to make AR more useful and fun, aiming to turn the mundane and routine into entertainment. “Bringing remote online multiuser AR experiences to daily activities such as grocery shopping, sports, and social interactions will certainly improve the way we live and work today,” states Vlad Vodolazov.
I for one can’t wait for the day there’s a remote AR app that finally turns the chore of shopping into a thrilling multiplayer experience.
Image Credit: WATTY Technology
At the company’s annual WWDC developer conference today, Apple revealed ARKit 3, its latest set of developer tools for creating AR applications on iOS. ARKit 3 now offers real-time body tracking of people in the scene as well as occlusion, allowing AR objects to be convincingly placed in front of and behind those people. Apple also introduced Reality Composer and RealityKit to make it easier for developers to build augmented reality apps.
Today during the opening keynote of WWDC in San Jose, Apple revealed ARKit 3. First introduced in 2017, ARKit is a suite of tools for building AR applications on iOS.
From the beginning, ARKit has offered computer vision tracking which allows modern iOS devices to track their location in space, as well as detect flat planes like the ground or a flat table which could be used to place virtual objects into the scene. With ARKit 3, the system now supports motion capture and occlusion of people.
Human Occlusion & Body Tracking
Using computer vision, ARKit 3 understands the position of people in the scene. Knowing where the person is allows the system to correctly composite virtual objects with regard to real people in the scene, rendering those objects in front of or behind the person depending upon which is closer to the camera. In prior versions of ARKit, virtual objects would always show ‘on top’ of anyone in the scene, no matter how close they were to the camera. This would break the illusion of augmented reality by showing conflicting depth cues.
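On the developer side, opting into this behaviour is expected to be a frame-semantics option on the session configuration. A hedged sketch based on Apple’s ARKit 3 API (`arView` stands for an already-configured ARView and is an assumption):

```swift
import ARKit

// Sketch: opt a world-tracking session into people occlusion.
let config = ARWorldTrackingConfiguration()

// People occlusion needs recent hardware, so check support first.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}

arView.session.run(config)  // arView: an existing ARView (assumption)
```

With the semantics enabled, ARKit supplies per-frame person segmentation and depth, and the renderer composites virtual content behind or in front of people automatically.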
Similar tech is used for real-time body tracking in ARKit 3. By knowing where people are in the scene and how their body is moving, ARKit 3 tracks a virtual version of that person’s body which can in turn be used as input for the AR app. Body tracking could be used to translate a person’s movements into the animation of an avatar, or for interacting with objects in the scene, etc.
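Consuming that tracked body follows the usual ARKit delegate pattern; the sketch below uses Apple’s ARKit 3 types, while the class name and what you do with each joint are placeholders:

```swift
import ARKit

// Sketch: receive ARKit 3 body-tracking updates and read joint transforms.
class BodyTrackingDriver: NSObject, ARSessionDelegate {
    func start(_ session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            // Joint transforms are relative to the body anchor's root.
            if let hip = body.skeleton.modelTransform(for: .root) {
                // Feed this (and the other joints) into an avatar rig here
                // — the avatar-driving code itself is left as an exercise.
                _ = hip
            }
        }
    }
}
```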
From the footage Apple showed of their body tracking tech, it looks pretty coarse at this stage. Even with minor camera movement the avatar’s feet don’t remain particularly still while the rest of the body is moving, and small leg motions aren’t well tracked. When waving, the avatar can be seen to tip forward in response to the motion even though the user doesn’t. In the demo footage, the user keeps their arms completely out to the sides and never moves them across their torso (which would present a more challenging motion capture scenario).
For now this could surely be useful for something simple like an app which lets kids puppeteer characters and record a story with AR avatars. But hopefully we’ll see it improve over time and become more accurate to enable more uses. It’s likely that this was a simple ‘hello world’ sort of demo using raw tracking information; a more complex avatar rig could smartly incorporate both motion input and physics to create a more realistic, procedurally generated animation.
Both human occlusion and body tracking will be important for the future of AR, especially with head-worn devices which will be ‘always on’ and need to constantly deal with occlusions to remain immersive throughout the day. This is an active area of R&D for many companies, and Apple is very likely deploying these features now to continue honing them before the expected debut of their upcoming AR headset.
Apple didn’t go into detail but listed a handful of other improvements in ARKit 3:
- Simultaneous front and back camera
- Motion capture
- Faster reference image loading
- Auto-detect image size
- Visual coherence
- More robust 3D object detection
- People occlusion
- Video recording in AR Quick Look
- Apple Pay in AR Quick Look
- Multiple-face tracking
- Collaborative session
- Audio support in AR Quick Look
- Detect up to 100 images
- HDR environment textures
- Multiple-model support in AR Quick Look
- AR Coaching UI
With ARKit 3, Apple also introduced RealityKit which is designed to make it easier for developers to build augmented reality apps on iOS.
Building AR apps requires a strong understanding of 3D app development, tools, and workflows—something that a big portion of iOS developers (who are usually building ‘flat’ apps) aren’t likely to have much experience with. This makes it less likely for developers to jump into something new like AR, and Apple is clearly trying to help smooth that transition.
From Apple’s description, RealityKit almost sounds like a miniature game engine, including “photo-realistic rendering, camera effects, animations, physics and more.” Rather than asking iOS developers to learn game engine tools like Unity or Unreal Engine, it seems that RealityKit will be an option that Apple hopes will be easier and more familiar to its developers.
With RealityKit, Apple is also promising top notch rendering. While we doubt it’ll qualify as “photo-realistic,” the company is tuning rendering to allow virtual objects to blend as convincingly as possible into the real world through the camera of an iOS device, layering effects onto virtual objects as if they were really captured through the camera.
“RealityKit seamlessly blends virtual content with the real world using realistic physically-based materials, environment reflections, grounding shadows, camera noise, motion blur, and more, making virtual content nearly indistinguishable from reality,” Apple writes.
RealityKit, which uses a Swift API, also supports the creation of shared AR experiences on iOS by offering a network solution out of the box.
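A minimal sketch of that Swift API gives a feel for how compact RealityKit scene setup is; the entity and anchor types are real RealityKit names, while the box is a stand-in asset and `arView` is an assumed, already-presented ARView:

```swift
import RealityKit

// Sketch: anchor a simple model to a detected horizontal surface.
let anchor = AnchorEntity(plane: .horizontal)
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),  // a 10 cm cube as a placeholder asset
    materials: [SimpleMaterial(color: .white, isMetallic: false)]
)
anchor.addChild(box)
arView.scene.addAnchor(anchor)  // arView: an existing ARView (assumption)
```

Lighting, shadows, and the camera-matching effects Apple describes are applied by the framework; the developer mostly composes entities and anchors.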
Just like RealityKit, Reality Composer aims to make things easier for developers not experienced with game engines, workflows, and assets. Apple says that Reality Composer offers a library of existing 3D models and animations with drag and drop ease, allowing the creation of simple AR experiences that can integrate into apps using Xcode or be exported to AR Quick Look (which allows built-in iOS apps like Safari, Messages, Mail, and more, to quickly visualize 3D objects at scale using augmented reality).
In addition to the built-in object library, Reality Composer also allows importing 3D files in the USDZ format, and offers a spatial audio solution.
Preliminary data suggests the benefits of VR as a teaching tool hugely outweighs traditional methods, reports John Pickavance
You’ve probably heard how Virtual Reality (VR) is going to change everything: the way we work, the way we live, the way we play. Still, for every truly transformative technology, there are landfills of hoverboards, 3D televisions, Segways, and MiniDiscs – the technological scrap it turns out we didn’t need.
It’s reasonable to approach VR with a degree of scepticism, but allow me to explain three ways in which VR can transform the way we learn, and why we, as psychologists, are so excited about it.
Exploring the unexplorable
VR has great potential as a classroom aid. We know learning is more effective when learners are actively engaged. Practical lessons that encourage interaction are more successful than those where content is passively absorbed. However, certain topics are difficult to ground in meaningful tasks that learners relate to.
From the enormity of the universe to the cellular complexity of living organisms, our egocentric senses haven’t evolved to comprehend anything beyond the scale of ourselves. Through stereoscopic trickery and motion tracking, VR grounds counter-factual worlds in the plausible. For the first time, learners can step inside these environments and explore for themselves.
Researchers are currently developing Virtual Plant Cell, the first interactive VR experience that’s designed for use in the classroom. Learners explore the alien landscape of – well – a plant cell. Wading through swampy cytosol, ducking and weaving around cytoskeletal fibres, and uncovering the secrets of the plant’s subcellular treasures: emerald green chloroplasts, where photosynthesis takes place, curious blobs of mitochondria, or a glimpse of DNA through a psychedelic nuclear pore.
The inner workings of the cell are grounded, allowing students to actively engage with the lesson’s content through meaningful tasks. They may work in pairs to give each other tours, or create a photosynthetic production line. Using intuitive gestures, students grab carbon dioxide and water molecules from around the cell, feeding them to chloroplasts to produce glucose and oxygen. With all the ingredients for active learning, the Virtual Plant Cell should be a particularly effective teaching aid. Indeed, preliminary data suggests it may improve learning over traditional methods by 30%.
VR for everyone and everything
It’s not just the learning of “what” something is that VR can assist with, but also the learning of “how” to do something. In psychology, we make the distinction between declarative (what) and procedural (how) knowledge, precisely because the latter is formed by doing and can be applied directly to a given task. Put simply: the best way to learn a skill is by doing it.
Every learner’s goal is to cultivate a large enough range of experience that individual elements can be drawn upon to meet the demands of novel problems. To this end, a great deal has been invested into training simulators for high-risk skills such as flying and surgery. But there are many lower-risk skills which would benefit from simulation, there’s just been little reason to justify investment. That is, until now.
Advancements in mobile technology have led to high-definition VR sets for the price of a mid-range TV. Without the financial barrier, consumer-grade VR opens the door to improve skills training in settings where the real thing isn’t readily available.
One such example would be the Virtual Landscapes programme we’ve developed at the University of Leeds. A vital part of any geologist’s training is to learn how to conduct geological surveys. Armed with a compass, GPS and a map, geologists must navigate unfamiliar terrain to make observations, ensuring they make the most of their time. VR simulation can provide this in real time, with all the tools they’d expect to have out in the field.
The advantages are twofold. Student absences from field trips become less of a hindrance with access to an accurate simulation. The challenges of surveying a mountainous region differ from those in a tropical rainforest. It may be easier to see where you’re going, but your choice of path will be more constrained. VR can present these different biomes without students having to visit all corners of the Earth. The learner’s experience is expanded, and they’re better equipped to tackle novel problems in the field.
Wearing (a VR headset) is caring
VR may also hold the key to driving positive behavioural changes. One way we know we can achieve this is by eliciting empathy. VR uniquely allows people to experience alternative perspectives, even being dubbed the ultimate “empathy machine”. It’s a lofty claim, but early applications have shown promise.
A recent Stanford study showed that participants who experienced becoming homeless in VR displayed more positive behaviour towards homeless people – in this case, through signing a petition demanding solutions to the housing crisis – than those who engaged with the same materials on a traditional desktop computer. This effect persisted long after the study ended. Perhaps by experiencing firsthand the challenges faced by vulnerable groups, we can share a common understanding.
The power of VR to elicit empathy might be used to tackle an even wider range of social issues. We’ve been running VR outreach projects in schools to improve awareness around climate change. Through VR, young people have witnessed the melting of the icecaps, swum in the Great Barrier Reef to see the effects of receding coral on the ecosystem and rubbed shoulders with great apes whose habitats are being cleared by deforestation. Using VR, we hope to cultivate environmentally responsible behaviour before attitudes and habits become more fixed.
So there you have it. By bringing previously inaccessible experiences into the classroom, VR may accelerate the learning of abstract concepts, augment the acquisition of skills, and perhaps even be a force for social change. For now, the technological scrap heap can wait.
PhD researcher in cognitive science at the University of Leeds