In this article, we’ll explore key trends in the learning and development space that are poised to change how people learn at work. Within the emerging learning technology landscape, three content delivery methods are becoming increasingly relevant and accessible: augmented reality (AR), virtual reality (VR), and mixed reality (MR).
When learning practitioners hear “AR,” “VR,” or the enigmatic “the realities,” one of two responses usually follows: “I’d love to do something with it, but I’m not sure where to begin,” or “I want to dive in, learn more, and identify the use cases to substantiate an investment.” The reality is, many practitioners are still grappling with the differences between augmented reality, virtual reality, and mixed reality. For those of us in that space, here are a few quick and dirty definitions:
- AR overlays data onto a view of the physical world, adding context, content, and means of interaction tied to that physical space. AR can launch content in several ways: through a “marker,” a defined visual tag that the AR mobile app recognizes; through markerless AR, which recognizes unique images in the environment; or through GPS triggers that launch content tied to a specific coordinate on a map. Think about hovering your mobile device over an image or item and seeing related digital content that you could explore on your own.
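The GPS-trigger idea above can be sketched in a few lines of plain JavaScript. This is an illustrative model, not any vendor's SDK; the coordinate, radius, and URL are made-up values, and the distance check uses the standard haversine formula.

```javascript
// Haversine distance between two lat/lon points, in meters.
function distanceMeters(a, b) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Launch content when the learner comes within the trigger's radius.
function checkGeoTrigger(userPos, trigger) {
  return distanceMeters(userPos, trigger.position) <= trigger.radiusMeters
    ? trigger.contentUrl
    : null;
}

// Hypothetical trigger point: coordinate, radius, and content URL are illustrative.
const trigger = {
  position: { lat: 40.7484, lon: -73.9857 },
  radiusMeters: 50,
  contentUrl: "https://example.com/ar/site-overlay",
};
```

In a real mobile app, the user position would come from the device's geolocation API polling in the background; `checkGeoTrigger` is the whole decision.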
- VR takes the learner into a new physical space that could be a real or fantasy space created for the purpose of interacting and learning within that space. VR spaces can be photo-realistic using 360-degree cameras or 3-D/animated experiences developed by designers who create a graphic rendering of a space. Think of employees who need to familiarize themselves with a given space that isn’t possible or practical to visit until the moment the work needs to be done.
- MR blends attributes of the real and virtual worlds into a single space. It weaves attributes of VR and AR together in meaningful ways. Of the realities, this one is furthest out on the radar field, but will come into closer view in the coming years as tools to create it become more mainstream. Today, think about retail apps that allow you to render a room with paint and products; next, imagine you are donning the headset and are in the physical room and then positioning those same elements as virtual objects within the physical space.
Devices and Development Tools: Huge Considerations in This Brave New World
The biggest obstacle for AR today is the device question: BYOD or corporate devices? This is a huge factor when you consider where the data is stored and what you need to do to access it. However, at present, AR is the most attainable solution for several reasons: there are more learning use cases, the investment is considerably lower, and the upskilling required is within reach for L&D teams that hope to create these experiences in house. There are a number of off-the-shelf tools, such as Zappar and Layar, that you can learn to use to create a prototype. Both have free trials and plenty of tutorials to help you sort out how the tech works and how to build a low-level solution for performance support or other purposes.
Let’s take it further: imagine accessing a user guide for a piece of equipment you are standing right in front of; the catch is that you don’t often use this piece of equipment. I know what you are thinking: How is this different from a QR code? Easy. QR codes have a one-to-one relationship, while AR experiences can launch numerous artifacts associated with a single target. For example, a QR code could link to the user’s manual or a job aid, both of which could be very helpful. But an AR experience could offer a menu of support on your device simultaneously: the user guide, a set of video demos showing the replacement of common parts, an option to connect with IT directly via Skype, and links to user-generated content about different ways to perform a specific task.
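As a sketch of the one-to-many relationship, here is how an AR target's support menu might be modeled in JavaScript. The marker ID, labels, and URLs are all hypothetical; the point is simply that one recognized target fans out to many artifacts, where a QR code carries one link.

```javascript
// One QR code: one link.
const qrCodeLink = "https://example.com/pump-301/user-guide.pdf";

// One AR target: a menu of artifacts (all names and URLs are illustrative).
const arTarget = {
  marker: "pump-301-faceplate", // hypothetical marker the AR app recognizes
  artifacts: [
    { type: "guide", label: "User guide", url: "https://example.com/pump-301/user-guide.pdf" },
    { type: "video", label: "Replacing common parts", url: "https://example.com/pump-301/demos" },
    { type: "call", label: "Connect with IT via Skype", url: "skype:it-support?call" },
    { type: "community", label: "User-generated tips", url: "https://example.com/pump-301/tips" },
  ],
};

// Recognizing the marker surfaces the entire menu at once.
function onMarkerRecognized(markerId, targets) {
  const hit = targets.find((t) => t.marker === markerId);
  return hit ? hit.artifacts : [];
}
```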
VR development tools are catching up quickly to commercial AR tools. Just two short years ago at GP Strategies, however, we were challenged to build our first-ever VR experience pilot for a client event. At the time, the technology was expensive, and the skills required to create such an experience were costly. We used this as an opportunity to kick the tires, to see how VR might make sense in a client’s ecosystem and to learn what we didn’t know about building VR content. We heard a lot of hyperbole about “the most obvious” use cases for learning, but much of it was speculative and, frankly, would have been better served by different delivery methods. Commercial development tools didn’t exist yet, the technology was opaque and hard to obtain, and development costs were prohibitive.
For our first foray into VR, we built a scavenger hunt activity designed to engage conference attendees and create a team-bonding experience. Attendees were broken into teams of five, and each team was given a single iPhone and a Google Cardboard. One team member would look into the Cardboard and see a virtual room full of content objects. The remaining team members looked at a projection screen in the room that showed a “target” object and a list of “taboo” words. From that, the team members had to describe a single object in the virtual room that the Cardboard user had to find, without disclosing any of the words on the taboo list. Once all teams found the target object or the countdown clock reached zero, the Cardboard would be passed to the next participant, until five rounds had been completed. On the back end, our programming team tracked which teams performed best.
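The round flow just described can be modeled in plain JavaScript. Our actual build ran in the browser on top of three.js; this sketch only captures the turn-taking and the taboo-word check, and the scoring rule (points equal seconds remaining) and team names are illustrative assumptions, not what we shipped.

```javascript
// Does a spoken clue contain any forbidden word from the taboo list?
function violatesTaboo(clue, tabooWords) {
  const words = clue.toLowerCase().split(/\W+/);
  return tabooWords.some((t) => words.includes(t.toLowerCase()));
}

// One round: rotate the Cardboard through the team, check whether the viewer
// found the target before the countdown hit zero, and award a score.
function playRound(team, round, { targetObject, foundObject, secondsLeft }) {
  const viewer = team.members[round % team.members.length]; // pass the Cardboard
  const found = foundObject === targetObject && secondsLeft > 0;
  return { viewer, found, score: found ? secondsLeft : 0 }; // assumed scoring rule
}

// Hypothetical team and round.
const team = { name: "Team 3", members: ["Ann", "Tom", "Raj", "Mei", "Sam"] };
const result = playRound(team, 2, {
  targetObject: "fire-extinguisher",
  foundObject: "fire-extinguisher",
  secondsLeft: 42,
});
// result.viewer === "Raj", result.found === true, result.score === 42
```

Running five such rounds per team and summing the scores is all the back-end leaderboard logic amounts to; the hard part in the real project was the rendering and tracking stack described next.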
This was a relatively simple VR activity, but it was created at a time when commercial tools were not readily available and upped the ante by adding a unique back-end tracking component that required a fair amount of web app development. We had to solve a number of technical challenges that included the following:
- Web display (three.js for 3-D)
- Animation (TweenMax)
- Stereo viewing (the StereoEffect plugin for three.js, which creates an offset view for each eye)
- Sound control (SoundJS)
- Preloading (PreloadJS to load media objects)
- Orbit controls (OrbitControls, the non-Cardboard navigation plugin for three.js)
- Device orientation (DeviceOrientationControls, the Google Cardboard navigation plugin for three.js)
While we got the job done as a one-off, we immediately started looking at what was out there that we could leverage for our clients. Generally, the technology was still on the edge of our horizon: there were no even remotely easy tools to bring in house without extraordinarily specialized skills. It was not quite ready for learning prime time. VR was far from normalized, or even socialized. Shelf life and data sensitivity presented huge challenges. There were many reasons to keep it on the back burner. A lot has changed since then.
Today, to create a similar VR experience, we have a variety of tools at our disposal, some of which require minimal programming. Platforms like eevo (eevo.com) enable us to capture 360-degree photos and video, upload them to the platform, and create beautifully interactive experiences that include spatial familiarity and awareness. Tools like Trivantis’ Cenario VR allow practitioners to create the experience themselves by uploading a series of 360-degree videos or stills and adding onscreen interactions that trigger certain activities and content. Want to get a micro experience without making a huge investment? Try SeekBeak’s (SeekBeak.com) cloud-based tool to get a feel for what’s needed to create an experience and the level of effort that’s involved.
Now let’s consider MR, which is in roughly the same spot that VR was in two years ago. Right now, there are a couple of enterprise-level players, such as Microsoft HoloLens and Magic Leap, with extremely impressive hardware and a few high-end partners that are developing entertaining and awe-inspiring content. At this time, the software development kits (SDKs) for these systems still require talented software engineers who work in code-heavy environments such as Unity and Unreal. While we could hypothesize many use cases, adoption hinges on shelf life, the advanced level of programming skill required, and the hardware and software requirements, as compared to other means of delivering learning. These factors make MR a hard sell to many L&D clients today. As a result, we’re not seeing a lot of MR on the learning front just yet, but we anticipate that this will change as the technology becomes more attainable and normalized.
Now let’s talk about how we are seeing the realities in our work.
Augmented! Bring Contextual Content to Life
AR is a much more common request for us, though clients often think about it as a simplistic solution: scan, unlock, consume, and move on. In reality, some of the more sophisticated solutions are not far off for mainstream L&D and will provide incredibly rich experiences. The key driver for any AR solution is that employees need access to information when real-world, physical context matters and traditional means aren’t available. So, what does AR solve for? A lot. These are the top four challenges that we’re exploring with our clients:
- Reinforce previous learning. Reminders and the revisiting of information; imagine constant on-demand access to learning content.
- Deliver build-and-extend knowledge. Now that you have learned the foundational knowledge, we can layer in more complex information in the context of work.
- Provide the ability to remotely mentor and guide. Imagine being able to bring additional assets, human and otherwise, to support in-the-moment work. What could it look like? Consider a jet engine technician seeing an anomaly for the first time. AR enables the technician to connect with a human expert, who can impart a higher level of knowledge by pushing documents and drawing annotations in the wearer’s field of view.
- Deliver access to automated data systems that enable just-in-time data pushes to the user in a specific scenario. Here is how this plays out: The learner dons a heads-up display using a headset or glasses that incorporates visual tags, RFID, maintenance requirements, scheduled learning requirements, logistical requirements, and more.
How Virtual Reality Fits In
VR, while much discussed, is pursued less often. It is most effective in the learning context when we need to provide awareness of a physical space that isn’t possible or practical to visit. The key drivers for a VR solution include spatial awareness and the ability to provide skills practice that you can attempt over and over, and get better at, ideally outside the live situation. So, what does VR solve for? Here are the applications we are seeing in our work:
- Mitigate human risk. Practice hazardous work or learn about fringe but critical safety issues. Can you imagine an over-the-road trucker developing familiarity with how the vehicle responds in adverse conditions, or a salesperson who has to explain how in-vehicle technology responds when a car is sliding on ice?
- Mitigate reputation risk. Observe and respond to an uncomfortable situation in the office using scenario observation and branched decision making with feedback.
- Diminish proximity/logistic costs associated with training. Develop proficiency on a complex product before the physical product is available or if the product is too costly to have on site. We’re exploring this application in the automotive sales space, allowing consultants to experience vehicles and technology that aren’t available yet.
- Train on abstract concepts. Experience something virtually that is simply impossible to actually experience in the physical world. In the pharmaceutical and biomedical arenas, imagine learning how a disease or a drug works within the human body over time.
Next on the Horizon, Mixing It Up
MR use cases in learning are still being defined as the tools continue to be refined, so we are in the unenviable position of speculating about where MR fits into the learning ecosystem. Because it incorporates elements of both AR and VR, similar use cases come into play. But because MR is intended for direct integration with real-time data systems, we can begin to envision performance support tools at a level never seen before. The key driver for an MR solution is a situation in which the real world, AR, and VR converge to give us more valuable, or interpreted, information. Imagine being able to look “through” a car and identify vehicles or other items in the blind spot in your field of vision. The actual field of view is essential here, and the spatial accuracy of the overlay is a key differentiator that VR or AR alone cannot support. In VR we can simulate what this looks like, but imagine moving from the driver’s seat of one vehicle to another and having a realistic experience that differentiates the blind spot variations between the vehicles you are sitting in. Other potential high-value use cases include logistics and facilities planning, warehousing and expansion, and real-time extrapolation of organic systems, such as looking into the human body.
In today’s corporate learning landscape, having the “right” problem to solve and the funds to develop, deploy, and maintain are gating factors that keep MR largely out of the ecosystem, at least for now.
Getting Started, The Design Process
So, what is the process like? The steps for building reality-based content, whether it is AR, VR, or MR, will be somewhat familiar. Creating simple AR and VR is not unlike building an eLearning course or filming a video project; MR remains more complex. Our high-level guidance here reflects our experience developing VR and AR.
- Understand the audience and the need. AR, VR, and MR will rarely be the single method used to teach, assess, or support; they are components of a larger solution. Consider the learner’s needs holistically, and determine where these technologies can support that mission. We find AR is strong for just-in-time performance materials, whereas VR is great for situational awareness and practicing skills in a safe space.
- Develop your plan. How does the reality experience fit into the learning solution that it supports? Identify and reconcile technology questions about devices, required apps, data location, delivery location, connectivity, and more. How will the experience be maintained over time? Who will facilitate the experience and manage the hardware and wearables? Answering these questions will ensure you start the project on the right foot.
- Consider your development options, and then select a platform or toolset. These technologies may sound tough to build, but with more commercial tools, decreasing costs of 360 cameras, and cloud delivery systems, it’s easier than you’d think.
- Storyboard and script the experience, and develop from the storyboard. Not unlike an eLearning course or other learning solution, your storyboard becomes a single source of truth for designing the complete experience and then developing it. It will include variations that are specific to the medium, like the variations you’d expect to see in a storyboard for an eLearning course versus an interactive video.
- Test early in development; there will be things you’ve missed. Even though the design and development processes are familiar, the stakes are higher. Develop a limited-scope pre-pilot test that uses real content, and conduct some user-acceptance testing. You may find that connectivity, latency, or load on the experience presents an issue that you will need to mitigate or that requires you to change course.
- Launch in a controlled way. The technology is expensive and new; your launch needs to impress learners and deliver without technology hiccups. If the initial experience isn’t positive, you’ll have a hard time winning back learners’ mindshare. Before you launch your reality components, try a small pilot to gauge reaction. Gather feedback and implement a revision cycle if needed.
It is truly an exciting time to be an L&D practitioner. With the technology landscape changing under our feet, it’s important to be poised and ready to act. What is just hitting the horizon today will be ready for prime time soon. And for those thinking, “my environment won’t support this,” remember that such a mindset reflects only a moment in time. Get out there, play with the tools, and develop some proficiency, even if you have to do it on your own; think free tools and trials. Because in the blink of an eye, your leaders will be asking where these technologies fit and what your first project will be.
About the Authors
Ann Rollins is a modern learning champion, published author, and instructional design leader. She has spent her career consulting and leveling up learning in the aviation, automotive, financial, telecom, insurance, and national security spaces. She leads learning strategy and innovation work for Fortune 100 companies, and guides and crafts the upskilling efforts for L&D teams of global organizations, including her own. One of the best parts of her job is collaborating with technology and learning leaders in GP Strategies’ Innovation Kitchen, a dedicated learning innovation lab where the team explores, tests, and pilots the technology that enables tomorrow’s learning.
Tom Pizer is the director of learning technologies for GP’s Learning Solutions Group; he has more than 20 years of experience in the technical digital-media field. He leads an exceptional team of developers who create unique learning solutions in response to atypical client challenges. During his career, Tom has created, specified, directed, and/or managed hundreds of hours of educational, instructional, and entertainment-based media for a variety of clients in both the public and private sectors. A key aspect of Pizer’s responsibilities is staying abreast of emerging technologies and in tune with the latest development methodologies, standards, and practices.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
2019 Copyright is held by the owner/author(s). Publication rights licensed to ACM. 535-394X/19/05-3331556