AR and VR are poised to replace traditional 2D and 3D tools. However, companies should deploy these forms of visualisation only when they serve a clear goal.
Data complexity as a problem
New avenues with AR and VR

VR is about more than video and gaming.
Today, most people think of gaming, virtual-reality glasses and video when asked about virtual reality. The concept of virtual reality in these areas has been around for some years, but what if we look beyond the media space?
Virtual reality is becoming a thriving part of many businesses; 50 per cent of companies not currently using virtual and augmented reality will start experimenting with it in the next 2-3 years, and 82 per cent of adopters say the benefits are meeting or exceeding expectations.
Remember, virtual reality is much more than gaming and video. Increasingly, it is powered by predictive analytics, using virtual data points to create virtual environments that open up a whole new range of possibilities. Do you realise that the egg you’re eating will soon be developed through virtual reality? Or that the quality of your meat will be derived from it?
For one example of innovation, researchers at Oxford have applied genetics data to virtual reality, to visualise how genes and strings of DNA sit within the chromosomes. Their resulting 3-D presentation clearly showed how genes sit in relation to each other and how they interact with each other. And the purpose? Visualising such interactions helps us better understand how DNA works and develops.
As Stephen Taylor, Head of the Computational Biology Research Group at the MRC Weatherall Institute of Molecular Medicine, University of Oxford, stated: ‘Being able to visualise such data is important because the human brain is very good at pattern recognition – we tend to think visually.’
While not all companies yet see the opportunity for virtual reality in their business model, most are pursuing data-driven strategies to enable the storage and analysis of data tomorrow. Little do they realise that this kind of activity can lead them on a journey into virtual reality.
As a case in point, companies in animal breeding are moving towards virtual breeding. The animals involved need to be monitored continuously to build a thorough understanding of their physiological and psychological condition, since the goal is best-in-breed animals and high-quality dairy and meat products.
This can result in thousands of data points being saved minute by minute, detailing room temperature, body size, body weight and each animal’s vital signs, such as heart rate. Even in fish breeding, the fish are scanned by ultrasound several times a day.
The explosive amount of data derived from the animals makes it possible to undertake advanced and predictive analytics. From the results, the breeding company can create virtual animals and breed their offspring virtually, creating ‘what if’ scenarios around changes in feeding and environmental conditions. By doing so virtually, they can create a much faster time to market as they no longer have to go through the painstaking process of raising the physical animals to see how they respond to different circumstances, which can take years.
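The ‘what if’ scenarios described above can be sketched as a simple Monte Carlo simulation. The sketch below is purely illustrative: the trait numbers, variance, and feed-response coefficients are invented for demonstration and are not real breeding-model parameters.

```python
import random

def simulate_offspring_weight(parent_mean_kg, feed_boost, n=10_000, seed=42):
    """Monte Carlo sketch: sample virtual offspring weights under a
    feeding scenario. All coefficients are illustrative placeholders."""
    rng = random.Random(seed)
    weights = [
        rng.gauss(parent_mean_kg, 25.0)        # genetic variation around the parents
        + feed_boost * rng.uniform(5.0, 15.0)  # hypothetical response to a feed change
        for _ in range(n)
    ]
    return sum(weights) / n

# Compare a baseline scenario against an enriched-feed scenario virtually,
# instead of raising two real cohorts for years.
baseline = simulate_offspring_weight(620, feed_boost=0)
enriched = simulate_offspring_weight(620, feed_boost=1)
print(f"baseline: {baseline:.1f} kg, enriched feed: {enriched:.1f} kg")
```

A real system would replace the toy distributions with models fitted to the monitoring data described above, but the principle of comparing scenarios on virtual animals is the same.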
Most importantly, however, it means far fewer real animals are needed, and the ones that do ‘come to life’ experience less stress thanks to non-invasive techniques such as ultrasound. Additionally, the infrastructure needed to support such work, in terms of breeding pens and laboratories, is reduced, resulting in a significant cost reduction on the R&D side.
As you can see, merely by considering how virtual reality can impact the animal world, a massive number of opportunities come to light for the agricultural sector and the related animal-testing and education sectors. Soon it will be little surprise to find that the quality of our eggs and bacon is increasingly enhanced through virtual reality.
Yet what about enterprises outside of agriculture and animals? Although most companies might not see where virtual reality fits into their business, many are already using what drives it: machine learning and data. As such, the more creative and visual organisations out there should start to look to the virtual world, to see how it helps them speed up time to market, reduce cost, lower risk, increase sustainability and augment the customer experience.
Fields heavily involved in design and manufacturing are no strangers to running simulations before testing their ideas in the field. However, nothing immerses clients or customers like a virtual-reality demo where they can view and experience the product as they choose. Imagine architecture software that designs and erects a building in a virtual-reality landscape.
The firm is able to preview and demonstrate to the customer how the structure will look, and how it will feel to walk around in it before the structure is built. Any customer pain points can be addressed at an early stage, avoiding problems before they emerge or customer dissatisfaction once the building is finished.
The opportunity to leverage virtual reality is all around us; you just need to see where it fits in your business.
Source:
https://vrroom.buzz/vr-news/tech/understanding-evolution-virtual-reality
Todd Maddox examines how Extended Reality (XR) is an ideal tool to enable faster and more efficient familiarization with complex medical devices.
Medical devices are essential for safe and effective prevention, diagnosis, treatment and rehabilitation of illness and disease. Many are used in the patient’s home, allowing patients to lead normal lives and to address their health needs on their own or with loved ones. Critically, high-quality education and training on these devices is required to avoid complications resulting from incorrect care and maintenance. For example, the Continuous Positive Airway Pressure (CPAP) system commonly used to treat obstructive sleep apnea is a complex system that requires sterilization on a daily basis to avoid infection. In addition, poor cleaning and maintenance of a colostomy bag can be life-threatening. Finally, medical devices such as in-home peritoneal dialysis machines to address kidney disease, and insulin infusion pumps to treat diabetes, are central to patient care, but these systems must be maintained to avoid complications. Although the existence of these medical devices increases the patient’s freedom and quality of life by allowing them to be treated at home, medical personnel are not readily available should something go wrong. This makes quality education and training critical to success.
By far the most common approach to medical device education is to have patients read documents describing the device and outlining the steps needed to care for and maintain it. Patients are expected to study and memorize the steps so that they can effectively care for and maintain the device and avoid complications. This approach leaves it to the patient to determine proficiency and leaves the patient at risk when proficiency has not been achieved. With more complex systems, such as a peritoneal dialysis machine, document training will be supplemented with classroom and hands-on training with an expert. Although clearly superior to document training alone, this is time-consuming, costly, and difficult to scale.
The neuroscience of learning and performance suggests that this traditional approach to medical device training is sub-optimal for at least two reasons. First, document training focuses almost exclusively on cognitive learning, when what is needed is broad-based experiential and behavioral learning. Second, hands-on training with an expert, although effective, is rarely extensive enough to ensure broad-based proficiency.
In this report, we discuss the neuroscience of learning and performance. We show why traditional approaches to medical device training are sub-optimal and can leave patients vulnerable to complication. We also show how immersive technologies like virtual reality (VR) and augmented reality (AR) hold great promise for optimizing medical device training.
As outlined in the figure below, the human brain is comprised of at least four distinct learning systems: experiential, cognitive, behavioral and emotional. As alluded to by Einstein, experience is at the heart of learning, and forms the foundation of VR and AR. The experiential system represents the sensory and perceptual aspects of an experience, whether visual, auditory, tactile or olfactory. Because every experience is unique and is immersive, this adds rich context and nuance to the learning. These enhance generalization and transfer of learning. The critical brain regions associated with experiential learning are the occipital lobes (sight), temporal lobes (sound), and parietal lobes (touch).
The cognitive system is the information system. This is the “everything else” that Einstein noted. The cognitive system processes and stores knowledge and facts that usually come in the form of text, graphics, or video. The cognitive system is truly amazing and is more developed in humans than in any other organism, but it relies on working memory and attention that are limited resources and form a bottleneck that slows learning. More information comes into the system and is available to the learner (the green arrows) than can be processed (the red arrow). This system encompasses the prefrontal cortex and hippocampus. Importantly, this system is slow to develop and starts to decline in middle age when health issues begin to arise.
The behavioral system in the brain has evolved to learn motor skills. It is one thing to memorize the steps needed to care and maintain a medical device, but it is completely different (and mediated by different systems in the brain) to know how to perform those steps. The behavioral learning system is fascinating, but its detailed processing characteristics are beyond the scope of this report. Suffice it to say that processing in this system is optimized when behavior is interactive and is followed in real-time (literally within milliseconds) by corrective feedback. Behaviors that are rewarded lead to dopamine release into the striatum that incrementally increases the likelihood of eliciting that behavior again in the same context. Behaviors that are punished do not lead to dopamine release into the striatum thus incrementally decreasing the likelihood of eliciting that behavior again in the same context.
The emotional learning system in the brain has evolved to facilitate the development of situational awareness — the ability to read nuance in a situation, to know what comes next, and to deal effectively with stress, pressure, and anxiety. This system affects processing in the cognitive and behavioral systems and helps patients learn to care for and maintain medical devices under stress or pressure, or when “things just aren’t going right”. The critical brain regions are the amygdala and other limbic structures. Emotional learning is at the heart of situational awareness.
Using an insulin infusion pump as an example, let’s explore the traditional approach to training. The patient starts by reading documents that outline step-by-step how to care for, maintain, and use the insulin pump. Then the patient may receive some hands-on training where they observe as an expert cleans and sets the pump up for usage. The patient may be given the opportunity to demonstrate the steps to the expert while receiving feedback. Because hands-on training is so time-intensive and costly, document and do-it-yourself training is usually emphasized.
From a neuroscience perspective, this approach starts by engaging the cognitive system as the patient reads documents and attempts to memorize the steps. Because the infusion pump is a 3-dimensional object and the system is dynamic (pushing insulin into the body), it is challenging to fully understand how the system works when the information being processed by the patient is 2-dimensional, static, and usually text-based. The patient must convert 2-dimensional static information into a 3-dimensional dynamic mental representation in the brain that accurately reflects the operation of the infusion pump. The cognitive effort needed to do this is enormous. Given that working memory and attention are limited-capacity resources, this process will be slow, challenging, and error-prone.
During the hands-on training, the patient can “experience” the system in action while watching an expert. This combines experiential learning with cognitive learning. When the patient is given a chance to care and maintain the device while receiving corrective feedback from the expert, behavioral learning is added to the mix. Although effective, behavioral learning is gradual and incremental and proficiency takes extensive practice, usually more than is provided in traditional training settings. In addition, it is rare that uncommon situations are trained.
At some point, the patient is deemed “ready” and they are allowed to take the infusion pump home. It is at this stage when the patient is at the greatest risk. They have a basic understanding of the care and maintenance of the infusion pump, but do not have extensive experience, and lack confidence. The brain tells us why. All of the training has been sequential and disjointed. Training starts with documents that train the cognitive system. Experience is then added to the mix, with behavioral training occurring last. What the patient really needs is for cognitive, experiential and behavioral learning to occur simultaneously, and in the environment where the infusion pump will be used (i.e., in the home). In addition, they need situational awareness training to deal with cases in which they are under stress or pressure, or when things are not going according to plan.
Now consider an immersive approach to infusion pump training. Suppose the patient dons a VR headset and is immersed in a 360° experience with an expert who has used an infusion pump for years. The patient can watch as the expert demonstrates and verbally describes the step-by-step procedures needed to care for and maintain the infusion pump. The patient can view this from a third-person perspective, but also from the first-person perspective. The expert can demonstrate common pitfalls and how to address them, while describing them verbally. The expert can do all of this while demonstrating a calm demeanor. The patient can view the VR experience and can complete knowledge checks at the end to ensure that they have the requisite knowledge. They can view the experience as many times as they like until they are confident in their own skills.
To train the behavioral aspects of infusion pump care and maintenance, one could utilize an interactive VR setup, or an AR solution. Imagine using your tablet or a hands-free AR device that uses computer vision to identify the make and model of your infusion pump. The AR system uses visual assets such as arrows, highlights, text and such to guide you step-by-step through the care and maintenance of the system. As each step is completed, you are rewarded with success and move on to the next step in the process. You can practice the process as many times as you like, and even under time pressure.
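The guided, step-at-a-time flow described above can be sketched as a simple state machine: the system displays one step, waits for confirmation that it is done (in a real AR product, via computer vision), then advances. The class and the maintenance steps below are invented for illustration and are not taken from any real AR system.

```python
class GuidedProcedure:
    """Minimal sketch of AR-style step guidance: show one step at a
    time and advance only when that step is confirmed complete."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current_step(self):
        """The step the overlay should display, or None when finished."""
        return self.steps[self.index] if self.index < len(self.steps) else None

    def complete_current(self):
        """Mark the displayed step done and advance to the next one."""
        if self.current_step is not None:
            self.index += 1
        return self.current_step is not None

# Hypothetical infusion-pump maintenance steps, for illustration only.
proc = GuidedProcedure([
    "wash hands",
    "detach and inspect tubing",
    "clean reservoir",
    "reattach tubing and run self-test",
])
while proc.current_step is not None:
    print("AR overlay shows:", proc.current_step)
    proc.complete_current()
```

A production system would attach the vision-based step detection and the reward feedback the article describes to `complete_current`; the control flow itself stays this simple.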
From a neuroscience perspective, this immersive VR/AR approach engages all four learning systems in synchrony. In AR, assets are providing the necessary cognitive information. The learning is happening in real-time and through experience, with the patient generating the relevant behaviors and being rewarded or punished. This broad-based neural activation leads to a highly interconnected, context-rich set of learning and memory traces. These highly interconnected memory traces will be slower to decay over time leading to better long-term retention. Because these VR and AR training tools are available 24/7 the patient can have unlimited practice, and can train and test themselves under a broad range of environmental conditions. If they are unsure of a step, they simply pull out their VR headset or tablet, start the VR or AR software and are guided through the relevant steps.
Gone will be the days where medical device training for patients is inadequate, and patients are at risk of complication. With immersive technologies, patients can be given standardized and highly effective training from day 1. They can obtain experiential familiarization and practice as many times as they like. They can receive training on rare, but life-threatening situations so that they are prepared for anything. Because these immersive technologies broadly engage multiple learning systems in the brain in synchrony, the patient will obtain a strong knowledge base and behavioral repertoire that have been honed and tested. Patients will be more satisfied and confident, and fewer complications will arise.
Source:
https://medium.com/edtech-trends/report-xr-medical-device-training-6fad1cfb50b4
Google has released to researchers and developers its own mobile device-based hand tracking method using machine learning, something Google Research engineers Valentin Bazarevsky and Fan Zhang call a “new approach to hand perception.”
First unveiled at CVPR 2019 back in June, Google’s on-device, real-time hand tracking method is now available for developers to explore—implemented in MediaPipe, an open source cross-platform framework for developers looking to build processing pipelines to handle perceptual data, like video and audio.
The approach is said to provide high-fidelity hand and finger tracking via machine learning, which can infer 21 3D ‘keypoints’ of a hand from just a single frame.
“Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands,” Bazarevsky and Zhang say in a blog post.
Google Research hopes its hand-tracking methods will spark in the community “creative use cases, stimulating new applications and new research avenues.”
Bazarevsky and Zhang explain that there are three primary systems at play in their hand tracking method, a palm detector model (called BlazePalm), a ‘hand landmark’ model that returns high fidelity 3D hand keypoints, and a gesture recognizer that classifies keypoint configuration into a discrete set of gestures.
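The third stage, mapping a 21-keypoint configuration to a discrete gesture, can be illustrated with a deliberately simplified heuristic (not Google’s actual classifier): treat a finger as extended when its tip lies farther from the wrist than its middle joint. The landmark indices follow the MediaPipe convention (0 = wrist, fingertips at 4, 8, 12, 16, 20); the synthetic hands are made-up test data.

```python
import math

# MediaPipe-style hand landmark indices: 0 = wrist, fingertips at
# 4, 8, 12, 16, 20, with a preceding joint two indices earlier.
FINGERS = {"thumb": (4, 2), "index": (8, 6), "middle": (12, 10),
           "ring": (16, 14), "pinky": (20, 18)}

def classify_gesture(keypoints):
    """Toy classifier over 21 (x, y) keypoints: a finger counts as
    extended if its tip is farther from the wrist than its mid joint."""
    wrist = keypoints[0]
    extended = {
        name for name, (tip, joint) in FINGERS.items()
        if math.dist(keypoints[tip], wrist) > math.dist(keypoints[joint], wrist)
    }
    if len(extended) == 5:
        return "open palm"
    if not extended:
        return "fist"
    if extended == {"index"}:
        return "pointing"
    return "other"

def synthetic_hand(tip_r, joint_r):
    """Build fake keypoints: each finger's joint at radius joint_r
    from the wrist and its tip at radius tip_r."""
    kp = [(0.0, 0.0)] * 21
    for i, (tip, joint) in enumerate(FINGERS.values()):
        kp[joint] = (float(i), joint_r)
        kp[tip] = (float(i), tip_r)
    return kp

print(classify_gesture(synthetic_hand(2.0, 1.0)))  # tips far out: "open palm"
print(classify_gesture(synthetic_hand(0.5, 1.0)))  # tips curled in: "fist"
```

Google’s pipeline derives the keypoints from camera frames via the palm and landmark models; this sketch only shows why a discrete gesture falls out of the keypoint geometry so cheaply.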
In the future, Bazarevsky and Zhang say, Google Research plans to continue its hand tracking work with more robust and stable tracking, and also hopes to expand the number of gestures it can reliably detect. Moreover, they hope to support dynamic gestures, which could be a boon for machine learning-based sign language translation and fluid hand gesture controls.
Not only that, but having more reliable on-device hand tracking is a necessity for AR headsets moving forward; as long as headsets rely on outward-facing cameras to visualize the world, understanding that world will continue to be a problem for machine learning to address.
Source:
Google Releases Real-time Mobile Hand Tracking to R&D Community
From Heather Andrew, CEO, Neuro-Insight
Key neurological insights from Mindshare UK’s groundbreaking report, ‘Layered’
Over the past few months, Neuro-Insight has been working in partnership with Mindshare UK and Zappar on ‘Layered’ – a first-of-its-kind study into the consumer, neurological and brand impact of augmented reality.
Until now, there has been little research carried out to understand the neurological effects of augmented reality (AR) and the ways in which the brain responds to various AR tasks and experiences. With ‘Layered’, we wanted to bridge this gap, unearthing AR’s true value – not just as a marketing and communications channel, but also as an everyday utility that has the potential to transform the world around us, making it richer, more meaningful and more engaging.
In this post I wanted to build on Jeremy Pounder, Director, Mindshare Futures’ blog discussing the future of AR and its implications for brands and share some of the key neurological insights from the UK’s first ever neurological study into the effects of AR on the brain.
To fully understand the effects of AR on the brain we recruited 151 UK smartphone users aged between 18 and 65. We allocated them into two cells, each with around 75 respondents, closely matched by their demographic, attitudinal and behavioral characteristics.
We used Steady State Topography (SST) brain-imaging technology to measure how the brain responded to different types of stimulus. SST measures the electrical activity in the brain in order to report (second-by-second) on a number of cognitive functions, including attention, personal relevance, emotional response and memory encoding. We can’t mind read (yet), but the technology can measure and quantify some very interesting things going on in your brain.
In order to capture these changes in brain response, we created six AR and non-AR tasks for the participants to complete with the aim of measuring how the brain reacts to augmented reality.
Upon arrival, respondents were sorted into the two matched cells and briefed on each of the six tasks they would be carrying out during the study. Half the sample completed the six tasks in AR and the other half completed the non-AR version.
Participants were asked to carry out the tasks on an iPad as they would normally do at home, with the order of the tasks rotated. Respondents were individually filmed as they carried out the tasks, allowing us to match their second-by-second activities to their neuro data.
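The allocation and order-rotation just described can be sketched in a few lines. This is a simplification: the study matched cells on demographic, attitudinal and behavioral characteristics, whereas the sketch below just alternates assignment; the task names and seed are placeholders.

```python
import random
from itertools import cycle

def assign_and_rotate(participants, tasks, seed=7):
    """Sketch of the design above: split participants into an AR cell
    and a non-AR cell, and rotate each person's task order so no single
    sequence dominates. Purely illustrative, not the study's protocol."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)                  # break any input ordering
    cells = {"AR": [], "non-AR": []}
    for person, cell in zip(shuffled, cycle(["AR", "non-AR"])):
        start = rng.randrange(len(tasks))  # rotated starting point
        order = tasks[start:] + tasks[:start]
        cells[cell].append((person, order))
    return cells

cells = assign_and_rotate(list(range(150)), ["task A", "task B", "task C"])
print(len(cells["AR"]), len(cells["non-AR"]))  # prints: 75 75
```

Rotating the order per participant is a standard counterbalancing move: it prevents fatigue or practice effects on later tasks from being confounded with any one task.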
It was truly fascinating to see how the brain reacted to augmented reality for the very first time. As with a lot of the studies we undertake here at Neuro-Insight, we expected a degree of variance from one study condition to another, but with AR we saw an unprecedented level of difference between the two cells of respondents.
As I’ll highlight below, augmented reality affects the brain in new and exciting ways. If you have the time, you should discover the full neurological findings in ‘Layered’.
Here are just three ways augmented reality affects the brain: attention, surprise and knowledge retention.
Let’s dive a bit deeper into these three core learnings to understand what’s happening in the brain when we talk about attention, surprise and knowledge retention.
Attention is a necessary precursor to any sort of brain response to communication – if people don’t register a brand message it can’t possibly have any sort of lasting impact. So, whether it be online, TV, billboards, or in our case, augmented reality, capturing attention is something that successful brands or business spend a lot of time, money and resource getting right.
One of the incredible findings from ‘Layered’ was AR’s ability to drive high levels of visual attention and engagement compared to non-AR tasks. In fact, upon completion of our research, we found that AR drove higher levels of attention than pretty much any other medium we have studied.
As you can see from the AR experience below, we see much higher levels of cognitive activity when the brain is exposed to the AR task compared to that of the non-AR task.
AR experience in ‘Layered’ showing how augmented reality affects the brain
This wasn’t just a single experience either. Across the series of cognitive function measures carried out as part of the study, AR delivered almost double (1.9 times) the levels of visual attention compared to their non-AR equivalents.
This was particularly true of right brain visual attention (attention to the overall feel of something rather than the detail). This is incredibly important when we talk about memory encoding, as there’s a direct correlation between the emotional intensity of a stimulus and how it’s encoded by the brain.
This finding, in particular, is crucial in showing AR’s ability to generate a more powerful response than equivalent non-AR experiences.
Cognitive activity during tasks. Brain activity measured using SST headsets; unit of measurement is radians, which equates to strength of brain response.
If we break this down further we can see increased levels of emotional intensity and visual attention in the brain compared to both TV and online viewing. What’s happening here is the brain is working much harder where we would expect it to. In particular, we saw incredibly strong levels of visual attention amongst younger smartphone users.
If we specifically look at levels of attention elicited by AR vs. TV, we see AR delivers a 45% higher level of attention. So, for media and brands competing to achieve cut-through, this is hugely significant in determining how to spread investment across channels.
So why are high levels of emotional intensity important? Simply put, emotional intensity is crucial when it comes to how we process experience because emotional intensity both drives and colors what the brain stores, or encodes, into long-term memory. This is highly significant for brands wanting to make a lasting impression on target audiences because memory encoding has a strong correlation with decision-making and purchase behavior.
These high levels of emotional intensity and attention were typified for us by the AR-enabled packaging concept W-in-a-Box created by Zappar for SIG. This was conclusive in demonstrating the ability of augmented packaging to increase levels of engagement, emotional intensity and attention.
Zappar’s augmented reality packaging concept for SIG.
The second insight found in ‘Layered’ came out of AR’s ability to surprise the end user. In neurological terms, what we saw in the study were lower measures of ‘approach/withdrawal’ (which captures the extent to which the user wants to move towards or away from a stimulus) in participants when undertaking AR tasks.
Average levels of brain response during AR and non-AR tasks.
A withdrawal response can be indicative of a number of different emotional take-outs; in this context, having also asked people about their conscious responses to the tasks in the study, it’s apparent that the “step back” response was a strong indication of the sense of surprise that occurs in the brain when smartphone users start an AR experience. Essentially what we’re seeing here is something that people aren’t expecting, a bit like a jack in the box. Your first reaction is to step back until you understand what you’re seeing.
It’s interesting to note that amongst men there was a stronger approach response than amongst women suggesting that, in this case, they were faster to embrace and engage with the unfamiliar.
Zappar’s AR-enabled gin bottle for Shazam and Bombay Sapphire
What does this mean for the future of AR? The capacity for AR to deliver surprising and emotionally powerful experiences is likely to endure for the foreseeable future as the software used to create AR experiences continually improves, enabling creatives to build more compelling and immersive AR experiences.
However, there’s also a bigger opportunity for AR to grow into more of a wider utility as it permeates our everyday lives. The qualitative research carried out in ‘Layered’, alongside our neurological findings, points to the fact that smartphone users welcome the technology’s ability to problem-solve different contexts and occasions of everyday life.
In neurological terms, if any type of branding or communication is to be effective, it needs to be encoded into long-term memory – otherwise, it will have little to no impact on any of our future actions.
Let’s put this into perspective. How many ads do you remember seeing yesterday? Even though you probably saw hundreds of ads across mobile, online and out of home billboards, if you’re like most people, you wouldn’t be able to recall more than a couple. (I’m able to recall exactly zero ads from yesterday!).
Graph showing left and right brain memory response to AR and non-AR tasks.
What we found in ‘Layered’ was that memory encoding was 70% higher in the AR tasks compared to the non-AR tasks. What this means is that AR can be a particularly powerful way to deliver information that is subsequently retained. Despite seeing higher visual attention for younger people, and stronger approach for men, the memory encoding response was similarly high for all groups of respondents.
What does this mean for brands? Simply put, the future for AR is looking promising. If advertisers can achieve 70% higher levels of memory encoding, then AR becomes a very interesting medium to put marketing budget behind to secure wider, more engaged reach.
Pernod Ricard collaborated with Shazam, OLIVER and Zappar to create an augmented reality experience for the box for their limited edition The Glenlivet Code product.
As with most new mediums, the novelty factor associated with AR is likely to diminish to some extent over time. I’m sure we all remember the excitement of getting a broadband (or dial-up) internet connection and the seemingly endless possibilities it brought.
With this in mind, the withdrawal effect (jack in a box) I mentioned earlier will more than likely decrease as we see AR segue into an everyday utility that, much like the internet, becomes an integral part of our everyday lives.
It will be at this point that novelty transcends into strategy, and narrative and content become a much more integral part of delivering the AR experiences of the future. The ability to tell compelling and engaging stories will become increasingly more important as people’s brains come to expect everyday surfaces, products and packaging to be augmented.
Kate Spade utilise AR for the launch of their flagship store in Paris.
So what does all this mean for brands and how can they start leveraging AR as a more effective way of generating brand awareness in terms of attention, emotion and memory encoding?
These findings have huge implications on the way brands should be thinking about their long-term commercial strategies. As Neuro-Insight’s research certainly suggests, AR experiences are considerably more engaging and memorable than non-AR experiences, which presents a huge opportunity for brands to lead the way in leveraging the technology.
The folks at Zappar have seen AR move into the mainstream over the last year, with some of the titans of tech (the Googles and Apples of the world) investing in augmented reality, and now with ‘Layered’, we’re seeing the science to validate this investment. These findings don’t just stop at a brand level either. The implications are much more far-reaching. For example, what will this mean for L&D professionals knowing that AR can drive greater levels of attention and engagement, but also enable staff to memorize more of their training? I know what I’d do.
Likewise, CPG brands can finally get ‘stand-out’ with their packaging on supermarket shelves whilst educating and entertaining their customers in the process. Now is the time to start thinking about your long-term AR strategy.
Don’t forget, you can take a deeper look into the neuroscience research carried out by Neuro-Insight in Mindshare’s ‘Layered’ report.
Project film for Mindshare’s Layered report.
Source:
https://www-zappar-com.cdn.ampproject.org/c/s/www.zappar.com/blog/amp/how-augmented-reality-affects-brain/
Overlaying the entire real world with a digital augmented-reality layer, and in doing so opening up endless new possibilities for entertainment, tourism and marketing: that is the goal of the Karlsruhe start-up MAPSTAR. But how exactly is this idea being put into practice?
Recent years have shown that combining technology with disruptive business models can transform entire industries.
Uber owns no vehicles, Airbnb owns no hotels, Facebook creates no content, and Alibaba holds no inventory. Yet within a very short time these companies have grown from start-ups into global corporations.
Augmented reality means overlaying real-world content with digital content. The technology became widely known above all through Snapchat’s face filters and the hype around the game Pokémon GO.
Yet augmented reality can be applied to almost any industry and is therefore regarded as one of the key technologies of the coming years. A market volume of one trillion US dollars is forecast for 2030.
So it is hardly surprising that Apple’s CEO Tim Cook says of augmented reality:
I regard it as a big idea, like the smartphone. The smartphone is for everyone, we don’t have to think the iPhone is about a certain demographic, or country or vertical market: it’s for everyone. I think AR is that big, it’s huge. I get excited because of the things that could be done that could improve a lot of lives. And be entertaining.
The Karlsruhe start-up MAPSTAR is also harnessing the advantages of augmented reality with an app of the same name. It allows users to save any digital content directly as a 3D model.
The idea behind it is that our best experiences and memories are always tied to people and emotions, but also to places. MAPSTAR makes it possible to relive the best experiences again and again with other people at the very places where they happened, as impressively shown in this video from the PRO7 programme Galileo.
MAPSTAR makes this information available exactly where it is relevant. With the app, users can jointly edit, save and share digital content at a location.
When you are at a particular place, you simply launch the MAPSTAR app and can see what other people have experienced at that exact spot, join digital tours, or receive special offers.
The MAPSTAR app lets the smartphone see and understand its surroundings. The entire application is therefore built to be three-dimensional and interactive, which gives the founders far more ways to capture users' attention.
To begin with, users can save texts, images and 3D models and share them with others. But MAPSTAR is designed as a complete augmented-reality ecosystem that connects both private users and companies on one digital platform.
Many companies today place their outdoor advertising entirely on physical surfaces such as billboards. With augmented reality, companies have the chance to position their brand messages independently of location and size.
MAPSTAR also offers three-dimensional advertising that customers can use and experience interactively. A product can thus walk down the street as an animated model, engage users, and be experienced digitally.
"Our vision is to connect one billion people in a new world," says MAPSTAR founder and CEO Shora Shirzad. As with any exploration of new worlds, visionaries and curious minds are wanted: people ready to embark on the adventure and leave their digital footprint in the world.
The first version of the MAPSTAR app has been available worldwide in the App Store and Play Store since 20 August.
Source:
This start-up is creating a new digital world in augmented reality!
NextVR, the VR livestreaming event app, is now available on Oculus Quest for free, which is said to come alongside a new technology that promises clearer and sharper video rendering.
The studio calls the new rendering technology ‘PAVE’, which is said to enable new features like enhanced in-experience graphic overlays (e.g. stats, instant replays) and forthcoming multi-user interactions such as multi-user game trivia during timeouts.
Previously, NextVR content was only available on Quest via the Oculus Venues app, which provides communal viewing of select real-time live events. In contrast, the NextVR app is a single-user experience that also provides replays and extra content too.
“NextVR is pairing our newest rendering technology with Oculus Quest to provide an outstanding fan experience,” said David Cole, NextVR co-founder and CEO. “We’re launching with a collection of experiences that best demonstrate the capabilities of our technology, including a show this September 10th hosted by Pete Davidson, recorded live at Gotham Comedy Club in New York City.”
In addition to Quest support, the studio is also adding an exclusive ‘Best on Quest’ channel with a curated group of “new and optimized experiences,” among which is the NBA Finals Game Six Highlights with access to the Toronto Raptors locker room celebration, NHRA Drag Racing, Day to Night Timelapse and select concerts.
NextVR’s supported platforms now include Oculus Quest, Samsung Gear VR, PSVR, Google Daydream, Windows VR headsets, Oculus Go, Oculus Rift, HTC Vive, and Valve Index.
Source:
https://www-roadtovr-com.cdn.ampproject.org/c/s/www.roadtovr.com/nextvr-brings-live-streamed-events-oculus-quest/amp/
Frank Hartwig points out of his office window. "Sometimes three roebucks wander around the company premises here in the morning," he says, looking as if he still cannot quite believe how close nature and industry are to each other here in Suhl-Albrechts. He greatly enjoys this proximity, and in general: Thuringia, he says, is a great state and a good location for companies. "I don't understand why more firms don't come to Thuringia to set up here," says the managing director and shareholder of CDA GmbH. For mid-sized companies in particular, Thuringia offers perfect conditions: short distances and easy access to state politics in Erfurt.
Words from the mouth of a manager who ended up in southern Thuringia more or less by chance. "I come from the Lower Rhine in North Rhine-Westphalia. There, state politics in Düsseldorf is unreachably far away," says Hartwig, who since May of this year, together with CFO André Keller, has been a shareholder of the company, which currently employs 205 people.
This Saturday, politicians are visiting Suhl-Albrechts to celebrate CDA's 25th anniversary. It is not the first time politics has taken an interest in the company, for CDA has a history that reaches back beyond 1994. The company, formerly called CD-Werk Albrechts, is after all the successor to the first German-German joint venture: even before reunification, the West German CD manufacturer Reiner Pilz and the Robotron combine had founded the new company. A showcase project of reunification, it even earned praise from then-Chancellor Helmut Kohl. Pilz, however, later turned out to be a charlatan and was convicted of fraud.
Lenses instead of compact discs
But this history is not the only reason why today's CDA anniversary comes as a surprise to some. The product that once made the company big, the compact disc, plays an ever smaller role, because hardly anyone still buys music on a silver disc; people increasingly stream it from the internet instead. The same applies to films on DVD. The revenue CDA generates from writing round audio, video and data media shrinks year after year.
And yet the company is celebrating a quarter-century of existence, and Hartwig looks optimistically to the future, not least because the past plays no role for him. "We deliberately leave out the five years of prehistory. CDA is a different company, and there is hardly an employee from the Pilz era left in the building," says Hartwig. His optimism is also driven by new business areas. Writing memory cards for the automotive industry is becoming an ever more important pillar for CDA. Last year, CDA flashed four and a half million SD cards, that is, wrote data to them, in Suhl and at further sites in India and China. The main customer is the automotive industry.
Anyone who asks their car for directions from A to B and has a built-in navigation system can assume with high probability that the map and data material on the memory card in their device was written by CDA. "The data volumes grow from year to year. We are now at 64 gigabytes," says Hartwig. CDA has firm contracts for the coming years, but the company chief expects the memory card to still have a market after that. "It will be some years yet before we get all our information into the car over the internet, because that requires the 5G network standard truly in every corner, at every milk churn," says Hartwig.
A quarter of revenue now comes from a further business area that Hartwig helped build during his seven and a half years at CDA: lenses made of polymers, that is, plastic. They are found in smartphones, for instance in the face-recognition sensor, but also in game consoles and cars. "Wherever virtual or augmented reality is involved, our lenses are used," explains Hartwig. For example in cars that project important information into the driver's field of view, in so-called head-up displays. CDA also has close contacts with a company in the USA working on the VR glasses of the future, which are meant to merge reality with virtual reality and be hardly bigger than ordinary glasses.
The technology has the great advantage of building on the employees' expertise in plastics processing. Hartwig therefore hopes that optical systems will one day generate enough revenue to offset the slow death of the CD and DVD. After all, nature and industry in Albrechts are to remain close neighbours in the future, too.
Source:
Photo: Until May of this year they were the managing directors of CDA GmbH; for a few months now they have also been its shareholders: Frank Hartwig (left), chairman of the management board, and CFO André Keller. Archive photo: ari
Scape Technologies compares data from your phone’s camera with a cloud-based, machine-readable 3D map for augmented reality overlays that put existing setups to shame.
Pokemon Go sparked a new wave of augmented reality exploration, but London-based Scape Technologies wants to take AR beyond GPS-driven gaming to provide a more accurate representation of the world around us.
GPS can pinpoint location within a foot or so, but Scape can tell exactly where you’re standing by comparing data from your phone’s camera with a cloud-based, machine-readable 3D map. Its first product is a software development kit that will anchor AR content to specific locations.
Currently running trials in the UK, Scape has created 3D maps of 100 cities, and is live in London and San Francisco. The company also recently dropped a teaser for Holoscape, a massively multiplayer AR game.
We spoke to Scape co-founder and CEO Edward Miller about large-scale AR, what it means to “Scape” an entire city, and why he believes 3D map data is “the scaffold of the 21st century.” Below are edited and condensed excerpts of our conversation.
PCMag: You’ve said “3D map data is the scaffold of the 21st century.” What did you mean by that?
Edward Miller: Over the next few years, we’re going to enter an exciting new era of so-called “spatial computing.” This refers to a new class of computing devices that will go beyond the boundaries of the 16-by-9 screen and operate within the physical world. Self-driving cars, drones and augmented reality glasses are all examples of spatial computing devices. It’s a field I started working in about nine years ago, first with interactive imagery and robotic camera rigs to build early 360-degree video, and it’s now about to explode. Essentially it’s a way to connect the physical and digital worlds.
Which is what you’re doing at Scape Technologies. How did the company start?
I joined Entrepreneur First, which is where I met my co-founder and CTO Huub Heijnen. He’d done his master’s in robotics at the world-famous ETH Zurich and was working as a researcher in Australia before coming to the UK to join the EF program. We got talking in a pub and realized we had exactly the right combination of skills to build the infrastructure for the next generation of maps for machines.
How does your AR SDK work?
At Scape, we’ve built a cloud-based Vision Engine that allows camera devices to understand their environment using computer vision. It doesn’t use 3D maps built and then stored locally, but taps into a “shared understanding” of an environment, which allows content to be anchored to specific locations at an unprecedented scale.
This is city-wide AR, right? How many cities have you ‚Scaped‘ to date?
Collectively, we’ve gathered imagery data for over 100 cities and are currently processing a selective amount of that depending on our rollout schedule, R&D requirements, and financial considerations. We launched in London and San Francisco, and there are many more areas waiting to be rolled out in the coming months, including some geographical regions in Asia, where there will be some particularly exciting events next year.
How is Scape different from Snap’s World Lenses AR overlay?
Several companies have developed the ability to map a small area like a building, road, or landmark, and use that map for augmented reality. However, we need to think bigger than that. We don’t just want to map a building or a street or a block, but entire cities and beyond. This requires a very different approach, which has meant we’ve had to focus on scale from day one.
Explain how what you’re doing is different to Google Street View.
Google built its system so that humans can visually identify locations, but we’re creating an entirely different class of map, one for machines. Using Scape Technologies, devices will be able to use visual features and the information stored in our server-side maps to triangulate exactly where they are.
Much more accurate than GPS?
Oh yes, definitely. GPS has the urban canyon problem within cities, where tall buildings can occlude or reflect signals and degrade position accuracy. There’s also a level of inaccuracy in the phone’s magnetometer (compass) due to interference from steel-framed buildings, which rotates the reading in the wrong direction.
Many people have seen the layering of history on top of today’s world as an exciting use of AR, such as strolling into the Colosseum in Rome, holding up your phone, and ducking to avoid a gladiator’s dagger.
Yes, that’s a great example. We have already mapped Rome. In fact, the Colosseum is often used by the computer vision community as a benchmark in research projects to test localization accuracy. There are so many possibilities in this area, in Rome and beyond.
When people think about AR they often say “Oh, right, Pokemon Go?” Explain what Scape is doing differently.
That app generated a large proportion of the revenue in AR today, so it’s understandable how much attention it received. But do you remember the trailer they ran when the app was first released? The trailer promised full-scale AR and painted a picture of the world and what it could be. Essentially, I believe Scape can fulfil that potential and bring it to life, going much further than they were able to due to the constraints of GPS. We’ll be using not only cloud-based map generation, but deep semantics and networked devices to provide a true city-scale AR future.
Explain how your ScapeKit SDK is able to convert geo-coordinates (longitude and latitude) to Euclidean space coordinates on the fly.
This is how we can provide such precise localization for content persistence. To do this, we work in what’s called Euclidean space, generating a “scene anchor” within the visual/physical locale and relating to it through our system with transposed coordinates, which our SDK can translate into accurate measurements.
Still at the seed stage, Scape Technologies has raised $8 million to date. What was it about your pitch that struck a chord with investors?
Essentially, I explained that “We’re not building a map, we’re building the map.” What we’re doing is so much bigger than augmented reality, robotics, or drones alone. We’re building the fundamental infrastructure that will connect all of these industries together, allowing machines and humans to have a shared understanding of the physical environment.
Before Scape, you ran a VR/medtech company called Medical Realities, which sought to democratize access to surgeons in the developing world, and created VR apps for clients including BBC, Jaguar, and the Royal Ballet.
I believe that immersive realities are going to have profound effects in many, if not all, industries, and I’ve done work across sectors exploring this in the past nine years. I stepped away from Medical Realities several years ago now, but it was important for me in learning about early experimentation on integrating new workflows within existing industries.
You’ve recruited computer vision PhDs from all over the world including Greece, South Africa, Switzerland, and Taiwan. Is the UK becoming a hub for this work?
The UK is becoming tremendously important in computer vision, driven by expertise from many of the top universities, including: Imperial and UCL [both part of the University of London], and the universities of Oxford and Cambridge. Many large companies, like Facebook and Snap, are centering their computer vision teams here.
Finally, what’s next for you?
I’m off to Munich, Germany, to speak at Augmented World Expo in October and give a sneak peek of our project to build out a game like Fortnite, but in the real world.
Source:
https://www-pcmag-com.cdn.ampproject.org/c/s/www.pcmag.com/news/370434/this-augmented-reality-platform-is-smarter-than-pokemon-go?amp=1
Black Mirror at the office is when... your HR manager suggests you do your next training course through a virtual-reality headset, in a 100% digital environment. Wide-eyed, you listen as she explains that thanks to AI you will pick up new skills better and faster. Hype, a joke, or outright science fiction? Not at all: she is offering you the chance to try immersive learning, through the increasingly recognised mechanism of the "serious game"...
Never heard of this relatively new training format in the professional world? Want to know what it is, how it applies to the world of work, and what its potential limits are? This article should enlighten you.
According to William Peres, founder of Serious Factory, a company that publishes innovative simulation-based learning solutions, immersive learning "makes it possible to train in a controlled virtual environment before facing the real situation. With its Virtual Training Suite software built on this principle, Serious Factory conveys knowledge that is useful at work and develops both know-how and soft skills."
Guillaume Ruzzu, the company's head of marketing and communication, adds: "Immersive learning consists of plunging learners into a virtual immersive universe within their training, where they can practise realistic situations. The aim is to confront them with a reality they could hardly encounter in real life (for example, high-risk situations in the army, or rare pathologies in medicine). The point is to face them with cases they will learn from, all the more so because they can experiment and make mistakes without consequences."
This technique rests on the idea that we learn better by doing. Guillaume stressed this, arguing that "the idea behind immersive learning is that it is better to learn and memorise through experience. Done alongside lecture-style, theoretical teaching, it develops the right automatisms and reflexes. Above all, you memorise much better. Several studies, notably from Stanford and the University of Colorado, have shown that it is far more powerful in terms of impact on the learner, memorisation and long-term learning."
We wanted to know more about this research, and Guillaume helped us grasp its essence. Stanford University is among the pioneers in research combining game-design techniques and neuroscience to demonstrate their impact through video games, particularly in education. The fruits of their work can be found on this site.
Moreover, a study by the University of Colorado on the impact of simulations and games in training showed that learners who took part in a gamified training experience performed better than in traditional training: a 14% increase in the practical skills assessed, 11% in knowledge acquired during training, and 9% in retention of all the content delivered.
Immersive learning is gathering pace. When it appeared around 2010 it was expensive, as digital devices were rare. It was originally used by the US army to train soldiers in simulated situations of extreme danger, the objective being to prepare them safely for what they would face in the field. Hence the name given to certain forms of immersive learning: serious games. Since then, the mass commercialisation of virtual-reality headsets, notably by Samsung and HTC, along with technological progress, has made the tools affordable and allowed a brand-new market to flourish.
Today this method regularly complements theoretical training, online or in person, because it helps anchor learning durably. According to Guillaume, "we learn better by playing. With simulations we get better retention rates because we are more involved. And it adapts to every field. It is very useful in industry, for example: you can recreate environments and worksites in 3D and teach workers good practices. Delivering this knowledge and these soft skills in a playful way makes employees far more receptive."
So if playfulness makes employees more receptive, what are the other advantages of immersive learning, and how can you put them to use in your job?
Immersive learning is designed to complement other forms of teaching. At Serious Factory, the team integrates it into in-person training programmes or adds it to online courses, depending on the company's needs. This is because, thanks to the unique environment it creates, this form of teaching enriches the learner's experience and gives it more depth, leading to very good assimilation of what is learned. Why?
The first guarantee of immersive learning's effectiveness is that learners are more "in it", in other words more focused on what they are learning. They are active and present, and it is this engagement that makes study effective. Guillaume insists on the decisive notion of pleasure. "We know," he told us, "that if learning is gamified it acquires a playful dimension that entertains and makes people enjoy studying. We like it because it is fictional, because we project ourselves and identify, as when watching a film or a series or playing on a computer. Benjamin Bloom, a researcher in educational science, showed in particular that when we project ourselves experientially, a very strong desire to learn develops through this pleasure, and engagement with study is much greater. This multiplies retention."
The serious game fits perfectly into a training path to complement in-person learning: partly because it is experiential and lets learners put their knowledge into practice, but also because these modules can be played several times with different branches depending on the interactions the learner chooses. And regular practice is what genuinely develops skills. Guillaume had a teacher during his technical degree whose motto was: "repetition fixes the notion." The Serious Factory marketing manager links this to the mechanics of the serious game: "You really retain things well. There is, of course, the fact of having practised in a lifelike situation. But the real value is also that it fits in before the in-person session, to prepare for it; during, to energise the training; and afterwards, to practise regularly when the trainer is no longer available. Ten to fifteen minutes from time to time is enough to reactivate what you have learned. This mechanism changes everything for memorisation and true mastery of skills."
The third advantage is that in a serious game, as the name suggests, you are not in real life. You take the exercise seriously, but if you fail there are no real-world consequences. This is enormously disinhibiting, and at the same time it allows training for situations that are rarely encountered.
"The serious game is well suited to training for delicate situations in which it is nonetheless essential to know how to react," Guillaume notes. "It could be, for example, a future psychiatrist stepping into the shoes of a schizophrenic patient with a VR headset; or simulating a difficult operation in medicine; or learning safety standards or how to use dangerous machines in construction. In the building trade we train many employees who are out in the field, have little access to training, and are very enthusiastic about being trained in this very 'playful' way. We are glad this method can meet companies' needs in their goal of continuously developing know-how."
And this is all the more true because, unlike in-person training where you may feel judged, here you are free to experiment and to risk failure without fear of ridicule. As Guillaume notes: "You are behind a character who is your shadow; you risk nothing. No stage fright about speaking up, doing, testing. This is called the Proteus effect, after the Greek god who could change shape. Studies have shown that virtual interactions through avatars produce learners who are more involved, more disinhibited, who take risks, are active, and develop the right reflexes. They are not afraid of getting things wrong or making mistakes, because the impact in the real world is nil. It is therefore a wonderful playground for learning."
Unlike in-person training, immersive learning exercises behavioural qualities, which are just as important as technical skills. Because they are immersed in an environment, learners identify with the characters in front of them, characters who often respond to them thanks to artificial intelligence and machine learning: "We have developed a technology with characters that display emotions evolving in real time, with specific attitudes. For example, if you treat a customer badly in the game, they get angry. Seeing this, learners develop soft skills such as empathy, listening and adaptability, all in response to the virtual character's state. Through these exercises they therefore combine technical and behavioural skills, learning everything the job requires."
Immersive learning is an excellent solution for companies with teams in several countries or working remotely. The tools can be provided, and the whole experience carried out, anywhere. It is also a flexible training format, since learners can do a module whenever they wish and return to certain parts if necessary.
Given these advantages, it is hardly surprising that immersive learning is finding a natural place in the training programmes of large and small companies alike. With it, colleagues in charge of human resources can:
According to Guillaume, the training world is realising that it must go beyond knowledge acquisition. "Our premise is based on the 70/20/10 rule. It's simple: 70% of what we master comes from experience. 20% comes from our social interactions with teachers, mentors and colleagues. At most 10% comes from what we have learned theoretically."
To anchor that 70%, Guillaume assures us, nothing beats experience. "The best means, in a digital context where organisations are cutting costs, is simulation." Going forward, immersive learning should therefore gather speed and blend with other forms of teaching, creating hybrid training paths known as "blended learning" or "flipped classrooms".
This form of teaching can also be used by human resources managers themselves.
"We created an immersive serious game as part of a recruitment campaign for rail-traffic jobs with SNCF. The aim was to raise awareness of the variety of tasks involved in the job of signalling technician. The operation was a success!" says Guillaume.
"At Serious Factory we have an onboarding programme in the form of a serious game, in which we recreated our offices and newcomers discover the different departments. Our CEO's avatar is the virtual coach who guides new hires through the company," Guillaume adds.
"We also train internally with our own tools on specific topics. For example, it helps our sales staff: we built a module presenting our software suite, which serves in marketing prospecting but also internally to introduce our sales pitch and arm people with the right arguments. It is something of a mise en abyme: our tool promoting the tool itself."
And what if, to return to our Black Mirror episode, the serious game became even more serious tomorrow, and were also used to fire colleagues who failed to acquire the right skills fast enough? One thing is certain: this booming form of teaching is nowhere near exhausting its possibilities.
Photo by WTTJ
Source:
https://www.welcometothejungle.co/fr/articles/apprentissage-immersif-immersive-learning