The AR cloud is a foundational concept for contextualizing the future of augmented reality. Also known as the mirrorworld, the magicverse, and other monikers, it's all about data anchored to places and things, which AR devices can ingest and process into meaningful content.
This concept has been a central topic of industry discussions over the past two years. Charlie Fink even wrote a whole book about it (disclosure: we wrote a chapter in the book). Applying that knowledge, Fink recently led a Niantic-hosted panel discussion about making the world clickable.
Picking up where we left off in Part I of this series, the concept of a clickable physical world first requires some organization. In that way, Google is well-positioned because the spatial web will need to be parsed and indexed, similar to what Google has done for 20 years on the 2D web.
“If you look at Google’s history, the first mission was to help organize the world’s information on the web and make it useful for people,” said Google’s Justin Quimby. “In 2005, Google shifted and set out to map the world […] And with the idea of searching what you see, we’ve already started to make some inroads with Google Lens and Live View and other products to use the camera as an input source and a search modality. I see it as an expanding surface area for how people can ask questions and get information about the world.”
The spatial web will also require infrastructure and standards. Again, this will be similar to the 2D web, which has data transfer protocols (HTTPS), common languages (HTML), and organizational layers like Google to index, validate and rank all of that data for reliability and quality.
“What are the URLs for the physical world?” posed Quimby. “We have a shared coordinate system using latitude and longitude […] So we have these sort of open standards today of how you can say ‘I have a piece of content attached to this location.’ Now the interesting challenge on top of that becomes, what information is there and who is displaying it? […] The way Google looks at it is we want to make sure we’re providing utility to users. And part of that is making sure that the information is real.”
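Quimby's point — that latitude/longitude already gives us an open way to say "this content is attached to this location" — can be sketched in a few lines. The record type, URI scheme, and naive proximity query below are hypothetical illustrations, not any real AR cloud API; the only standard piece is the lat/lng coordinate system itself.

```python
import math
from dataclasses import dataclass

@dataclass
class AnchoredContent:
    """Hypothetical record pairing AR content with a lat/lng anchor (WGS 84 degrees)."""
    lat: float
    lng: float
    payload: str  # illustrative asset reference; the "ar://" scheme is invented here

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two lat/lng points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(index, lat, lng, radius_m=50):
    """Naive linear scan: everything anchored within radius_m of the viewer."""
    return [c for c in index if haversine_m(lat, lng, c.lat, c.lng) <= radius_m]

index = [
    AnchoredContent(40.7580, -73.9855, "ar://times-square/billboard"),
    AnchoredContent(48.8584, 2.2945, "ar://eiffel-tower/history-overlay"),
]
hits = nearby(index, 40.7581, -73.9856)  # a viewer standing in Times Square
```

A real spatial index would of course replace the linear scan with a geospatial structure (geohash buckets, an R-tree), and the hard problems Quimby names — who publishes, who validates — live well above this layer.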
Nine-Tenths of the Law
Beyond veracity and value are questions of ownership and permissions. A longstanding question in AR circles has been: who owns your augmented realities? The classic example is whether McDonald's has the right to anchor promotional AR graphics on top of a Burger King location.
It’s a key question because AR is so new: there’s no legal precedent or case law to guide AR “property rights.” One potential near-term patch is for physical-world property rights to confer ownership of digital AR rights for anything geo-anchored, though that’s a bit fuzzy.
“There are companies trying to solve that using similar hierarchical structures to how real estate is owned today,” said HTC’s Amy Peck. “We’re going to have layer upon layer of these experiences. Some are going to be data, some are going to be entertainment, and some are going to be things that we as individuals are going to be recording and hanging on a particular location. What does that structure of discovery look like for the people I want to see this experience? How do I put those permissions into play? These are things in the standardization process we need to look at.”
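Peck's "layer upon layer" model — public data, entertainment, and personal recordings hung on one location, each with its own discovery permissions — can be illustrated with a minimal sketch. The layer type, visibility values, and friend graph below are assumptions for illustration; no standard for this exists yet, which is exactly her point.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """Hypothetical AR layer hung on a location."""
    name: str
    owner: str
    visibility: str                             # "public", "friends", or "private"
    allowed: set = field(default_factory=set)   # user IDs granted explicit access

def visible_layers(layers, viewer, friends_of):
    """Return the layers this viewer is permitted to discover at a location."""
    out = []
    for layer in layers:
        if layer.visibility == "public":
            out.append(layer)                   # anyone can discover public layers
        elif viewer == layer.owner or viewer in layer.allowed:
            out.append(layer)                   # owners and explicit grantees
        elif layer.visibility == "friends" and viewer in friends_of.get(layer.owner, set()):
            out.append(layer)                   # friends of the layer's owner
    return out

layers = [
    Layer("transit-data", owner="city", visibility="public"),
    Layer("birthday-memory", owner="alice", visibility="friends"),
    Layer("draft-notes", owner="alice", visibility="private"),
]
friends = {"alice": {"bob"}}
seen = visible_layers(layers, "bob", friends)  # bob sees public + friend layers
```

Here "bob" discovers the city's public data layer and Alice's friends-only memory, but not her private notes; a real standard would also have to handle the hierarchical real-estate-style ownership Peck describes.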
Beyond the what and how, there’s also the question of who. 6D.ai founder Matt Miesnieks holds the historically validated opinion that vertically integrated platforms will be advantaged because they can create more seamless experiences, which will be a key success factor in AR’s early days.
He points to the early days of smartphone apps, when success was decided by thin margins of UX detail, such as how many taps a given app required. This level of UX obsession and design will be even more important in AR, given that it’s something you may wear on your face.
This means that companies like Apple — famous for elegant integration of hardware and software — hold advantages over third-party developers who don’t have that same tight control over hardware integration and sensor fusion. This bodes well for first-party app dominance in AR.
“The interruption to your line of sight is more intrusive than a phone click,” said Miesnieks. “So the drive to eliminate friction in the interactions in glasses is going to be much stronger than even on phones. And my belief is that this is going to reward very tightly integrated software and hardware. Apple is really good at that. Snap is heading down the same path. There’s going to be just that extra bit of friction in open-architected products. I feel like the pendulum is going to swing more towards first-party apps and services like Apple payments rather than PayPal or Shopify or Stripe.”
But Miesnieks also points out that a third-party developer ecosystem will be necessary in an AR cloud-driven world. Despite the need for more tightly integrated hardware and software, third parties will be critical for the decentralized mission of crowdsourcing the AR cloud’s assembly.
“The only prevailing counterforce that blows in favor of the web is that the real world is really big and really messy,” he said. “Even a $2 trillion market cap company isn’t big enough to manage everything that’s going on in the real world. We’re going to need some way for this mess to be constantly evolving and adapting, and the web is the way to do that. So I’m kind of curious to see how those two forces play out when it comes to AR.”
Speaking of Apple, the question on everyone’s mind is the fate of its rumored AR glasses. Charlie Fink believes that rather than chasing the modest revenue of AR glasses in an underdeveloped market, Apple could be thinking bigger (as it does): the $200 billion corrective eyewear market.
This could be the first step towards hardware that evolves over time — like the iPhone did — into what we now envision for AR glasses. V1 could involve simpler “augmentation” in the form of making humans see better, along with sexier features like Doug Thomson’s Pop Principle.
“Apple is not going to give you an interrupted experience or tether you to a phone,” said Fink. “Instead, they’re going to go after the thing that 50 percent of us do, which is wear glasses. […] And instead of someone like me who wears trifocals, it will know if I’m driving at night… It will know if I’m trying to read in the dark… It will know if I’m watching TV. It will take this business that is 400 years old and disrupt it — a $200 billion per year business. I would run out the door screaming with as much cash as it took to augment my vision… to make me a better human, the way that smartphones give me access to all the world’s knowledge and augment us as humans.”