2024 has been marked by significant hype around spatial computing and AI. Wherever you look, you will find something related to one of these two technologies, along with yet another example of how it will help us in the future. As with all hypes, they usually don't last long: they stick around for a while and are then replaced by something else everyone is interested in. Inevitably, this will happen with spatial computing and AI as well. However, there are valid reasons to believe that the interest in, and impact of, these two technologies will not end here.
Rotating hypes
When you look at the history of hypes in the software world, you can notice a certain pattern: every few years, there is a spatial computing hype followed by an AI hype. This was evident between 2015 and 2017, with the advent of the first modern AR and VR devices, followed by a hype for chatbots. Second iterations of these devices arrived around 2019, followed by the machine learning hype. Then came the Metaverse in 2022, followed by ChatGPT, and soon after by Meta Quest 3 and Apple Vision Pro. Finally, Google Gemini, Microsoft Copilot and Apple Intelligence were introduced.
With all this in mind, it is easy to conclude that, while not the only ones, these two technologies have been the most recurrent hypes of the past decade. It is therefore almost certain that the current hypes are not the last ones and that these two technologies will remain dominant in the future. And, even though it sounds counterintuitive, they need each other to achieve this dominance. Let's see why.
Shift in the user interaction paradigm
New technologies get adopted because they make our lives easier. We are now accustomed to cumbersome user interfaces with buttons, sliders, input fields and so on. They may seem intuitive right now, but they actually run contrary to natural human interaction. These controls were simply the only way to interact with computers in the past, and step by step, people got used to them.
When personal computers entered every household in the 90s, I remember how many older people struggled to understand what a mouse cursor does, or what the difference is between a label and an input field. In time, everyone got used to it, but the fact that people struggled proves that the approach was not natural. Later, when smartphones were introduced, the learning curve was much flatter because, with touch screens, gestures and interactions were much closer to our natural behavior.
Today, with spatial computing in the spotlight, user interfaces are being transformed to use more natural gestures, such as pinching and grabbing. And the recent boom in the development of large language models, which led to a boom in virtual assistants, paves the way for voice becoming a primary mode of human-computer interaction.
In the end, products like these will become an everyday thing.
App personalization
With the adoption of spatial computing, the need for app personalization will increase drastically. While traditional mobile and web apps were mostly customized based on a user's habits and explicit actions, spatial computing applications can build far more nuanced user profiles.
With spatial computing technology, we can capture a user's implicit actions. For example, we can track eye movement, how much they move their arms while interacting with the UI, how much they walk around, and so on.
User profiles are not the only customization that can be made. In contrast to traditional mobile and web applications, spatial computing applications run in a 3D environment, often aware of the user's physical surroundings. This provides a lot of additional data for apps to process, such as the number of walls and other surfaces around the user, the amount of space for movement, the number of obstacles, and so on.
All the data mentioned above can be used to learn about both the user and their surroundings and to adapt the app dynamically. For example, depending on how much a user moves their arms, we might adjust how UI elements are positioned. Or we could train an AI model to recognize custom user gestures, enabling a fully customized experience.
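As a minimal sketch of the first idea, an app could aggregate implicit arm-movement signals and use them to decide how far away UI panels should be placed. All names, data schema and thresholds below are hypothetical, not taken from any real spatial computing SDK:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class InteractionSample:
    """One implicit-interaction measurement (hypothetical schema)."""
    arm_reach_m: float   # how far the user extended their arm, in meters
    walk_dist_m: float   # distance walked during the session, in meters

def suggest_panel_distance(samples, near_m=0.45, far_m=0.75,
                           reach_threshold_m=0.5):
    """Place UI panels closer for users who tend to reach less.

    If the average observed arm reach falls below the threshold, bring
    panels within easy reach; otherwise keep them further out so they
    occlude less of the scene.
    """
    avg_reach = mean(s.arm_reach_m for s in samples)
    return near_m if avg_reach < reach_threshold_m else far_m

# A user who barely extends their arms gets panels placed nearby.
samples = [InteractionSample(0.35, 1.2), InteractionSample(0.40, 0.8)]
distance = suggest_panel_distance(samples)  # 0.45 m in this case
```

A production version would of course smooth these signals over time and combine them with the surroundings data described above, but the principle is the same: implicit measurements drive explicit layout decisions.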
Digital assistants
Digital assistants represent the latest trend in AI. It started with coding assistants such as GitHub Copilot, but we can expect more and more examples of AI helping us with well-known, often repetitive tasks. The shift became apparent when Google started replacing Google Assistant with Gemini, and at this year's WWDC, when Siri's integration with ChatGPT was announced.
Everything above leads to the conclusion that in the future we can expect the development of an AI area that has already been given a name – agentive AI. This is an umbrella term for AI tools that perform simple or repetitive tasks on our behalf, for example scheduling our appointments.
To live up to its full potential, agentive AI needs an appropriate platform on which it can run and interact with users. That platform needs to offer easy, natural interaction and continuous voice recognition. I think you already see where this is going. Yes, spatial computing devices are the perfect platform to fully utilize the power of agentive AI and to offer users a simple, intuitive way to use the capabilities of their AI assistant.
Conclusion
We have seen some examples of how AI and spatial computing will coexist in the future. These examples support the thesis that the two technologies need each other to thrive. But this is just the tip of the iceberg.
While it is true that the current interest in both technologies is hype, and hypes tend to fade, there is no doubt that we will see further development of AI models and spatial computing devices. Since AI gives spatial computing applications powerful computation and personalization tools, and spatial computing devices are a great platform for AI, we will inevitably see more and more examples of these two technologies working together.