Embrace the next-generation design paradigm
Are you a designer bored with making rectangles, hamburger menus, and Tinder clones?
Then read on. With the new decade, a new design paradigm is on its way.
From vision to the body
Augmented Reality glasses and Virtual Reality headsets imply vision-centered interactions.
Yet their display technologies, while still maturing, do not by themselves guarantee satisfying user experiences. How we will interact with virtual objects while wearing these devices remains an unsolved problem.
We will interact with virtual objects in a more embodied way than when browsing our smartphones or desktop computers. Gestures will expand from swiping and pinching to touching and holding objects that have volume. Voice input will likely become more prevalent.
Why next-generation design is spatial, virtual, and real-time
Let’s talk about the common principles behind these technologies and the interaction paradigms they both enable and necessitate.
We can encapsulate the next-generation design paradigm into three aspects:
- spatial, in the sense that next-generation designers will embrace physical space and how three-dimensional objects reside in it. Assets laid out in these spaces are volumetric. Next-generation designers need to act out their designs instead of clicking or tapping through them. So-called cognitive walkthroughs, where designers ask test users to complete a task, become actual walkthroughs or choreographies.
- virtual, in the sense that next-generation designers can leverage how their design domain affords embodied interactions with non-physical objects, agents, and environments as if they were physical. Next-generation designers need to empathize with the fact that users of their products and services perceive the elements making up the design against the physical world (AR) or as part of computer-generated environments (VR), or as a blend of virtual and physical environments and agents (Mixed Reality).
- real-time, in that next-generation designers must address the need to update virtual objects’ and environments’ characteristics (e.g. position, shape, color) continuously, depending on where they are in relation to the users inhabiting/perceiving/interacting with them. Next-generation designers need to think about how such objects “anchor” into a physical space (as with AR), or construct a virtual space, and are affected by simulated physics.
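The real-time aspect above can be made concrete with a small sketch: a per-frame update that recomputes a virtual object's properties from its distance to the user. This is an illustrative assumption of how such a loop might look; the `VirtualObject` type, the `update` function, and the thresholds are hypothetical, not the API of any real engine.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    anchor: tuple          # fixed position in physical space (x, y, z)
    opacity: float = 1.0   # recomputed every frame
    highlight: bool = False

def update(obj: VirtualObject, user_pos: tuple, reach: float = 0.6) -> None:
    """Recompute per-frame properties from the user's current position."""
    dx, dy, dz = (a - b for a, b in zip(obj.anchor, user_pos))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Fade distant objects out; highlight objects within arm's reach.
    obj.opacity = max(0.2, min(1.0, 2.0 / (1.0 + distance)))
    obj.highlight = distance < reach
```

A function like this would run once per rendered frame, so the object continuously responds to the user moving toward or away from its anchor point.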
Flat-screen conventions only go so far
Creating content, products, and services in this paradigm of spatial, virtual, and real-time requires a different approach to the design domains you may have practiced in the past.
If you step into (spatial) design as a first-time designer, you don’t need to unlearn many 2D design habits.
For more experienced designers, remaining in the comfort of rectangles is not an option. You need to embrace all three dimensions of space. Product and interior designers are better equipped to deal with this than designers who have plied their trade in digital media.
For example, Virtual Reality headsets track our head and hand movements in real-time, in 3D space. In practice, they let users reach past where a screen's reflective surface would be. They let users look into the distance and move there.
In the smartphone-centered Augmented Reality phase we still live in, we move the camera viewfinder in all three dimensions, stepping closer or farther to frame the virtual objects in the physical space.
Designers are tasked to create interactions for use cases where three-dimensionality and its responsiveness to the user’s actions add value.
Users and audiences tend to describe these qualities as “immersive”. Welcome to immersive design.
Next-generation design is immersive
In next-generation design, while we want to be inclusive in who practices it, we need to be exclusive in how we bring existing conventions into the domain.
Designing interactions with 2D surfaces in a 3D virtual space only makes sense if the flat objects deliberately imitate familiar, flat counterparts from our physical world, and replicate their affordances. A virtual whiteboard in a collaborative VR or AR space is an example where staying true to conventions makes sense.
Even then, surfaces like a whiteboard will reside in physical space next to, above, or behind us, as physical objects do in the world, in relation to our bodies. Not within a browser window or the confines of a tablet app.
If you see 2D objects in AR or VR, ask yourself how their function is part of the solution to a problem — to a user need.
Ask yourself if they create a cone of attention and interaction beyond anyone holding a phone, tablet, or computer.
Ask yourself if they acknowledge your physical movement around them. If your answers are anywhere close to “not really”, the designers have probably gotten their choice of technology wrong. The solution might as well exist as a traditional flat-screen solution.
How to start unlearning 2D design skills and bridge the physical with the virtual
Here are takeaways for you:
- experiment with going beyond storyboards and user flows. They are relevant for scoping and communicating an idea, yet they “speak rectangles”. Move on to brown-boxing and bodystorming: making physical models that you, your team, and test users can act out in physical space.
- Besides cardboard, you can use clay modeling or LEGO bricks. For a design exercise focusing on spatial layouts, I use wooden blocks and marbles to get people to play with three dimensions before taking them into prototyping in AR.
- Take on learning immersive tools, such as Torch AR with an iOS device, or get access to a headset and tools like Tilt Brush and Gravity Sketch to start practicing your craft in 3D.
In my newsletter — see profile for link — I will cover each of these in upcoming posts. Subscribe to get access to tutorials, book and research summaries, and more!
Source:
https://arvrjourney.com/start-future-proofing-your-design-skillset-today-113a2f3d6f10
Photo: Image courtesy of Microsoft. Check out their Mixed Reality toolkit for Unity.