I am fascinated by the idea of augmented reality lenses, so as soon as I saw they had a private demo room, I ran towards its door begging for a demo. They looked at me like a crazy guy addicted to XR, but in the end, they were very kind and let me try the public demos of their products. Unfortunately, I was not authorized to take pictures, but they gave me permission to write about my experience, so here is my experience with AR contact lenses… or sort of, because actually, I could not wear the lenses.
If you don’t know Mojo Vision, my suggestion is to read the interview I had with them or this long, detailed article on Fast Company. Long story short, it is a company working on building augmented reality contact lenses. Yes, you’ve read that right: not augmented reality glasses, but contact lenses. They want to cram all the hardware necessary for visual augmentations into a tiny lens that sits on your eye. The lenses it aims to manufacture should have, as a baseline, these internal components:
- A microdisplay to let the user see the augmentations;
- An image sensor for seeing the surroundings and processing them through computer vision (e.g. edge detection);
- Eye-tracking sensors (accelerometer, gyroscope, magnetometer);
- A battery system;
- A 5 GHz radio antenna that lets the lens communicate with an external unit;
- An ARM Core M0 processor that acts as a “traffic cop” for the data,
with the idea of expanding the capabilities of the lenses over time, maybe by adding further components.
This may sound like a futuristic idea, but Mojo already has internal prototypes of its technology, and a few of its executives have already worn them. The company aims to release its first product within the next several years. Of course, there are many safety concerns about wearing an electronic device in a sensitive area like the eyes, so inside Mojo Vision there are not only people working on the technology side of the lenses, as in all hardware companies (e.g. software developers, hardware designers, etc…), but also people from a medical background (e.g. optometrists, medical device professionals, etc…) who work on guaranteeing that the lenses are comfortable and safe for their users.
The first use cases of the lenses are about offering better vision to people with visual impairments (e.g. highlighting the edges of the objects in the field of view of people who don’t see very well). The next target is the sports/performance/wearables sector, and in the end, the moonshot is reaching the consumer market. As for the price, they hope for something comparable to a high-end smartphone (so I guess around $1000-1500).
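To give an idea of the edge-highlighting use case mentioned above, here is a toy sketch (my own illustration, not Mojo’s actual algorithm): compute a Sobel gradient magnitude over a grayscale image and keep only the strong edges, which a lens could then render as bright overlay lines for a low-vision user.

```python
# Toy edge detection sketch: Sobel gradient magnitude + threshold.
# This is my own illustration of the general technique, not Mojo Vision's code.
import numpy as np

def edge_mask(gray, threshold=0.25):
    """gray: 2D float array in [0, 1]; returns a boolean mask of edge pixels."""
    # Sobel kernels approximate the horizontal/vertical intensity gradient.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Naive convolution over the image interior (skipping the 1-pixel border).
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)  # per-pixel gradient strength
    return magnitude > threshold

# A synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mask = edge_mask(img)
print(mask[4])  # the edge shows up around columns 3-4
```

A real implementation would of course run optimized convolutions on the image sensor data, but the principle (gradient magnitude, then a threshold, then a bright overlay) stays the same.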
All of this is crazy cool, and now you get why I rushed towards their booth like a crazy guy to try their product.
Mojo Vision demo hands-on
Of course, Mojo Vision can’t have people wear these prototype lenses, so I didn’t get the opportunity to feel like I was in the future. But I had three small demos that made me super-excited all the same.
The first demo was seeing a prototype of the lenses. An executive from Mojo placed in my hand a tiny plastic lens, saying that this is how Mojo lenses are made. It was mind-blowing: I had in my hand this tiny lens, and inside its shell, I could see a lot of mini-circuits. It sounds impossible to fit a full computing unit in such a small device, but that is exactly what Mojo has designed. And it was there, in my hand, so it was real. I could see all the tiny electrical components installed inside it. Wow.
The second demo was with the famous “stick” that everyone has been able to try until now. They gave me something that looked like a “reverse popsicle”: I held a small box with a stick coming out of it. The stick was made of semi-transparent plastic, and at its end, there was a small circle with a tiny green luminous dot at its center. I said “oh, there is a tiny light here” and the Mojo Vision employees looked at me and said: “This is our display”. I was like “NO WAY, that is a tiny dot, it is not even a millimeter wide, it is just a status LED, it can’t be a display in any possible way”. So they invited me to bring the tiny green dot close to my eye. And at that moment, the magic happened: when the green dot was very close to my eye, I could see images inside it: text, tables, even small parts of videos (I remember seeing a frame with Baby Yoda from The Mandalorian). It was a display. Really. In a tiny dot. My mind was totally blown.
The display was just monochrome green, and of course, it didn’t have a high resolution due to its limited dimensions, but the quality of the images was enough for me to read some short text and watch some small GIFs with a big character in the foreground. I tried standing in front of the window installed in the room, and the visuals were still clear, unlike with modern AR glasses. This means that the display emits enough light to be useful outdoors too. As the Mojo Vision people told me, this is necessary because contact lenses must be worn at every moment of the day, and it would make no sense to limit their use to indoor contexts.
This demo was the most mind-blowing thing I tried at AWE… a dot as big as the tip of a pencil that actually is a display. You should try it to understand how incredible it is. Technology sometimes feels like magic.
The third demo was about the UX of Mojo Vision lenses. I was instructed to wear a Vive Pro Eye with passthrough activated, so I could see my surroundings and, on top of them, occasionally spot some green lines. The Mojo employees told me to look at the periphery of my vision (at the very far left, right, etc…), and there I could actually see some green icons, all connected by a line forming a circle. By looking at one of those icons for a while, I could see a popup appear inside the outer circle showing something related to that icon. For instance, if the icon was about music, the popup was about the music playing at that moment. Inside these popups, there were other interactive elements (usually tiny arrows), which I could stare at for a while to trigger some action (e.g. a button to pause the music currently playing) or to open some internal menus. I couldn’t take a screenshot, but these images provided by Mojo Vision to Fast Company can more or less convey the idea:
The outer circle had more or less the same size as the FOV of my eyes, so I really had to look to the far left, right, up, and down to see the outer icons. This was a bit uncomfortable and gave me a bit of eye strain. The inner “popups” were visible only if I triggered the related menu (e.g. the popup about music was visible only if I had previously stared at the “music” icon), and only one at a time. As you can see from the image above, the simulated FOV for the virtual elements was not that big, probably like that of a HoloLens 1, so only one of the popups at a time fitted in my FOV.
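The stare-to-activate mechanism I experienced is a classic dwell-based gaze selection pattern: an element fires only after the gaze has rested on it continuously for a fixed time. A minimal sketch of that logic (my own illustration, with invented names like `GazeDwellSelector`, not Mojo’s actual code) could look like this:

```python
# Minimal dwell-based gaze selection sketch (illustrative, not Mojo Vision's code).
class GazeDwellSelector:
    def __init__(self, dwell_time=0.8):
        self.dwell_time = dwell_time  # seconds of continuous gaze required
        self.current_target = None    # element currently under the gaze
        self.elapsed = 0.0            # time spent on that element so far

    def update(self, target, dt):
        """Feed one eye-tracking sample per frame; return the id of the
        element to activate, or None. `target` is the element the gaze is
        on (None for empty space), `dt` is the frame time in seconds."""
        if target != self.current_target:
            # Gaze moved to a different element: restart the dwell timer.
            self.current_target = target
            self.elapsed = 0.0
            return None
        if target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            self.elapsed = 0.0  # reset so the action doesn't re-fire every frame
            return target
        return None

selector = GazeDwellSelector(dwell_time=0.8)
fired = []
# Simulate 90 frames at 60 fps (1.5 s) of staring at the "music" icon.
for _ in range(90):
    result = selector.update("music_icon", 1 / 60)
    if result:
        fired.append(result)
print(fired)  # the icon fires exactly once, after ~0.8 s of continuous gaze
```

The dwell-time parameter is exactly the trade-off I felt in the demo: too short and elements trigger while your eyes are just wandering, too long and the interface feels sluggish.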
This demo was very interesting because it made me understand some of the many challenges that Mojo has to face. First of all, it showed me that in this first phase, more than AR contact lenses, these are smart lenses, something like smartglasses in my eyes: they showed me additional information that was 2D, not 3D elements anchored in the world. I guess 3D augmentations may come in the future, but the lenses would have to perform positional tracking and environment understanding, which at the moment sounds like too much.
It also showed me that there is a whole new interface to think about. The lenses have to be worn every day, and they can’t truly rely on additional hardware. You don’t have a frame to tap like with smartglasses, or controllers to use for pointing like with VR headsets. Most of the time, you just have your eyes, so the interface should rely mostly on them. The problem is that our eyes are not meant to provide active input, but rather passive input: every day our eyes explore and wander around, while other parts of our body are meant to interact (e.g. our hands or feet). So in my opinion, every interface that is merely eye-dependent is uncomfortable to use. In some cases (e.g. for people with disabilities) it is a necessity, but otherwise, it’s better to use other interaction means. That’s why I think that the interface Mojo showed me is cool for a fast check on something (e.g. a quick glance at the music playing), but shouldn’t be used for long interactions (e.g. playing a game). I hope the company will work on integrating other input means, like voice commands, interactions through a companion phone app, or whatever else suits best, to offer longer and more complex interactions.
My final impression is that I am impressed by the work of Mojo Vision. They showed me a physical prototype, the actual display used in the lenses, and also a first prototype of the UX users can employ while wearing the device. This means that the company has already worked on both hardware and software, and it actually has demos to show, so it is not just claiming fluff, but actually building a product. I am still scratching my head over my test of the display, which was incredible.
After trying the demos, I am more convinced than before that smart contact lenses may become a reality in the upcoming years: maybe not for the mainstream market, which may need a long time, but for specific use cases. The technology is being developed right now, and it is just a matter of refining it and making sure that it is safe for its users. It’s not an easy task, but I think it is doable, especially if the first step targets a very small niche of controlled users.
I really can’t wait for the moment when I can actually put such a lens in my eye for a demo, maybe in the next few years. Let me dream a bit.
Photo: Mojo Vision