At Design Partners, we believe that Mixed Reality (MR) is going to be a big part of our future working lives. It has many applications in architecture, engineering, and education, to name just a few. A good part of our business is designing for professionals in these industries, so we’ve been exploring how we can support them in bringing mixed reality and augmented reality (AR) design into their businesses.
Designing for MR is a distinctive experience: you have to adapt your existing design skills to a new environment. In this post we share five lessons from our work with MR and AR. We hope you will find them useful to refer to in your own projects.
Augmenting someone’s reality doesn’t always need to be visual. For users working in hazardous environments, floating visual interfaces can be distracting. In response, we found that sound has incredible potential as an augmentation tool.
During field research with construction workers, we noticed that they tended to wear their ear defenders only when they had to, because the defenders are cumbersome and isolate you from your co-workers.
We designed HEXA in response to this insight. HEXA augments the user’s hearing and removes the barriers to communication. It monitors your health and your environment, alerting you to hazards such as a gas leak or a colleague working above you. Using sound as the augmentation method also solved the problem of visual occlusion in this case: a new take on augmented reality design.
We’ve all seen images of people in movies with a holographic HUD (head-up display). One of the problems with this is occlusion: if you put a holographic display in front of users, they may not be able to see critical real-world elements in their peripheral vision. Imagine you were on an oil rig; you don’t want lots of content blocking your view. Being distracted by complex MR content in your field of vision could be dangerous and lead to an accident.
Any UI that is necessary in a sensitive environment needs to be kept to a minimum and be movable, removable or minimisable. Only show information that’s needed, when it’s needed.
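As a purely illustrative sketch of the “only what’s needed, when it’s needed” principle (the names `HudItem` and `visible_items` are our own, not from any MR SDK), the idea can be expressed as a simple priority filter:

```python
from dataclasses import dataclass

@dataclass
class HudItem:
    label: str
    priority: int      # 0 = critical safety alert, higher = less urgent
    dismissible: bool  # the user may move or remove it

def visible_items(items, max_items=2, attention_demanded=False):
    """Return only the few items worth showing right now.

    When the environment demands the user's full attention,
    show nothing but critical safety alerts.
    """
    if attention_demanded:
        candidates = [i for i in items if i.priority == 0]
    else:
        candidates = sorted(items, key=lambda i: i.priority)
    return candidates[:max_items]
```

With a gas-leak alert at priority 0 and a schedule widget at priority 5, only the alert survives the filter while the wearer’s attention is demanded elsewhere; everything else waits.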
Since the MR UI lives in 3D space, we need to consider depth, distance, angle of view, contrast and size much more carefully. Just as with app design, paper prototypes are one of the fastest ways of vetting initial ideas, but for MR you need to build prototypes in three dimensions. Our desks quickly became littered with maquettes made of Post-its, upturned books and whatever other objects we could find, with designers ducking and weaving around our constructions to understand how the UI could appear in MR before prototyping experiences digitally.
It’s also important to consider the context in which information could be displayed. If someone is using AR in various locations, be careful with transparency in UIs, as it may be hard to see in some environments or light levels, and make sure elements are placed at a reasonable distance so the user can focus on them. We’ve conducted ethnographic research with firefighters, oil-rig engineers and construction workers to understand how AR, VR and MR can naturally enhance their workflows, and we’ve learnt that it is important to consider how existing tools and familiar workflows can be utilised to encourage adoption.
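Depth and size interact: an element that is legible at arm’s length shrinks to nothing a few metres away unless it is scaled with distance. A rough sketch of the geometry (the comfort bounds here are illustrative numbers of our own, not a published standard):

```python
import math

def clamp_placement(distance_m, min_m=1.25, max_m=5.0):
    """Clamp a hologram's placement distance into a comfortable
    focus range (illustrative bounds, not an official guideline)."""
    return max(min_m, min(distance_m, max_m))

def size_for_angle(distance_m, angle_deg):
    """Physical width in metres a UI element needs at `distance_m`
    to subtend `angle_deg` of the user's field of view."""
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)
```

So an element meant to subtend about one degree of view needs to be roughly 3.5 cm wide when placed two metres away, and proportionally larger further out.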
We’ve been working on a UI for content creation in MR and exploring many options. As there are no hard-and-fast rules for designing in MR or AR yet (no ‘Material Design’ for MR to act as a guide), it takes lots of trial and error to reach an innovative solution. However, this is also a huge opportunity for designers, as we can help define the future standards for MR interface design.
For most users, working in AR is a new phenomenon, so as designers we need to find appropriate metaphors to ground some of these new interactions in familiar territory (e.g. Microsoft HoloLens’ toolbox). Interestingly, this sometimes leads us back to skeuomorphism as a method of guiding the user, in the same way it helped orient early iPhone users.
If you’ve ever tried using your hands to point at and grab objects in front of you for any prolonged period, you’ll know it is tiring. Gaze-based selection can be quite useful in environments where your hands need to be free, but it is slower than tapping or clicking with a mouse. Voice is a great interaction tool, but not always suited to the environment or the task at hand.
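To make the gaze trade-off concrete, here is a minimal sketch of dwell-based selection, the common hands-free pattern where fixing your gaze on a target for a moment triggers a click. This is our own illustration, not code from any headset SDK, and the class and method names are invented:

```python
class GazeSelector:
    """Dwell-based selection: holding your gaze on one target for
    `dwell_time` seconds triggers a selection, keeping hands free."""

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self._target = None
        self._elapsed = 0.0

    def update(self, target, dt):
        """Call once per frame with the currently gazed-at target
        (or None) and the frame time `dt` in seconds.

        Returns the target once the gaze has dwelt long enough,
        otherwise None.
        """
        if target != self._target:   # gaze moved: restart the timer
            self._target = target
            self._elapsed = 0.0
            return None
        if target is None:
            return None
        self._elapsed += dt
        if self._elapsed >= self.dwell_time:
            self._elapsed = 0.0      # require a fresh dwell to reselect
            self._target = None
            return target
        return None
```

The built-in delay is exactly why gaze is slower than a mouse click: a one-second dwell costs a full second per selection, which is the price paid for leaving the hands free.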
There is too much focus on creating purely digital solutions for general-purpose interfaces on head-mounted displays (HMDs), such as Daqri’s Smart Helmet or Microsoft’s HoloLens. If designers instead take a task-based approach to interaction problems, such as ‘how should augmented reality content be created for end users?’, we quickly realise that physical tools help solve complex interactions and deliver superior experiences.
We’d like to democratize content creation for AR and MR. We’re currently working on how users can create superior content in context, so people can build their own content for mixed reality devices without resorting to complex software or expensive development. This is a huge opportunity from both a design and a business standpoint.
…Stay tuned over the next couple of weeks for more to be revealed.