Illustrator: Rodrigo Diaz Mercado
In Part 1 of our (N)UI Revolution article, we discussed some of the newer tools on mobile devices and other gadgets that are shaping the revolution in user interfaces and experience. 3D sensors are another key component of this change.
Gesture Interfaces are not possible without a 3D sensor. Current 3D sensors (Kinect-like sensor bars and the Leap Motion Controller) are on their way to making gesture interfaces commonplace. While the technology is not yet sci-fi Hollywood quality, it makes it possible to start experimenting with the concepts. Moreover, combining it with current peripherals (such as a gesture-enhancing glove) could bridge the gap between ‘almost there’ and ‘mind-bending’.
Recently, there has been a lot of big news around 3D-sensing technology. In Q4 of 2013, Apple acquired PrimeSense (the company behind the Kinect for Xbox 360 sensor bar and its open source twins) and quickly shut down its open source 3D-sensing libraries; Microsoft, for its part, developed the new Xbox One sensor bar independently. At CES 2014, Intel announced a line of 3D sensors (RealSense) ranging in size from a small pencil to a common webcam, and said it plans for these sensors to replace the standard laptop webcam as a built-in feature. Nuidroid was announced as a Gesture Interface library for Android devices, and Google is working on a 3D-sensing phone currently called ‘Tango’.

There are also independently funded projects betting on the NUI revolution: Interaxon’s Muse, an EEG brain-wave sensor that can be used with your iOS device, and Meta’s SpaceGlasses, which is ambitiously trying to bring augmented reality and Gesture Interfaces together on one platform. Startups and major research and innovation organizations have already begun to implement NUI in practical and useful ways. Military and intelligence agencies are early adopters of touch and gesture interfaces, with plans to further explore these emerging technologies, and NASA has developed its Robonaut 2 to use an intent-based interface with NUI-style inputs and controls.
Advertising has also taken notice, especially Coca-Cola, which has created emotion-stirring campaigns using 3D sensing and Gesture Interfaces. Here at Grip, we are one of a handful of Canadian ad agencies taking part in the Kinect for Windows (2.0) preview program. Many other 3D-sensing installations have been created all over North America, and other startups have begun to apply this technology to health and fitness. The applications of this technology are limitless!
3D sensing is more than just using your body to instruct software to accomplish a task. With 3D-sensing technology, bringing real-world objects into a virtual world could become trivial. To a small degree you can already buy children’s toys that work with iOS apps, but with a 3D sensor bar, you could bring any prop along on your virtual adventures. Even ‘magic objects’ (objects that are pre-defined within the software to have special properties) will open up our virtual worlds to new experiences. Imagine gamers buying collectable items that can be displayed in their living room; to use such an item in-game, however, the sensor would likely need to detect the player holding it.
NUI isn’t just about Gesture Interfaces. It’s about interacting in ways that feel invisible and intuitive. These interfaces should be designed without a steep learning curve, so that using them feels closer to the real world than to a virtual one. This will create a stronger connection to technology. Ideally, NUI will make technology feel more like a part of ourselves rather than yet another tool to exploit.
Even with all the advantages that NUI presents, there will still be an era of transition in which UI designers will need to experiment to develop new standards. In recent history, it has been the modus operandi of Interactive Designers to capitalize on existing, established metaphors to shape our users’ experiences. Before the web matured into what we have now, we saw clunky navigation metaphors, bad menus and mystery-meat navigation. Lots of mistakes were made, noted and later avoided, but at the cost of countless frustrated users.
With these new technological advances we’ll soon see more and more NUI-based sensors. In the coming years, you’ll see Interactive Designers fumble through difficult-to-use, unintuitive interfaces until we re-establish a set of best practices for these new inputs. There may be a period when NUI is perceived as confusing and hard to use, but as users grow accustomed to the new input devices and the metaphors they represent, it will become second nature.
New design issues will emerge and new features will need to be developed. Designers may have to adapt modern techniques to new problems, such as using responsive design to accommodate the user’s distance from the input device rather than screen size. Standardization questions will be debated: to ‘go back’, should the standard be a left swipe or a left arm push? Until voice recognition is perfected, a new way of inputting text will need to be designed, along with, possibly, a temporary replacement for the right click. All of these considerations will only be revealed once we start experimenting with gesture-centric UI.
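To make the distance-responsive idea concrete, here is a minimal sketch of what such a rule might look like. Everything in it is hypothetical: the function name, the presets and the interaction range (0.8 m to 4 m) are illustrative assumptions, not part of any real sensor SDK. The only premise is that a depth sensor can report how far the tracked user is standing from the screen.

```typescript
// Hypothetical sketch: "responsive" UI scaling driven by user distance
// rather than screen size. Assumes a depth sensor reports the tracked
// user's distance from the screen in metres; all presets are illustrative.

type UiScale = { fontSizePx: number; buttonSizePx: number };

// Map a distance reading to UI dimensions: the farther the user stands,
// the larger text and targets must be to stay legible and easy to hit.
function scaleForDistance(distanceMetres: number): UiScale {
  // Clamp to an assumed interaction range of 0.8 m to 4 m.
  const d = Math.min(Math.max(distanceMetres, 0.8), 4.0);
  // Linear interpolation between a near preset and a far preset.
  const t = (d - 0.8) / (4.0 - 0.8);
  return {
    fontSizePx: Math.round(16 + t * (48 - 16)),
    buttonSizePx: Math.round(44 + t * (160 - 44)),
  };
}
```

In practice a designer would define breakpoints ("near", "mid", "far") rather than a continuous scale, much as responsive web design uses media queries for screen width today.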
The emergence of NUI-related technologies will bring revolutionary changes to the world of technology and, by extension, our personal lives. This will change the way everyone thinks and feels about technology. There will be new challenges ahead, but it’s an exciting time for those innovative and creative enough to take on the challenges. I hope you enjoy the NUI revolution!