In the 1960s, the Augmentation of Human Intellect project developed the oN-Line System (NLS), a computer with a mouse-driven cursor and a window-based graphical interface. Fast forward roughly fifty years, and we are still using a mouse and keyboard; however, we are not likely to be doing so for much longer.
Image by Makoto Funamizu
The concept is essentially a tablet with a transparent display that delivers information dynamically to help with everyday life.
In other words, if you are reading a book, you can place the transparent pad on top of its pages to translate the text, or hover it over an apple to see its nutritional info from the Counting Calories website.
This idea draws on existing technology. For example, digital cameras already have face recognition, and this function can be adapted to recognize other objects. Add a data connection such as Wi-Fi or 3G, a scanner, and a tailored operating system that can query search engines, wikis and other sources, and you'll be able to look up a word from a paperback book at the touch of a fingertip.
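The pipeline described above can be sketched in a few lines. This is a toy illustration, not a real device: the recognizer is a stub standing in for a vision model, and `NUTRITION_DB` is a hypothetical local stand-in for an online data source.

```python
# Sketch of the lookup pipeline: recognize an object, then query a
# data source for information about it. Both stages are stand-ins;
# a real device would run a classifier and hit a live web service.

NUTRITION_DB = {  # hypothetical stand-in for an online source
    "apple": {"calories": 95, "fiber_g": 4.4},
    "banana": {"calories": 105, "fiber_g": 3.1},
}

def recognize(image_bytes: bytes) -> str:
    """Stand-in for an object-recognition model."""
    # A real implementation would classify the image contents.
    return "apple"

def look_up(label: str) -> dict:
    """Query the data source for the recognized object."""
    return NUTRITION_DB.get(label, {})

info = look_up(recognize(b"\x89PNG..."))
print(info)  # {'calories': 95, 'fiber_g': 4.4}
```

The point of the split is that recognition and lookup are independent: swap in a better classifier or a richer data source and the rest of the pipeline is unchanged.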
Image by Steve Jurvetson
It may seem a bit too sci-fi, but neural interfaces are being actively researched. Take Brainloop, for example: the idea behind it is that the user thinks of something and the interface makes it happen.
However, the way it works is still fairly rudimentary. It uses “level-based” menus: if you want to see a map of London, you first think of a map, then of the country, and only then of London. It would be far better if you could simply think of a map of London, but we don’t yet understand the mind well enough for that.
Level-based menus attempt to resolve the issue of information overload. The brain does many things at once, and a computer simply can’t recognize every thought or easily tell which ones to discard. If predefined functions focus the user’s mind and limit the choices your interface has to work with, it’s easier to translate thoughts into action.
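The map-of-London walk described above boils down to stepping through a tree one level at a time. Here is a minimal sketch of that idea; the menu contents are illustrative, not Brainloop’s actual menus.

```python
# Sketch of a "level-based" menu: at each step only a handful of
# predefined options are valid, so the interface has to distinguish
# a few thoughts rather than decode arbitrary intent.

MENU = {
    "map": {
        "uk": {"london": "show map of London",
               "manchester": "show map of Manchester"},
        "france": {"paris": "show map of Paris"},
    },
    "music": {"play": "start playback", "stop": "stop playback"},
}

def navigate(menu: dict, choices: list) -> str:
    """Walk the menu one level at a time, as the user 'thinks' each choice."""
    node = menu
    for choice in choices:
        node = node[choice]  # only the current level's options are valid
    return node

print(navigate(MENU, ["map", "uk", "london"]))  # show map of London
```

At every level the classifier only needs to tell two or three candidate thoughts apart, which is what makes the approach tractable with today’s brain–computer interfaces.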
There are also therapeutic applications for such interfaces: consider people with paralysis, for whom they could create a bridge between thought and action and help restore motor ability.
You see a lot of these in movies, but they are being developed in real life too. The concept behind holographic interfaces is that you don’t need a monitor; instead, you use a projection onto a medium that can reflect light, such as a wall or a pane of glass.
This kind of interface uses motion capture and speech recognition to move between options. Such technologies are already being applied in gaming and interactive television. To pick a menu option, all you need is a gesture; alternatively, you can call up functions through a microphone: just say “YouTube” if you want to watch a video.
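The “just say YouTube” behaviour is essentially keyword dispatch: the recognizer produces text, and a table maps keywords to actions. A minimal sketch, with illustrative command names (the speech-to-text step itself is assumed to happen elsewhere):

```python
# Sketch of keyword-based voice dispatch: scan the recognized
# utterance for a known keyword and run the matching action.
# The commands and their effects are hypothetical examples.

def open_youtube() -> str:
    return "opening YouTube"

def show_weather() -> str:
    return "showing weather"

COMMANDS = {
    "youtube": open_youtube,
    "weather": show_weather,
}

def handle_utterance(text: str) -> str:
    """Dispatch the first recognized keyword to its action."""
    for word in text.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]()
    return "no matching command"

print(handle_utterance("Hey, open YouTube"))  # opening YouTube
```

Gesture input fits the same shape: a gesture classifier emits a label, and the same kind of table maps it to an action.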
Looking at these three types of user interface, the current trend seems to be toward immediate interaction: we are trying to cut out steps so that the delay between what a user wants and what happens is reduced to a minimum.
To reach this goal of fast, fluid interaction between human and computer, what would you like to see in an interface?
Gavin Harvey is a personal trainer with a busy life. While he might not use a computer every day, his tablet is very handy for keeping track of his training schedule and appointments, so he takes an interest in related technologies. He’s also an avid blogger for Softel Group.