REDMOND: Jonathan Cluts walks into what he calls the Microsoft Home, and places his mobile phone on a tray next to the door. The phone starts charging. No wires attached.

He walks to the kitchen and calls out for a particular recipe; promptly, an overhead projector, which we can barely see, projects the recipe onto a countertop surface and reads it aloud. The dining room is arranged for a children's party; around each plate on the dining table are small virtual airplanes, which you can fly around the table surface using hand gestures.

The bedroom is in some ways the most fascinating. When we walk in, the colours of the walls and the digital screen images convey the feeling of a teenager's room. Cluts waves his hand and these transform into a décor the grandmother would prefer.

If you have seen a Microsoft Kinect, or read about it, you have a sense of what might be happening here. Kinect can sense your body motions, recognize your voice and your commands, and take appropriate action.

It has been applied to gaming, through Microsoft's Xbox (obviating the need for a controller), but what the Microsoft Home shows is that the technology, and the basic understanding behind it, can be used in all kinds of contexts, and could simplify computing further, just the way Microsoft first did with Windows.

Natural user interface
These are what are also called natural user interfaces, or NUI: interfaces that come almost naturally to us as human beings, through the use of our voice, our motion or our touch. They do not require artificial control devices such as a keyboard, mouse or remote, whose operations have to be learnt. And the technologies themselves are effectively invisible.

Cluts, who is the director of the Microsoft Home and strategic prototyping, admits some of these applications are years from reality. In the home context, they require the installation of cameras, microphones and motion detectors around the house, and many of us may find that creepy today. For this very reason, some applications may never become reality.

But Microsoft believes the NUI potential is enormous and the direction has become fundamental to much of what the computing giant is working on. "The idea is to make technology disappear so that more people can participate," says Alex Kipman, the architect of Kinect.

Kinect comes with two eyeballs and ears in its hardware. It also illuminates the room with its own light source, so that it is not limited by the lighting in the room, and you don't have to adjust the lighting to make it recognize you (again, it's 'natural' at work).

The eyeballs capture the distance at which you are standing and the colours; they scan the room, and they scan you. The brain, or the processor, is in the Xbox; it takes all of the sensory input and translates it into identity recognition, voice recognition and human tracking.

Machine learning
An enormous amount of expertise in machine learning has gone into Kinect. The system recognizes you every time you come to play. "For large populations, doing this reliably is difficult. And it was a hard problem to solve," says Microsoft Research head Rick Rashid, who helped Kipman's team. Kinect involves other kinds of machine learning too.

It comes with what you could call an empathy engine. Based on the intonation of your voice and the positioning of your body, it can guess your mood, whether you are happy or sad. "The tracker even realizes that when you are moving your arms, if your dog suddenly jumps around the room near you, it should keep the focus on your arms," says Rashid.
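To give a flavour of the machine learning behind that tracking: the published Kinect research classified each pixel of the depth image into body parts using simple depth-difference features, which a trained forest of decision trees then turned into a skeleton. The sketch below is only an illustration of one such feature, not Microsoft's actual code; the toy depth map, offsets and thresholds are all made up for the example.

```python
import numpy as np

def depth_feature(depth, pixel, u, v):
    """Depth-difference feature of the kind used for per-pixel
    body-part classification in the Kinect research literature.
    The probe offsets u and v are scaled by 1/depth at the pixel,
    so the feature responds similarly whether the player stands
    near the camera or far from it."""
    y, x = pixel
    d = depth[y, x]

    def probe(offset):
        # Offsets are (dy, dx), scaled by the depth at the pixel.
        oy, ox = int(y + offset[0] / d), int(x + offset[1] / d)
        h, w = depth.shape
        if 0 <= oy < h and 0 <= ox < w:
            return depth[oy, ox]
        return 1e6  # off-image probes read as "very far away"

    return probe(u) - probe(v)

# Toy depth map: a "near" square (the player, 2 m away) on a
# "far" background (8 m away).
depth = np.full((10, 10), 8.0)
depth[3:8, 3:8] = 2.0

# A probe pair straddling the player's right edge fires strongly...
edge = depth_feature(depth, (5, 5), (0, 8), (0, -2))
# ...while the same pair on flat background gives roughly zero.
flat = depth_feature(depth, (1, 1), (0, 8), (0, -2))
```

In the real system, thousands of such features, with thresholds learnt from a vast corpus of labelled depth images, vote on which body part each pixel belongs to; that is what lets the tracker keep its focus on your arms rather than the dog.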

Rashid says Kinect technologies such as 3D viewing and voice recognition, and technologies such as accelerometers (which measure motion) and magnetometers (which measure the strength or direction of the magnetic field) could in various combinations produce a host of other applications.