Gesture Recognition

Integrated Gesture Recognition Coming to Consumer Devices and Beyond

As a complement to touchscreen technology, gesture recognition appears poised to be easily and cost-effectively integrated into small systems to enhance the overall user experience.


The world has become well-accustomed to interacting with devices via touchscreens. How many of us have sat at a desktop and instinctively tried to scroll with a finger, only to wonder why nothing happened? Tablets and smartphones without interactive touch technology would be unthinkable. That expectation is now migrating in all directions, into our interactions with home appliances as well as with industrial systems. Now it looks like 3D gesture recognition may be joining the party as well.

As a technology, gesture recognition is not exactly new. Solutions to date have involved a pair of cameras or a stereo camera along with sophisticated software to interpret gestures and make them available to applications. Now a new approach is making its debut from Microchip Technology in the form of a 3D gesture controller that can be connected to a set of sensors, which can be integrated unobtrusively behind screens, beneath keyboards or inside more specialized equipment.

The heart of this approach is the MGC3130 GestIC from Microchip, a 3D electrical field-based tracking and gesture controller. The GestIC performs 32-bit digital signal processing on a combination of sensor inputs, which it interprets to match to defined gestures in an on-chip library called the GestIC Colibri Suite. The GestIC interfaces via an I2C/SPI serial link to the system processor, which means that it can then be easily adapted to other peripheral interfaces such as USB if desired (Figure 1).
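
As a rough illustration of what such a serial link might carry, the sketch below decodes a hypothetical 6-byte event frame into a gesture code and x/y/z position. The frame layout and `decode_frame` helper are invented for this example; the actual GestIC message format is defined in Microchip's interface documentation.

```python
import struct

# Illustrative only: this frame layout is invented for the example;
# the real MGC3130 message format is defined by Microchip.
#
#   byte 0     gesture code (e.g. 1 = flick)
#   bytes 1-2  x position, unsigned 16-bit little-endian
#   bytes 3-4  y position, unsigned 16-bit little-endian
#   byte 5     z (proximity), 0-255

def decode_frame(frame):
    """Unpack a 6-byte event frame read over I2C/SPI into a dict."""
    gesture, x, y, z = struct.unpack("<BHHB", frame)
    return {"gesture": gesture, "x": x, "y": y, "z": z}
```

On the host side, a driver would read such frames from the serial link and hand the decoded fields to the application or the operating system's input layer.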

Figure 1
The MGC3130 GestIC is a 5 mm x 5 mm device that interfaces to a transmit/receive panel to sense hand positions in 3D space. The on-chip DSP section interprets gestures by correlating them with a stored gesture library, and a serial interface passes the results to the system, which makes them available to applications.

The GestIC, in a 5 mm x 5 mm, 28-pin QFN package, works in conjunction with a sensor array consisting of a transmit electrode, which emits a signal and is mounted behind the five receive electrodes, insulated from them by a thin layer of material. The chip interprets the x/y/z position of the hand from the relative strengths of the received signals. The emitted electrical field is in the range of 70-130 kHz, with frequency hopping to avoid RF interference, and is resistant to ambient light and sound interference. The system has a detection range of 15 cm, and the electrode array can be sized to fit a desired form factor such as a laptop, tablet or phone screen, providing 100% surface coverage to eliminate “angle of view” blind spots. The electrodes can be made of any conductive material, such as PCB traces or a touch sensor’s Indium Tin Oxide (ITO) coating (Figure 2). Power consumption in the active sensing state can be as low as 150 μW.
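
The positioning principle can be sketched with a toy model: treat each receive electrode's signal strength as a weight on its known location and take the centroid, with overall strength standing in for proximity. This is purely illustrative; the electrode coordinates, the `estimate_position` helper and the z mapping are assumptions, not the MGC3130's actual DSP algorithm.

```python
# Toy model only: not Microchip's proprietary DSP. Five receive
# electrodes at known positions in a normalized sensing plane.
ELECTRODES = {
    "north":  (0.0,  1.0),
    "south":  (0.0, -1.0),
    "east":   (1.0,  0.0),
    "west":   (-1.0, 0.0),
    "center": (0.0,  0.0),
}

def estimate_position(signals):
    """Estimate hand x/y as a signal-weighted centroid of the electrode
    positions, and z from the total received strength (a closer hand
    perturbs the field more, so weaker total means farther away)."""
    total = sum(signals.values())
    if total == 0:
        return None  # no hand in the detection range
    x = sum(ELECTRODES[name][0] * s for name, s in signals.items()) / total
    y = sum(ELECTRODES[name][1] * s for name, s in signals.items()) / total
    z = 1.0 / total  # arbitrary illustrative mapping
    return (x, y, z)
```

A hand biased toward the east electrode, for example, pulls the estimated x coordinate toward the east side of the panel.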

Figure 2
The MGC3130 evaluation kit includes an example of the kinds of panels that can be implemented using GestIC technology. Beneath the first layer is the transmit electrode, which corresponds to the full board area. On the surface are five receive electrodes that interpret the hand’s position in the electrical field in three-dimensional space.

The low active sensing state power applies when the device is configured for the Auto Wake-Up on Approach state, which enables always-on gesture sensing for power-constrained applications. Once awakened, power consumption increases to a maximum of 90 mW, but after a predetermined period of inactivity the device can resume Auto Wake-Up mode. This capability lends itself to uses such as room lighting controls and proximity sensors, to name a few.
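
A minimal host-side sketch of that duty cycle follows; the class, event names and timeout value are invented for illustration, and only the 150 μW and 90 mW figures come from the text above.

```python
# Hypothetical model of the Auto Wake-Up on Approach duty cycle.
class GesticPowerModel:
    WAKEUP_UW = 150      # ~150 uW in the low-power approach-sensing state
    ACTIVE_UW = 90_000   # up to 90 mW while fully active

    def __init__(self, idle_timeout_s=5.0):
        self.idle_timeout_s = idle_timeout_s  # assumed, not a chip spec
        self.state = "wakeup"                 # start in Auto Wake-Up
        self._last_activity = 0.0

    def on_approach(self, t):
        # A detected approach wakes the device into full sensing.
        self.state = "active"
        self._last_activity = t

    def on_gesture(self, t):
        if self.state == "active":
            self._last_activity = t

    def tick(self, t):
        # Drop back to Auto Wake-Up after a period of inactivity.
        if self.state == "active" and t - self._last_activity >= self.idle_timeout_s:
            self.state = "wakeup"

    def power_uw(self):
        return self.ACTIVE_UW if self.state == "active" else self.WAKEUP_UW
```

The same pattern, always-on low-power detection that gates a higher-power mode, is what makes uses like room lighting controls practical.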

It should be noted that GestIC technology is aimed at a different set of applications and devices than the broader, more comprehensive camera- and software-based gesture systems, such as those now available on some Intel-based Ultrabooks. Such technology can recognize everything from fingers closing to grasp an object up to full body movements, and can therefore address much broader applications, including robot arm and hand control, which are clearly not targeted by the GestIC.

Intel offers Gesture Recognition for Ultrabooks based on software developed by Computer Vision Systems of St. Petersburg, Russia. The company specializes in software for use in video-based systems to extract 3D data in real time, primarily for gesture recognition applications. The proprietary algorithms extract gestures in a specific 3D zone of complex dynamic shape. This allows the Ultrabook user to interact via gestures while browsing the Internet or reading documents, but it does not interfere with the use of the keyboard, which is outside the detection zone.

A somewhat similar issue is addressed by the GestIC in that it will no doubt be expected to work in conjunction with the use of keyboards and touchscreens. Use with keyboards is not much of a problem because a sensor pad placed below a keyboard will sense movement of a user’s hands, but that movement will not correspond to any defined gestures in the Colibri Suite or custom library and so will not interfere with operation of the device.

Combining gesture recognition with touchscreen operation, however, requires controlling how and when the two technologies are allowed to send input to the system. Touchscreens increasingly use capacitive and projected capacitive touch, which allows a sheet of glass to cover the screen surface. This makes it straightforward to simply shut off gesture recognition as soon as a touch is detected and turn it back on when the touch is removed. Alternatively, the two subsystems could be multiplexed at some rate, though this is a more complex approach. The point is that both can coexist without interfering with one another, and thus can complement each other in the user experience.
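
The simple gating policy described above can be sketched as a small arbiter that drops 3D gesture events while a touch is in progress. The event dictionary format and `InputArbiter` class are invented for illustration:

```python
# Hypothetical input arbiter: touch input always wins, and 3D gesture
# events are suppressed while a touch is down.
class InputArbiter:
    def __init__(self):
        self.touch_down = False

    def handle(self, event):
        """Return the event to forward to the application, or None."""
        kind = event["type"]
        if kind == "touch_down":
            self.touch_down = True
            return event
        if kind == "touch_up":
            self.touch_down = False
            return event
        if kind == "gesture":
            # Gate gesture recognition off while the screen is touched.
            return None if self.touch_down else event
        return event
```

A multiplexing scheme would replace the boolean gate with a time-sliced schedule, at the cost of the extra complexity noted above.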

As noted, in contrast to camera/software-based gesture recognition, for which one can develop a very wide variety of recognition algorithms, the GestIC has a set of predefined gestures stored on-chip as the Colibri Suite. These consist of the Wake-Up on Approach function and x/y/z hand-position tracking, which the GestIC performs at a resolution of 150 dots per inch and a sampling rate of 200 Hz. In addition, there is a library of predefined gestures: flick gestures such as might be used to turn a page, and circle gestures for such things as rotating a page or a selected object.

There is also a set of symbol gestures for easily recognized symbols like an “M” or a “J.” There is probably no “X” symbol because, while we can easily distinguish an “X” from an “O,” the system would have trouble ignoring the hand motion needed to move from one stroke of the “X” to the other, which could be interpreted as part of a single continuous gesture. Therefore, while the chip does have the capacity to store custom-defined gestures in its integrated flash memory, care must be taken that they are distinct from other gestures.
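
One illustrative way to enforce that distinctness (not Microchip's method) is to compare a candidate gesture's stroke-direction sequence against the existing library and reject near-duplicates. Both helpers below are invented for the sketch:

```python
# Illustrative distinctness check for custom gestures, where a gesture
# is represented as a sequence of sampled stroke directions
# ("U", "D", "L", "R").
def direction_overlap(a, b):
    """Fraction of matching direction samples between two sequences."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(1 for i in range(n) if a[i] == b[i]) / n

def is_distinct(candidate, library, threshold=0.7):
    """Accept a custom gesture only if it is sufficiently different
    from every gesture already in the library."""
    return all(direction_overlap(candidate, g) < threshold for g in library)
```

Under such a scheme, a candidate that retraces most of an existing gesture's strokes would be rejected before being written to flash.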

The GestIC technology is being positioned for easy integration into new and upgraded system designs, where the biggest effort would be the inclusion of a suitably sized panel to carry the transmit and receive electrodes. The small IC itself could then go almost anywhere, followed by modification of application and/or operating system software or drivers.

To that end, Microchip is offering a set of development aids consisting of what it calls the Sabrewing Single Zone Evaluation kit, which is a board including a set of mounted electrodes, depicted in Figure 2, along with a mounted MGC3130 GestIC adapted to a USB interface. In addition, there is a software suite called Aurea with a graphical user interface (Figure 3) that allows graphical depiction of the functioning of the gesture sensing as well as a way for developers to tailor the defined functions in the Colibri Suite to their own system commands. As a side note, the words Colibri, Sabrewing and Aurea are all terms that refer in some way to hummingbirds.

Figure 3
A page of the Aurea tool’s graphical interface shows the relative strength of the five signals that are used to detect the hand’s 3D position (upper right). The trace of a circle gesture is shown at upper left.

Gesture recognition is on the way to becoming a part of the normal user experience we will come to expect in our interaction with technology. While it may appear in different forms for different target application areas, the GestIC represents an approach that targets a defined set of interactions that can be easily integrated at low cost into a variety of consumer and industrial products. 

Microchip Technology
Chandler, AZ.
(888) 624-7435

Computer Vision Systems
St. Petersburg, Russia

Intel
Santa Clara, CA.
(408) 765-8080