Ultrasonic Sensing

Figure 4. A smartwatch equipped with Chirp’s ultrasonic sensors enables the user to control the watch’s functions without touching the screen. (Credit: Chirp Microsystems)

Designers are also turning to ultrasonic sensing to complement both touch and voice in applications such as smartwatches and augmented reality (AR)/virtual reality (VR) eyewear.

Ultrasound devices emit high-frequency sound and analyze the returning echoes to identify the size, range, and position of an object, whether moving or stationary. These sensors enable consumer electronics devices to detect the motion, depth, and position of objects in three-dimensional space.

Time-of-flight (ToF) measurements are made by emitting a pulse of high-frequency sound and listening for returning echoes. In air, the echo from a target two meters away returns in around 12 milliseconds, a timescale short enough for ultrasound to track fast-moving targets, but long enough to separate multiple echoes without requiring great amounts of processing bandwidth.
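The round-trip timing quoted above follows from simple arithmetic: an echo from a target at distance d returns after t = 2d/c, where c is the speed of sound in air (roughly 343 m/s at room temperature). A minimal sketch of the calculation, using assumed constants rather than any particular sensor's datasheet values:

```python
# Back-of-the-envelope check of ultrasonic echo timing (illustrative only).
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at ~20 °C

def echo_round_trip_s(distance_m: float,
                      speed_m_s: float = SPEED_OF_SOUND_M_S) -> float:
    """Time for an ultrasonic pulse to reach a target and echo back."""
    return 2.0 * distance_m / speed_m_s

# A target two meters away: the pulse travels 4 m round trip.
t = echo_round_trip_s(2.0)
print(f"{t * 1000:.1f} ms")  # prints "11.7 ms", consistent with ~12 ms
```

The same relation, inverted, is how a sensor converts a measured echo delay back into range: d = c·t/2.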

Ultrasound offers significant benefits over optical range-finding solutions such as infrared time-of-flight (IR ToF) and IR proximity sensors. Ultrasonic sensors are ultra-low power, drawing less than 15 microwatts at one range measurement per second, roughly 200 times lower than competing IR ToF sensors. They also operate in all lighting conditions, including direct sunlight, and provide fields of view as wide as 180 degrees.

Chirp Microsystems, based in Berkeley, CA, offers high-accuracy ultrasonic sensing development platforms for wearables and for VR/AR. The platform senses tiny “microgestures” with 1-mm accuracy, allowing users to interact with smartwatches, fitness trackers, or hearables (see Figure 4). Chirp’s platform also allows users to interact with the VR/AR environment without being tethered to a base station or confined to a prescribed space.

David Horsley, CTO at Chirp Microsystems, predicts that gesture and touch will coexist in some products.

“In tablets and laptops, more work is needed on the UI because air-gesture should complement and not replace the touch interface,” Horsley said. “Although introducing a new UI is hard, I think that with the correct UI design, the gesture interface could become universal, in five years or more.”

Ultrasonic devices can also protect sensors from visual and physical exposure. With ultrasonic sensing, a designer could conceal a fingerprint sensor behind the screen itself. Shielding an otherwise exposed sensor from repeated physical contact could also simplify waterproofing and extend the fingerprint sensor’s life.

Future Markets for Mics

Hearables that add active noise cancellation use six microphones per pair — more than a smartphone. Smart home products such as Amazon Echo and Google Home employ large arrays of microphones to spatially isolate a single speaker in a noisy environment.

The increased use of microphones, according to Crowley, allows hearables to challenge even smartphones and smart speakers in terms of microphone unit volumes.

“While this is already a healthy market that will continue to grow massively, hearables could still clinch the number one spot because they both interface with smartphones and can function as standalone devices that stream music, source directions, and track our fitness,” said Crowley. “The hearable market is the tip of the AR market, which has huge potential.”

Though the automotive industry’s adoption of new technology has historically been slow, automakers are actively designing in more advanced audio systems, which bodes well for the use of MEMS microphones. Today’s cars feature one or two microphones, but new designs and autonomous vehicles will require 10 to 25.

Touch user interface technologies will continue to share real estate with voice and gesture technologies for years to come, but the relative allocation of those technologies is shifting. Touch will remain dominant in some applications, including keyboard-driven computing platforms such as laptops and tablets.

Voice, however, is likely to be more pronounced in hearables and smart home products such as smart speakers, and will gain headway in smart appliances and automotive uses. While gesture co-resides with touch and/or voice in many applications, gesture interfaces should become more prevalent in VR/AR applications, as replacements for fingerprint sensors, and in other scenarios for which the swipe of a finger can seamlessly communicate a host of information.

This article was written by Karen Lightman, vice president, SEMI, MEMS & Sensors Industry Group (MSIG). As a resource for linking the MEMS and sensors supply chains to diverse, global markets, MSIG advocates for near-term commercialization of MEMS/sensors-based products through a wide range of activities, including conferences, technical working groups, and industry advocacy.