Research in AIP Publishing’s Applied Physics Letters outlines developments that could enable autonomous vehicles to see around corners rather than only along straight lines, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.

Schematics of (a) a conventional sensor, which detects only light intensity, and (b) a nanostructured multimodal sensor, which detects various qualities of light through light-matter interactions at the subwavelength scale. (Image: Yurui Qu and Soongyu Yi)

According to the research, these nanostructured components, integrated on image sensor chips, are most likely to have their biggest impact in multimodal imaging.

“Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines,” said Yurui Qu of the University of Wisconsin-Madison, a co-author of the Applied Physics Letters paper. “An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts.”

Image sensors convert light into electrical signals and are composed of millions of pixels on a single chip; the tricky part of the research was combining and miniaturizing multifunctional components as part of the sensor.

To detect spectra across multiple bands, the researchers fabricated an on-chip spectrometer: photonic crystal filters made of silicon were deposited directly atop the pixels to create complex interactions between incident light and the sensor.

According to Applied Physics Letters, “[t]he pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device — less than a hundredth of a square inch in size — is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.”
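
The paper’s reconstruction code is not published here, but the inference it describes is a standard linear inverse problem: each photonic crystal filter transmits a known fraction of every spectral band, so the pixel readings are linear mixtures of the unknown spectrum. The Python sketch below illustrates that step under stated assumptions; the transmission matrix, band counts, and regularization weight are all made-up placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of spectral inference from filtered pixel readings.
# Assumption: each filter's transmission curve is known from calibration,
# so readings = T @ spectrum, and the spectrum is recovered by solving a
# regularized least-squares problem.

rng = np.random.default_rng(0)

n_filters = 16    # pixels, each under a distinct photonic crystal filter
n_bands = 64      # spectral bands to reconstruct

# T[i, j]: transmission of filter i in band j (placeholder random values)
T = rng.uniform(0.0, 1.0, size=(n_filters, n_bands))

# A smooth "ground truth" spectrum for the demo, visible to near-infrared
wavelengths = np.linspace(400, 1000, n_bands)  # nm
true_spectrum = np.exp(-((wavelengths - 650) / 80) ** 2)

# Pixel readings: filtered light energy plus a little sensor noise
readings = T @ true_spectrum + 0.01 * rng.standard_normal(n_filters)

# Tikhonov-regularized least squares: with fewer filters than bands the
# system is underdetermined, so a small energy penalty picks a stable fit
lam = 0.1
A = np.vstack([T, lam * np.eye(n_bands)])
b = np.concatenate([readings, np.zeros(n_bands)])
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

print("reconstruction error:", np.linalg.norm(estimate - true_spectrum))
```

The dynamic range and resolution trade-offs the paper mentions would appear in such a pipeline as choices of filter set and regularization.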

The researchers also built a component that detects angular information, which can be used to measure depth and construct 3D shapes at subcellular scales. The inspiration came from the directional hearing sensors found in animals whose heads are too small to determine where sound is coming from.

Pairs of silicon nanowires were constructed as resonators that support optical resonance. The optical energy stored in the two resonators is sensitive to the incident angle, with the wire closer to the light producing the stronger current. Comparing the strongest and weakest currents from the two wires determines the angle of the incoming light waves, and millions of these nanowire pairs can be placed on a 1-square-millimeter chip.
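
As a rough illustration of that comparison step (our sketch, not the authors’ algorithm), the normalized difference between the two photocurrents can be mapped back to an angle through a calibration table; the response curve below is a toy stand-in for a measured one.

```python
import numpy as np

# Hypothetical angle estimation from a nanowire resonator pair.
# Assumption: calibration gives the pair's normalized current contrast
# (I1 - I2) / (I1 + I2) as a monotonic function of incident angle, which
# can then be inverted by interpolation at runtime.

cal_angles = np.linspace(-30, 30, 61)            # degrees
cal_contrast = np.sin(np.radians(cal_angles))    # toy monotonic response

def estimate_angle(i1: float, i2: float) -> float:
    """Estimate the incident angle (degrees) from the two wire currents."""
    contrast = (i1 - i2) / (i1 + i2)
    # np.interp needs increasing x-values; cal_contrast is monotonic here
    return float(np.interp(contrast, cal_contrast, cal_angles))

# Wire 1 reads the stronger current, so light arrives from its side
print(estimate_angle(1.2, 0.8))  # ~ +11.5 degrees under this toy model
```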

The research could also support advances in lens-free cameras, augmented reality, and robotic vision.

Here is an interview with the paper’s co-author, Yurui Qu.

Tech Briefs: What’s the next step with the technology?

Yurui Qu: The next step is to apply miniature multimodal sensors to real-life applications. A very exciting prospect is integrating multimodal sensors into cell phones so that people can monitor their own health. Health indicators can be read from the levels of many compounds in the body, such as hemoglobin and carotene, each of which has a characteristic spectrum. The spectral-sensing pixels are small enough to be integrated with a phone lens; they can detect the characteristic spectra of multiple components at the same time and give an assessment of the body’s health status.
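
To make the spectral health-check idea concrete: if each compound has a known reference spectrum, its relative level can be estimated by linear unmixing of the measured spectrum. The sketch below is our hypothetical illustration; the reference curves are made-up placeholders, not real absorption data for hemoglobin or carotene.

```python
import numpy as np

# Hypothetical linear unmixing of a measured spectrum into compound levels.
# Reference spectra here are placeholder Gaussians, not real measurements.

bands = np.linspace(400, 700, 31)                      # nm
ref_hemoglobin = np.exp(-((bands - 560) / 30) ** 2)
ref_carotene = np.exp(-((bands - 450) / 25) ** 2)
R = np.stack([ref_hemoglobin, ref_carotene], axis=1)   # (bands, compounds)

# Toy measurement: a mixture of the two references
measured = 0.7 * ref_hemoglobin + 0.3 * ref_carotene

# Ordinary least squares; a real pipeline would enforce non-negativity
levels, *_ = np.linalg.lstsq(R, measured, rcond=None)
print(dict(zip(["hemoglobin", "carotene"], levels.round(3))))
```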

Tech Briefs: When will it be available for use?

Yurui Qu: Giving a specific time is not easy. What I can say is that our technology is based on CMOS processes, which have evolved and matured tremendously over the past 20 to 30 years. This will greatly reduce the time to get our technology to market. We may see some products in the next 3 to 5 years.

Tech Briefs: Will there be a large market for it? Do you think it will catch on? Why? Why not?

Yurui Qu: Multimodal sensors have a very wide range of markets and application scenarios, such as autonomous vehicles, biomedical imaging, robotics, and digital health monitoring. Each of these has a large market. Digital health monitoring devices, for example, had a market size of $82.4 billion in 2021, with a forecast of $446 billion by 2028. The pandemics of recent years have increased the need for at-home monitoring of vital signs such as respiratory rate, pulse, and blood oxygen.

The advantage of multimodal sensors is that they are lightweight, compact, and easy to integrate with mobile devices such as phones or watches. We expect them to catch on: demand for multimodal sensors should grow further, as the pandemic has greatly increased the general population’s desire to stay healthy.

Tech Briefs: How will this change the machine-vision game?

Yurui Qu: Current machine vision mainly uses RGB trichromatic intensity information. But light carries rich information beyond intensity, such as phase, direction, and polarization. One or more additional dimensions can be added to the original three-channel image, which greatly extends the capabilities of machine vision. This extension lets the machine see more; for example, transparent cellular tissue becomes clearly visible to a phase sensor.
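
As a minimal sketch of what “adding dimensions” means for a vision pipeline (our illustration, not from the paper), the extra modalities can simply widen the channel axis that downstream models consume:

```python
import numpy as np

# Hypothetical multimodal image stack: RGB intensity plus assumed
# per-pixel incident-angle and phase maps from the multimodal sensor.

h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.float32)    # conventional intensity
angle = np.zeros((h, w, 1), dtype=np.float32)  # incident-angle channel
phase = np.zeros((h, w, 1), dtype=np.float32)  # phase channel

multimodal = np.concatenate([rgb, angle, phase], axis=-1)
print(multimodal.shape)  # (480, 640, 5): three channels become five
```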

Tech Briefs: Pros? Cons?

Yurui Qu: Pros: compact size, fast response, multi-dimensional information, and CMOS compatibility. Cons: a moderate sacrifice in resolution.

Tech Briefs: Are you or the team working on any other such advances?

Yurui Qu: We are now working on applying multimodal pixels to chemical and material recognition, as well as biological hyperspectral imaging. Another project is integrating health-detection sensors into smartwatches.

Tech Briefs: Anything else you’d like to add?

Yurui Qu: We think advanced signal processing and deep-learning algorithms can significantly enhance the performance of a light-sensing pixel array in the future. We are exploring appropriate options for the use of neural networks in multimodal light-sensing pixels.