A user wears the new device on her neck for sensory feedback. (Image: John A. Rogers/Northwestern University)
Tech Briefs: What are you currently working on?

Matthew Flavin: I’m starting a lab at Georgia Institute of Technology in the area of wearable bioelectronics. There are many applications, some of them even in things like gaming and social media. However, I'm especially interested in biomedical applications. We want to develop systems that help people. One of the general problems I've been addressing in my research is using a haptic patch that stimulates the skin to help people who have different neurological disorders.

The ultimate demonstration of our recent project has been systems that help people who have different types of impairments — visual impairment, in particular. We’re developing systems that can substitute for, and augment, missing sensory information and help them be much more confident in their daily lives. One of the directions that we're pursuing now is working with people who have had visual impairment from a very early age.

A lot of the research in this area of wearable bioelectronics is developing sensors to help detect things that are going on inside the body — to help monitor and prevent illnesses. However, we use sensors not for detecting things inside the body, but for delivering information to our haptic devices. We call this area of research epidermal virtual reality, similar to how a virtual reality headset like Oculus tries to reproduce a realistic and immersive sense of visual stimuli.

We're using our haptic devices to recreate an immersive sense of physical touch. The first thing you think of is VR goggles for gaming, and we're interested in all the applications of those technologies. But we're especially interested in things that can be used to help people.

Our project at Northwestern University was about technology for engaging the skin in new ways that haven't previously been possible. Our skin is kind of like our eyes: our eyes have different receptors for red, green, and blue, which are the basis of our color vision. Our skin has different touch receptors. For example, if you're moving cloth across your skin, it's going to be perceived by a different type of cell, a different type of receptor, than the receptors you use for lower-frequency pressures, such as poking.

The device that we developed can twist the skin, can vibrate the skin, and can also press. It's kind of like going from black-and-white vision to color vision. The haptic devices that most people will be familiar with are vibration actuators. Your phone has one, and video game controllers have them. But vibration is only part of the picture.

Tech Briefs: What frequencies are the vibrations?

Flavin: The actuator that we're using lets us apply a very broad range of frequencies. However, what we tested was between 50 and 200 Hertz, which is the range of frequencies that our skin is most sensitive to.

There are other types that can deliver vibration over broader ranges, but with less intensity. What’s really special about our device is that along with being able to vibrate, it can also press. But that requires a lot more energy, a lot more force. If you vibrate your skin at 200 Hertz, you could move it tens of microns — you could feel that — it doesn't take a lot of energy to do that.

But to feel pressing, you would have to press in a few millimeters, which is why you don't see that as much in video game controllers.

There are existing devices that press your skin, but they require a ton of energy, and they're tethered to a power supply. So, if we're talking about things that you want to have people carry on their skin, to walk around with, or for any daily activity, you don't want to be tethered.

But our device can do all those things; it can deliver patterns of mechanical stimuli, and it's powered by just a battery. It's one integrated system that we showed could be used for different applications. The way we're able to achieve the required force with low energy is by harnessing the energy stored in the skin and recovering it during operation, rather than fighting against it.

We store energy by using the skin as a spring, and then combine that with some special magnetic materials to produce a bistable operation. What bistability means is that it acts kind of like a light switch: flip it on and it stays on; flip it off and it stays off. We don't need to spend energy to hold it in either state; we just need a quick burst of energy to flip it back and forth.

So, by combining the skin’s natural mechanical properties and special magnetic materials, we have this bistable system that can either be indenting the skin or not indenting the skin. Then we just need a quick burst of energy to switch back and forth.
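To make the bistable idea concrete, here is a minimal sketch of a driver that only draws current during state transitions, which is the energy-saving behavior described above. The coil interface, pulse duration, and the way vibration is approximated by rapid toggling are all hypothetical assumptions for illustration, not the group's actual firmware.

```python
# Illustrative sketch (not the actual firmware): a bistable actuator driver
# that spends energy only on state transitions. The coil object and pulse
# duration are hypothetical placeholders.

import time

class BistableActuator:
    """Latches in the 'indenting' or 'released' state with no holding current."""

    def __init__(self, coil, pulse_ms=5):
        self.coil = coil          # hypothetical coil driver with .pulse(polarity, ms)
        self.pulse_ms = pulse_ms  # short current burst needed to flip states
        self.indenting = False    # magnetic latch plus skin spring hold the state

    def press(self):
        if not self.indenting:                  # only flip if not already pressed
            self.coil.pulse(+1, self.pulse_ms)  # brief burst drives plunger into skin
            self.indenting = True               # latch holds it; no further current

    def release(self):
        if self.indenting:
            self.coil.pulse(-1, self.pulse_ms)  # reverse burst lets the skin spring back
            self.indenting = False

    def vibrate(self, freq_hz=200, duration_s=0.5):
        # Toggling the latch in the 50-200 Hz band approximates the vibration mode.
        period = 1.0 / freq_hz
        t_end = time.time() + duration_s
        while time.time() < t_end:
            self.press()
            time.sleep(period / 2)
            self.release()
            time.sleep(period / 2)
```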

Tech Briefs: Can you explain the function of the magnetic material?

Flavin: There are two key elements to the actuator’s structure. One is the plunger, which moves the skin. Built around that is a core structure, a cylinder made of soft ferromagnetic material, which has an electromagnetic coil in it to move the plunger in and out.

Tech Briefs: Do you use the same actuator for vibrating, pressing, and rotating?

Flavin: One device will be able to deliver pressing and vibration. Another device, which is basically a variation of the same structure, does the rotating. For that, we use a kirigami-like structure to translate linear motion into rotational motion.

Since the devices are very small, we can fit multiples of them close to each other on the skin, where they can work kind of like sub-pixels. It's similar to a color monitor: although there are independent red, green, and blue pixels, they're close enough together that we perceive them as a single color.

Tech Briefs: It's hard for me to understand how someone can translate these sensations into an understanding of their environment.

Flavin: Haptics, to different extents, have been around for decades. There have even been vests with vibrating actuators in them that could tap out the stock market in Morse code. Obviously, you need to learn how to do that, which is a huge barrier to it being useful, especially when you could just pick up your phone and get the prices.

Our approach was that if we can make the stimuli as intuitive as possible, which means matching them to some natural sensory experience, it will be much easier to learn and can be learned over relatively short training periods. In our case, what that means is having the individual elements of the array correspond spatially to specific regions in what would be our visual field. So, when something you know crosses that particular area, it's going to trigger indentation in that spot. The users are instructed to try to identify where the object is, based on where they're feeling it in the device. It takes a little bit of training, but usually within an hour or two they're able to kind of pick that up. And we think that if they have more training, it could become second nature.

Tech Briefs: So, you're saying, for example, that it would focus on a wall, and then if something came in front of the wall, it would pick that up and communicate it haptically?

Flavin: Yes. We're getting that information from our cell phones, which are very sophisticated now. The newer ones incorporate LiDAR to measure distances, and LiDAR is able to map out the three-dimensional surroundings. You can tell not just that there's a wall in front of me, but how far it is from me. An app on the phone connects by Bluetooth to our haptic device. So, we are taking that information, which is very complex, and communicating it to a device that's capable of reading out that complex information in a way that someone could perceive it without having vision.
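As a rough illustration of that data flow, the sketch below reduces a phone-derived depth map to the nearest obstacle in each region of the view and packs the result into a small message that could be sent over Bluetooth. The region grid and packet layout are assumptions made for this example, not the actual protocol, and the real app runs on the phone rather than in Python.

```python
# Illustrative sketch of the data flow: depth map -> per-region nearest
# distances -> compact packet for the haptic device. Grid size and packet
# layout are hypothetical.

import struct

def nearest_per_region(depth_m, rows=4, cols=8):
    """Reduce a 2D depth map (list of lists, meters) to per-region minima."""
    h, w = len(depth_m), len(depth_m[0])
    minima = []
    for r in range(rows):
        for c in range(cols):
            block = [depth_m[y][x]
                     for y in range(r * h // rows, (r + 1) * h // rows)
                     for x in range(c * w // cols, (c + 1) * w // cols)]
            minima.append(min(block))
    return minima

def encode_packet(minima_m):
    """Pack distances as centimeters (uint16) behind a one-byte count."""
    cm = [min(int(d * 100), 0xFFFF) for d in minima_m]
    return struct.pack("<B%dH" % len(cm), len(cm), *cm)
```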

Tech Briefs: It seems like there are some pretty sophisticated algorithms involved.

Flavin: Yes, there are, but the great thing is that when we use those LiDAR systems, we're utilizing some of the libraries that Apple makes available. And since they want people to use their systems, they expose APIs that can be used to write code for mobile devices out of the box.

Tech Briefs: Does all of the computation take place in the phone rather than in your device?

Flavin: That's right. Our devices are embedded systems that can do some pretty sophisticated computation, but embedded systems are always resource constrained. So, 3D reconstruction and classifying images are things we’re doing right now on the cell phone. And we're talking about doing some even more sophisticated things by moving some of the computation to the cloud. Having very close communication between the devices and the Internet gives us many more resources. If we could connect to the cloud, we could have a supercomputing level of processing. Especially if we figure out how to make that low enough latency, there are some interesting things we can do to expand our capabilities.
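The latency point suggests a simple dispatch rule: offload a task to the cloud only when the estimated round trip fits the haptic update budget, and otherwise keep it on the phone. The budget value below is an assumed placeholder, not a figure from the project.

```python
# Illustrative sketch of latency-aware offloading. The 50 ms budget is an
# assumed placeholder for a responsive haptic update loop.

LATENCY_BUDGET_MS = 50

def choose_backend(estimated_rtt_ms: float) -> str:
    """Return 'cloud' if the network is fast enough, else 'local' (on-phone)."""
    return "cloud" if estimated_rtt_ms < LATENCY_BUDGET_MS else "local"

print(choose_backend(20.0))   # -> 'cloud'
print(choose_backend(120.0))  # -> 'local'
```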

Tech Briefs: Do you have electronics embedded in your array?

Flavin: Yes, our array has integrated systems embedded in it in the form of a System-on-a-Chip (SoC). It has an ARM-based processor and a Bluetooth stack, based on Nordic Semiconductor’s devices, and it even has an antenna built into it. So that's kind of a one-stop solution that enables a lot of these things. We use it to coordinate communication among all the actuators and the other functionality on the board, and to handle communication between this device and external devices like smartphones. Utilizing elements like that is really enabling for these different technologies.

Tech Briefs: Did you design the SoC specifically for this application?

Flavin: No, we're using commercial SoC technologies to leverage those functionalities and then combining them at the system level. The technology of this device is really focused on the actuators and their ability to press, twist, and vibrate. However, I think it'd be interesting to expand what we could do by maybe using some edge computing with a neuromorphic processor.

Tech Briefs: So, your actuator does these three different things: it vibrates, presses, and rotates. How do you decide which should be doing what, to indicate what?

Flavin: It expands the information that we can deliver to the skin in a particular area. We can vibrate, we can twist, and we can press, so what do we want to communicate with those different channels? We've really only scratched the surface of figuring out what the most effective configurations are. In this visual sensory substitution system, for example, we use indentation and patterns of indentation to indicate when there's something in front of you and whether you're about to bump into it. Within a fixed distance, it will also tell you where that object is, which gives you enough information to avoid it.

We can use vibration as a prescriptive signal to navigate someone toward a particular object in their surroundings that they might be interested in. For example, the LiDAR systems and the APIs that support them are currently able to classify certain types of objects in a relatively limited way. They can tell me if there are chairs in front of me, walls, ceilings, or doors. You can imagine if I want to find a chair, this can help me navigate toward it and then the indentation will tell me when I’m about to run into it.

Tech Briefs: How would you know it's a chair you’re going toward?

Flavin: The sensor is able to classify that, and to be honest, those are things that kind of exist in a black box — it's something that Apple puts together.

One of the things we're thinking about now is how to make our system sophisticated enough to work on some of those problems on our own. Based on the geometry it evaluates with LiDAR, our system is able to tell that an object is a chair. And, in this case, the person knows it's a chair because we've told them that if they feel vibration, it's guiding them toward a chair. So, this is relatively simple.

But obviously you would want a flexible system that could detect all kinds of different things. Maybe I could even tell it I want to be guided toward a desk of a certain color, or something more abstract. That's something we're actively working on. But right now, these approaches are pretty simple. Our demonstration was really centered around showing that with these different modalities, we can deliver that information. There are a lot of areas where we've only scratched the surface, but there are all kinds of sophisticated systems that could potentially use the three modalities in even more meaningful ways.

Tech Briefs: What does the twisting tell you?

Flavin: We tried a few different things. We tried both twisting and vibration as navigational signals. We use vibration as sort of a compass, telling people where an object they might be guided toward is located. Then torsion signals when the person is facing it directly.
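A minimal sketch of that compass-style scheme is below: vibration strength scales with how far the target bearing is off-axis, and torsion fires once the user faces it within a small angle. The angle threshold and intensity scaling are assumed choices, not the scheme's actual parameters.

```python
# Illustrative sketch of the navigation cues described above: vibration as a
# compass toward a target's bearing, torsion when facing it head-on. The
# 10-degree threshold and the intensity scaling are hypothetical choices.

def navigation_cues(bearing_deg: float, facing_threshold_deg: float = 10.0):
    """bearing_deg: target direction relative to where the user faces
    (negative = left, positive = right). Returns (side, vibration, torsion)."""
    if abs(bearing_deg) <= facing_threshold_deg:
        return "center", 0.0, True                 # facing the target: torsion cue only
    side = "left" if bearing_deg < 0 else "right"
    vibration = min(abs(bearing_deg) / 90.0, 1.0)  # stronger when farther off-axis
    return side, vibration, False

print(navigation_cues(-45.0))  # -> ('left', 0.5, False)
print(navigation_cues(4.0))    # -> ('center', 0.0, True)
```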

This isn't necessarily the optimal way of realizing the system; the point is to show that these things are possible and to demonstrate the elements of the system that make them possible.

Tech Briefs: How do you deal with variations in skin properties?

Flavin: That is one of our key challenges. As we talked about, one of the things that makes this system possible is that we're using the skin, not just fighting against it. We're actually using it as an integral mechanical element. The challenge there is that there is a huge variation in the mechanical properties of human skin. There are systematic differences between individuals, systematic differences between men and women, and as we get older, our skin’s mechanical properties change. So, a big part of this study was to evaluate a broad range of individuals to make sure that our system works across that range. We did that using numerical modeling and skin phantoms, which means we basically simulate the mechanical properties of the skin, enumerate a broad range of conditions, and test under which conditions the device works.

What that let us do is define criteria and a range of mechanical properties where the device would work effectively. We aimed to maximize that over the broadest range possible and then ultimately tested it with more than 30 actual people. Our device was able to accommodate those different mechanical properties.
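In the spirit of that phantom-and-modeling sweep, here is a toy parameter scan that treats skin as a linear spring and asks whether an assumed magnetic latch force can hold the plunger at a target indentation depth as stiffness varies. The model and every number in it are illustrative assumptions, not the group's actual criteria or data.

```python
# Toy parameter sweep: skin modeled as a linear spring; the latched state holds
# if the skin's restoring force stays below an assumed magnetic latch force.
# All values are illustrative assumptions only.

LATCH_FORCE_N = 0.6        # assumed holding force of the magnetic latch
INDENT_DEPTH_M = 2e-3      # assumed target indentation depth (~2 mm)

def stays_latched(skin_stiffness_n_per_m: float) -> bool:
    """True if the skin's restoring force at the target depth is below the latch force."""
    restoring_force = skin_stiffness_n_per_m * INDENT_DEPTH_M
    return restoring_force < LATCH_FORCE_N

# Sweep a range of effective skin stiffness values (N/m)
for k in (50, 100, 200, 300, 400):
    print(f"k = {k:3d} N/m -> latched: {stays_latched(k)}")
```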

Tech Briefs: I read that this could also be used for people with neurological problems. Could you give me a brief idea of the sorts of things it could do for them?

Flavin: This is another project that is ongoing. It is a clinical study in which we're testing people who have conditions like stroke and spinal cord injury. The general aim is very similar to what we were talking about for visual impairment. With visual impairment, we have some missing sensory information and then we're substituting and augmenting it.

For people who have had a stroke or spinal cord injury, what we're looking at is that they can’t feel their feet. There is a range of motor and sensory symptoms. If they have some motor control but can't feel where their foot is even when it’s touching the ground, that can make it really hard to walk around. Patients with these conditions commonly face a lot of gait problems and balance issues.




This article first appeared in the January 2025 issue of Tech Briefs Magazine (Vol. 49, No. 1).
