Self-Powered Gesture Recognition with Ambient Light


– So thanks so much for the introduction, Dan. I'm XD, and I'm here to present a collaborative work with my colleagues Yichen, Tianxing, Xia, and Ruchir; we're all from Dartmouth. Yichen is a postdoc at Dartmouth and the leading author of this work, but unfortunately he cannot be here because of a visa issue, so I'm here to deliver the presentation. What we're looking at here is Yichen wearing a pair of Google Glass. He's making some mid-air swipe gestures as well as some finger-touch gestures, and the computer monitor shows that the system can recognize which gesture he is performing. You may have noticed that there's a white plastic box coming off the end of the Google Glass, and also a narrow plastic strip mounted on the side of the Google Glass, at the location where the trackpad is. If we take a closer look, you'll find that the narrow plastic strip is actually a grid of solar cells. We know that solar cells are something we can use to harvest energy and convert light into electricity, but in this work we propose that solar cells can also be used as a sensor to detect touch gestures or mid-air finger gestures. So let me give a little bit of the sensing principle. What you're looking at here, in the figure, is two solar cells, which can harvest energy from indoor and outdoor lighting. Now imagine my finger moving from right to left. My finger is going to hover above one of the solar cells, which creates some light blockage and introduces a drop in the amount of energy harvested by that particular solar cell, represented by the dip in the blue line at the top left corner of this slide. If I keep moving my finger to the left, it will hover over the other solar cell, which causes a dip in the green line. So based on this temporal information, we should be able to figure out what gesture the user is performing.
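To make the idea concrete, here is a rough sketch of how the order of the two dips could be turned into a swipe direction. This is illustrative only; the function name, time window, and logic are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of turning the
# order of two dips into a swipe direction; names and the time window
# are assumptions.

def classify_swipe(dip_time_left, dip_time_right, max_gap_s=0.5):
    """Return 'swipe_left', 'swipe_right', or None.

    Each argument is the time (s) at which that cell's harvested-power
    trace dipped below threshold, or None if the cell saw no dip.
    """
    if dip_time_left is None or dip_time_right is None:
        return None  # a swipe needs a dip on both cells
    gap = dip_time_right - dip_time_left
    if abs(gap) > max_gap_s:
        return None  # dips too far apart to be one gesture
    # the cell shadowed first tells us the travel direction
    return "swipe_right" if gap > 0 else "swipe_left"
```

The same temporal-ordering idea extends to a grid of cells for the richer gesture set shown later in the talk.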
So, in order to detect this drop in the amount of energy harvested by the solar cells, we use a fairly simple dynamic-threshold method called constant false alarm rate (CFAR), which is popularly used in radar applications. The reason we want to use a dynamic threshold method is that it is pretty robust against environmental noise, which is very common in outdoor environments.
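As an illustration of the idea (not the code from the paper), a 1-D cell-averaging CFAR detector for dips might look like this; the window sizes and threshold factor here are made-up values.

```python
# Illustrative 1-D cell-averaging CFAR detector for *dips* (shadow
# events) in a harvested-voltage trace. The window sizes and the
# threshold factor are made-up values, not the ones from the paper.

def cfar_dips(trace, train=8, guard=2, alpha=0.7):
    """Return indices where trace[i] falls below alpha * local average.

    The local average is taken over `train` samples on each side of the
    cell under test, skipping `guard` samples next to it, so the
    threshold tracks slow global lighting changes but still fires on
    short, local shadows.
    """
    n = len(trace)
    hits = []
    for i in range(n):
        left = trace[max(0, i - guard - train):max(0, i - guard)]
        right = trace[i + guard + 1:i + guard + 1 + train]
        neighbors = left + right
        if not neighbors:
            continue
        local_avg = sum(neighbors) / len(neighbors)
        if trace[i] < alpha * local_avg:
            hits.append(i)  # a local dip: candidate finger shadow
    return hits
```

A global change, such as walking from indoors to outdoors, raises or lowers the local average along with the sample under test, so the ratio stays near 1 and nothing fires; only a localized dip trips the threshold.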
So, what you're looking at here is the sensor in action. We have two solar cells on a breadboard, and an oscilloscope showing two lines; the yellow line and the red line represent the amount of energy harvested by each solar cell. If I move my finger and hover above a solar cell, you see that the energy drops, and if I touch the solar cell, you see a drastic change in the amount of energy. This is basically how it works. The entire system is composed
of this grid of solar cells plus some electronics; we also run the gesture detection and recognition logic on those electronics, on the MCU basically. And the system consumes a fairly small amount of power: at the particular time we took this photo, the system was consuming less than 40 microwatts, which is pretty good. But I think what's fascinating here is that the amount of energy that can be harvested by these solar cells exceeds the amount of energy needed to detect and recognize the gesture the user is performing. At the moment we took this photo, in an indoor lab environment, the grid of solar cells could harvest 668 microwatts of power, which means that the system can run, detecting and recognizing finger gestures, without the need for an external battery.
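A quick sanity check of the energy budget, using the numbers quoted in the talk:

```python
# Sanity check of the energy budget with the numbers quoted in the
# talk: ~668 uW harvested indoors versus <40 uW consumed.

harvested_uw = 668.0   # measured indoors, per the talk
consumed_uw = 34.0     # average draw of the glass prototype

surplus_uw = harvested_uw - consumed_uw
headroom = harvested_uw / consumed_uw

assert surplus_uw > 0  # harvest exceeds draw: no external battery needed
print(f"surplus {surplus_uw:.0f} uW, about {headroom:.1f}x headroom")
```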
recent years, we’ve seen increasing amount of wearable
devices, small devices and small IoT devices, and one of the sort of like challenges for this device is to become even more popular is the battery. We do not want the chance to charge this devices with per day. In many situations, we want
to charge them perhaps for once per lifetime, so
this is the reason why recently we’ve seen a
line of very, very cool interesting and projects in this field of battery-less devices. If you could look at the left one, see that this is a battery-less phone. The one in the middle is
the battery-less screen, the one on the right, middle, left, the one on the left is
the battery-less camera. And but within the existing
literature, there is no devices allowed user to perform
input or touch input, mid-air touch input, without
the need of a external battery. And this is exactly
what we are doing here. And we created two prototypes,
one in the smart glass form factor and another one in
smart watch form factor. But please understand that
this technology can be used in many other small devices,
like IoT devices. So, this is the smart-glass form factor. We have 48 solar cells, and we connected these solar cells to an MCU circuit; we use the MCU to read the voltage data in order to detect whether there is a gesture or not, and we then run the gesture recognition logic on the same MCU. For the MCU, reading the voltage data takes only about 5 microseconds, which is pretty fast, and even within that 5-microsecond window, all the other solar cells are dedicated to harvesting energy. This means that most of the time, the system is actually being used for harvesting energy; to detect and recognize gestures, we run the system at about 35 Hertz.
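A back-of-envelope check of those two numbers shows just how heavily the duty cycle favors harvesting:

```python
# Back-of-envelope check of the numbers in the talk: each voltage read
# takes about 5 microseconds and happens about 35 times per second, so
# the cells spend the overwhelming majority of their time harvesting.

read_time_s = 5e-6        # one voltage read (~5 us)
sample_rate_hz = 35       # gesture-sampling rate

sensing_fraction = read_time_s * sample_rate_hz   # fraction spent reading
harvesting_fraction = 1.0 - sensing_fraction

print(f"sensing:    {sensing_fraction:.4%}")      # about 0.0175%
print(f"harvesting: {harvesting_fraction:.4%}")   # about 99.98%
```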
If you look closely at the back of this MCU circuit, we also have a power management circuit, which basically handles anything related to power, the most significant function being power harvesting. So this is how the system works: we have a number of solar cells, and we connect the solar cells to two circuits; one is the power management circuit, and the other is the voltage reading circuit, which does the gesture detection and recognition. We have to switch between these two circuits frequently, and this is why, on the back of this solar panel, we have a number of switches and a decoder, or demultiplexer. We also created a
smart watch prototype. It's pretty much the same thing, except that it has slightly fewer solar cells and more switches and decoders. This is basically a circuit-design limitation, which actually increases the power consumption, as I will discuss later. The rest of the system is pretty much the same: we have the microcontroller and the power management unit.
In terms of the gestures the user can perform on these devices: they are very simple gestures, but very useful, common gestures for interacting with small devices. For example, the user can swipe forward, swipe backward, tap, double-tap, and do a two-finger tap. On the watch, you can swipe right, left, up, and down, and tap or double-tap, all without touching the screen.
Alright, so we conducted a user evaluation where we recruited 10 participants. We asked each participant to repeat these 12 gestures, and each gesture was repeated 20 times; in total, we collected 2,400 gesture samples for analysis. We were very interested in the gesture recognition accuracy, so we report the accuracy in two forms: one is precision, and the other is recall. Precision represents the percentage of correctly recognized gestures among all detected gestures; not all gestures can be detected. Recall is the percentage of correctly recognized gestures among all gestures, regardless of whether they were detected or not. For both prototypes, we got pretty good numbers: for precision, we got numbers higher than 97%, which is pretty good, and recall is slightly lower since, as you can imagine, not all gestures can be detected.
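The two metrics as defined above can be sketched as follows; the counts in the example are made up for illustration, not results from the study.

```python
# The two metrics as defined in the talk; the counts below are made-up
# example numbers, not results from the study.

def precision_recall(correct, detected, performed):
    """correct:   gestures recognized correctly
    detected:  gestures the detector flagged at all
    performed: every gesture the participant actually made
    """
    precision = correct / detected    # of what was detected, how much was right
    recall = correct / performed      # of everything performed, how much was caught
    return precision, recall

# e.g. 195 recognized correctly out of 198 detected, 200 performed:
p, r = precision_recall(195, 198, 200)
```

Because `performed >= detected`, recall can never exceed precision here, which matches the talk's observation that recall is the slightly lower number.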
Alright, in terms of power consumption: we tested the power consumption only when the system is running in the voltage-reading mode, because the energy-harvesting mode does not really consume much power. The video is a bit fast, but I want you to focus on this table. About 95% of the power was consumed by the MCU, and if you look at the column on the left side, you'll see that the ADC, the analog-to-digital converter, actually consumed quite a lot of power; the gesture recognition logic also consumes some power. The decoder and the switches did not consume much power, because we used energy-efficient hardware. On average, the glass prototype consumes about 34 microwatts of power. The watch actually consumes much more, almost double. The reason for that is pretty simple: we have more switches and more decoders, which consume more power, and the watch also has two more gestures, which means the gesture recognition logic consumes more power as well.
Alright, let's look at the amount of energy that can be harvested. We tested the system in an indoor environment as well as an outdoor environment; because of time limitations, we're only going to show you examples from the indoor environment. In the indoor environment, we tested different lighting intensities ranging from 200 to 2,000 lux; 200 lux is really, really dark. For all the conditions other than 200 lux, we can get enough power, more than 44 microwatts, which is greater than the 34 microwatts we need. But at 200 lux, unfortunately, we won't be able to harvest enough power. However, when I showed the power management circuit, I don't know if you noticed something that looks like a battery; it's not a battery, it's a supercapacitor, which is something we use to store energy when the system is not in use. We don't want to waste that energy, and in this situation, we can use the power from the supercapacitor to power the system.
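As a rough illustration of why the supercapacitor helps, here is a back-of-envelope calculation using the stored-energy formula E = 1/2 C V^2. The capacitance and voltage values below are assumed for illustration; they are not the prototype's actual parts.

```python
# Rough illustration of why the supercapacitor helps: stored energy is
# E = 1/2 * C * V^2. The capacitance and voltage values below are
# assumed for illustration; they are NOT the prototype's actual parts.

def bridge_time_s(cap_f, v_full, v_min, load_w):
    """Seconds a capacitor can supply `load_w` while discharging
    from v_full down to v_min."""
    usable_j = 0.5 * cap_f * (v_full ** 2 - v_min ** 2)
    return usable_j / load_w

# e.g. a 0.1 F supercap discharging from 3.0 V to 1.8 V into a 34 uW load:
t = bridge_time_s(0.1, 3.0, 1.8, 34e-6)   # roughly 2.4 hours
```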
And this is for the smart watch. You may notice that under very similar conditions, the smart watch can harvest more power; in most situations, we can get more than 66 microwatts. The reason is very simple: most of the time, the smart watch's solar cells face the ceiling, which is where the artificial light comes from.
In order for the system to work in real-world situations, where a lot of lighting noise is happening, we tested the robustness of the system against different light intensities, light directions, and fluctuations of the light, all sorts of conditions. These are all in the paper; in this talk, I'm just going to show you two examples. First, we tested the robustness of the system, in terms of gesture recognition accuracy, under different light intensities. We tested the indoor environment with light intensities ranging from 200 to 2,000 lux. (sudden background noise) The average precision
and recall are pretty good: 100%. When we move on to the outdoor condition, you can see that the percentage actually goes down a bit, to 95%; not too bad, but it goes down a bit. The reason is very simple: the outdoor lighting condition is uncontrolled, so we have a lot of fluctuation in the natural light, and shadows from moving leaves may also cause some problems.
We also tested the smart watch prototype under similar conditions, except that we have one more condition for the smart watch, which is the dark-room condition. It's completely dark; the reason for that is that we want to mimic the situation where people wake up in the night and just want to look at the watch and maybe do some very brief interactions. In this particular case, the watch's sensors actually detect a finger gesture based on the light emitted from the screen and bounced back from the finger. You can see that the precision looks pretty good, but the recall goes down a bit; we get 90% recall. This simply means that the light intensity is not strong enough to guarantee that all the gestures can be detected. So what about nearby movements, when we use these devices around other people? We hired a grad
student to do some crazy hand movements near the device, and we found that the recognition accuracy was still pretty good. The reason for that is very simple: the sensing range of our device is basically 3 centimeters, so anything at 3 centimeters and beyond may not create enough light blockage to trigger our threshold algorithm. So, we have implemented some applications. Nothing really fancy here; this is just Yichen interacting with cat images on the smart glass, and here
he is playing a video game using swipe gestures and mid-air finger gestures, without touching the screen, which may cause occlusion. So here comes the end of this talk, and I would like to wrap it up with some take-home messages. First, we proposed and implemented a touch and mid-air gesture detection and recognition method that does not need an external battery. We ran a number of studies to show that the system is pretty accurate under different lighting conditions. And we believe that this type of technology can be used in many power-constrained devices, or even battery-less devices, in IoT and small wearable devices. I want to take this opportunity to acknowledge my co-authors again, and also NSF, which supports this research. Also, look at the left side of this slide: Dartmouth Engineering is hiring faculty in HCI, please apply. Alright, I want to thank
you guys for your attention, and I'm happy to answer your questions. (applause) – [Dan] Great, are you ready to take some questions? – [Turner] Hi, I'm Turner from Georgia Tech. I love this stuff. My question is about false positives per hour: if you wore this glass device for an entire day, how many times does it falsely trigger, as you go from indoor to outdoor, and that sort of thing? Going through a doorway, that kind of stuff. – If it's just environmental light, it should not trigger. I cannot say for sure, but I'm pretty positive, because if you go from indoor to outdoor, that's a global lighting change, which is something our algorithm can take care of. Only if there's a small, spot change in light intensity may the system think it's a- – [Turner] It's an easy test to do; I highly recommend doing it, because it would really nail it, in my book. – Thank you. – [Dan] Thanks, question in the back. – [Dennis] Dennis from
the University of Portland. I actually had kind of a similar question: when you're walking outside, with the light conditions constantly changing, does this mean your device is not working? – The device will work, the device will work. If you think about it, if you- – If you're walking, – Huh? – While you're walking, the moment of walking. – The moment of walking, so basically you switch from a dark room to a bright outside patio, is that right? – So, yeah, yeah. – So your question is whether this is going to trigger a false positive? – [Dennis] And generally, when you move, even in this room, the light condition changes all the time; it's different here, like the setting here is different. – It's definitely going to work, as long as, as long as only a small part of the solar cells is triggered, because a global lighting change is not going to trigger it; we have this algorithm to take care of that. If a small part of the solar cells detects a big drop in energy, then that may trigger a false positive. – Okay, thanks. – [Dan] Okay, I think we have
time for one more question. – I'm from Queen's University. Very nice work. I'm assuming there's no such thing as a transparent solar cell, because it has to be a photoelectric interaction, right? – Sorry, say that again? – There is no such thing as a transparent solar cell, right? – There is no such thing called a solar cell? – As a transparent- – They're basically photodiodes. – Yeah, so-