2019 ADSI Summer Workshop: Algorithmic Foundations of Learning and Control, Necmiye Ozay


First up, we have Necmiye. Necmiye Ozay. She is an expert in linear dynamical systems, and dynamical systems in general. And she's a professor at the University of Michigan. And we can't wait to see what
she has in store for us. Thanks. So I’m going to try to give you
some perspective, from a control theorist's and part-time practitioner's view, of how I think
about control, and how I think about learning,
and how things come together. Then I will talk about
some work that we recently did with Samet Oymak from UC Riverside. So that's going to be the
technical part of the talk. So I will start a
little high level, and I will try to show you the
type of problems that I work on and the type of problems
that I care about. So I work a little
bit on control of safety-critical
autonomous systems. And, typically, we have
high-fidelity simulations. So these are two
systems that I work on. One is autonomous driving. This is the CARLA simulator, and industry uses this simulator to test their algorithms. The other example is autonomous
taxiing for an aircraft. Again, that’s X-Plane. It’s an industrial-scale
simulator. Very complex models, but
there are models in there. And we try to do control design for systems, typically with constraints. And one thing
that's interesting: when you start working on these problems, you learn interesting
constraints. For example, if you are trying
to make an aircraft park– I don’t know if you’re aware
of this –but aircraft cannot back up. They can only go forward. So if you screw up something,
then you cannot back up and recover. The truck needs to
come and pull you back. So the constraints
are kind of important. When I get a little
adventurous, I try to put things
on real machines. So we have a test track at
the University of Michigan. Actually, that simulator has
a model of the test track. We call it Mcity. And this is a self-driving car
running one of the controllers that we automatically
synthesize. So when I play
with real systems– So, at Michigan,
I wouldn’t say I am the person that plays with
the real systems the most. But I feel like I learn
about certain things that I can put in the
theory and reason about. Finally, I collaborate
with lots of people doing applied control in different domains. And in these domains, it is hard to get on-policy data. So one of these
domains is the smart grid. You have these large grid systems. You have a bunch of loads– maybe thousands,
hundreds of thousands of loads –that
you can coordinate, if you have a Nest-like device. But there’s also lots of
things going on in the grid, other loads coming on and off that you may have data about, but that you cannot control. And the thing is,
even if you come up with a scheduling policy to control these loads, it's hard to test
that scheduling policy on the real system. And here is another dataset, again,
in the context of driving. If you’re at Michigan, you
cannot avoid working with cars. So this is data
collected over a highway. So there is a month of data. They equipped the
highway with cameras. They are tracking
different vehicles. And, for example, if you want
to design an autonomous vehicle, you want to learn how
people react, for example, if a car is overtaking them. And we have these
types of data, but it's hard to get on-policy data. You cannot put your autonomous vehicle, without it knowing anything, onto the highway and expect that people will let you do that. So that's why models are
central to most of the things that I do. So I always think about going
from data to models and models to control. So models are always in the middle. I don't understand that much about learning theory, but I am willing to learn how it can help in the control context. And, typically, we have
this type of model. It could be nonlinear dynamics. We have some observations, y. We typically have some noise and uncertainty models. And we have some constraints on how we pick our controls. And on the
data-to-model side, I look at problems like
system identification. If I have input-output data, how do I learn a model of this form? So that's one of
the problems that I am going to talk about in the technical part of the talk. I care about
model validation. I have a model now, and I am collecting more data. And I want to say something about whether this model is a good model. Is it still representing
the data or not? And if I can answer
those questions, I can also do fault detection and anomaly detection. So if this model is a
good model of the world, and I keep collecting
data from the world, and if something changes in
the world, based on this model, I can do those types of analysis. So those are the things where models play a role in what I do. And from
model-to-control part, I work on formal verification
and control synthesis, where we look at the decision
problems of this form: does there exist a control policy– sometimes we restrict the policy class –such that X holds? And this X could be stability
of the closed-loop– existence of some
inductive invariant, if you want to prove safety. It achieves a task. My closed-loop system
achieves a task described by an automaton
or some temporal logic. It could be reachability or those types of things.
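To fix notation– this schematic is my paraphrase, not a formula from the slides –the decision problem has this shape:

```latex
% Does a control policy exist in the (possibly restricted) class \Pi
% such that property X holds for the closed loop?
\exists\, \pi \in \Pi \;\;\text{such that}\;\;
x_{t+1} = f\bigl(x_t,\; \pi(y_{0:t}),\; w_t\bigr) \;\;\text{satisfies}\;\; X
```

And the types of answers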
that I want to get is I either want to
get a yes answer. And whenever the
answer is yes, I want to have an algorithm
that automatically generates this control policy. Or, in some cases, I want to
get a no answer, together with a certificate or proof of non-existence, so that you know that, with this model class, what we are trying to achieve is impossible. Then, you either ask
your system to do less, or you add sensors or actuators so that your system is more capable, if you want to do that specific thing. As it turns out,
if you write down nonlinear dynamics of that
form, these decision problems– even the simplest ones
–are undecidable. So we try to
understand, are there decidable subclasses here? And the other thing that
we try to understand is, if we have something
that says yes and no, what is the gap in between
if we cannot close the gap? AUDIENCE: How are
the models modeled? NECMIYE OZAY: What’s that? AUDIENCE: How are you
describing the models? The input is the model, and the
output is the yes or no, right? NECMIYE OZAY: So
input is the model. AUDIENCE: Right, and what does the model look like? NECMIYE OZAY: It looks like
that, in the most general form. AUDIENCE: How can I
access the model then? Is it [INAUDIBLE]. AUDIENCE: I think it’s
later on the talk. NECMIYE OZAY: So it’s physics. AUDIENCE: OK. Later in the talk. NECMIYE OZAY: What’s that? AUDIENCE: I know, I’m
saying it’s probably later in the talk [INAUDIBLE]. NECMIYE OZAY: Yeah, I
will specify what type of model. But this is more high-level. F could be your physics– you know how to write down
your F equals ma and get those. AUDIENCE: Do you
chain these together and talk about how
much data you need, in order to provide
either [INAUDIBLE]. Do you talk about how
much data you need, in order to provide a
yes or no certificate? NECMIYE OZAY: So that’s
what we are trying to do, but I don’t have
answers to that yet. And I think this
community can help. AUDIENCE: It’s the model, right? You would model. The certificate is with the
model, not from the data. AUDIENCE: I’m asking
if you can chain them. So can you go all the
way– one question we’re interested
in is how much data do you need to identify what
the [INAUDIBLE] is in a class? And so I’m curious if you
can say, given this data, and then you chain
it through a model, can you answer how much
data do you need to answer a control that already exists? NECMIYE OZAY: So
the thing that I am going to talk
about is how much data do I need to learn the model
with a certain accuracy? And the thing that we know how
to answer very simple classes of problems, for very
simple classes of models, and very simple
classes of X’s, is how much uncertainty
in that model I can tolerate so that
my answer is yes, no? But general solutions
don’t exist. And we always go
through the model. I don’t personally know
how to go past the model, and have the data, and have
the task, on the other hand. So things that I like– I like models because I know
how to ask this decision problem and, in some cases, how to
answer that decision problem if I have a model. I also like models because
I want to have the ability to change my control objective. So, today, I might be
interested in this. But if I see that the answer is no, then I might switch
to an alternative objective. So that’s why I
also like models. If I have the model
in the middle, I can change the objectives
and look at the yes, no answer. And this is the question
I was trying to pose to some people at lunch– say I have a model. If I get more data,
I know how to check the validity of my
model, and therefore, the validity of the controller. But it’s a little
hard to do validation from data for a policy
if I just had the policy. That’s probably
related to Emma’s talk. I don’t know, in
most of the settings that I am interested
in, how to use additional off-policy data
to reason about my existing control. But if I have the
model in between, I can reason about validity
of my model in the middle. So the other thing
that I like is– was there a question? The other thing that I
like is constraints– constraints, rather than
objective functions. And I would argue
specifying certain tasks with a single reward or cost
function is typically hard. And this is
something, when I talk to people in more
applied domains, they also complain about
they are having trouble in, for example, they are
doing and MPC, but they don’t really know how to pick
the cost function in the MPC. And they say they do
lots of model tuning, and they do lots of cost
function tuning, as well. So you can say, oh, I
can append my constraints to my objective function, get
an indicator function type thing for my constraints,
put it in the objective. But nice, additive
quadratic cost functions that we really understand well– theoretically, they
are no longer enough. AUDIENCE: Can you
explain a little how you think about constraints? So I can think about
constraints [INAUDIBLE]. I really need to
respect this constraint. I [INAUDIBLE] while I
am creating my policy. Or I can think of constraints
where, eventually, I want to discover a policy that
satisfies these constraints. And the distinction may
be made, first of all, if you’re actually
acting in the real world. So I'm curious: how should I work with constraints? NECMIYE OZAY: So both. So one type of
constraints we care about is this inductive invariant. So there is an unsafe set. You don’t want to
crash your car. You don’t want
your robot to fall. And you want to find a
subset in your state space that you can guarantee that
you remain in that subset. And, if possible, you want to find the maximal such subset. And for invariant-type properties, the maximal one exists. It's unique.
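In symbols– again my own notation, not the slide's –a controlled invariant subset C of the safe set S is one from which some admissible input always keeps you inside:

```latex
% Controlled (inductive) invariant set:
C \subseteq S, \qquad \forall x \in C \;\; \exists u \in U \;:\; f(x, u) \in C
```

But if you have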
something different, like reachability, then the maximal set of initial conditions is not unique. And it could be objectives like reach-avoid. You want to reach a certain
state while avoiding something. You can think about it as a
spatio-temporal constraint that you want to hold. So these are some perspectives. And I see two
different ways people approach data in control. And there are two
different regimes. One regime, where we have
this very complex model– for example, the
high-fidelity simulations that I showed earlier, they
have super complex physics. They have these PDE models. Sometimes, to simulate one minute of reality, you need to wait a whole day because your finite element model and PDE [INAUDIBLE] are crunching a bunch of things. But this complex model,
you can think about it as an infinite data set. So you can set
certain parameters and get a bunch of data from it. And we can also
collect lots of data. And system
identification is a way to learn some simple models
from these complex models. The people who built these complex models never use them for control. For control, you try to
get a much simpler model. For example, if you want to
test the flight controller, you test it on a simulator
that has all the finite element model of the wings. But the model that
you are using is just a Newtonian model and
very simple models on the physics of the aircraft. So if I can learn
this simple model, I can do multiple things. As I said, if I have
a simple enough model, I can do control design. If the model is complicated, even if you give me this model, we don't know how to do control design. This is nasty, nonlinear. But for some simple models, we know how to do control design. We can do fast simulations. Sometimes you want to
do faster simulations. And with those fast
simulations, you can do system monitoring
type things, as well. So simple models are
useful in that sense. But this is thinking about
where you can get lots of data. And people talk
about this problem. There is also this other regime
where we only have small data. So if you’re exploring
unknown environments, you cannot collect data beforehand. You just go to Mars, and you are there. You are getting data
for the first time. And you want to do something with that. Or consider handling an unexpected failure. So if something fails,
when you are in flight, then you need to learn something fast, so that you can react to what failed. And, in this case, online
system identification becomes important. You don’t have
batch data, and you want to learn these
models at runtime. And I still like having
models because I can adapt. So I can change my mission objectives if I figure out that I lost lots of capability in my aircraft. Maybe I just want to land now. I don't care about reaching my
destination, and having models will help with those
type of things. And, in this talk, I will talk
about this small data regime. And I will use a very
simple model class. And I will talk about how
to learn that model class. So this is the type of
models I will try to learn. Given some input-output data– u and y –we want to find a model of this form, where there are A, B, C, D matrices. There is some noise, both process noise and measurement noise. And, in the system identification literature, people have looked into this problem.
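For concreteness, here is a minimal simulation sketch of that model class– my own illustration, with arbitrary dimensions and noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, N = 3, 1, 1, 1000           # state/input/output dims, trajectory length

A = np.diag([0.9, 0.5, -0.4])        # a stable A (spectral radius < 1)
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))
D = np.zeros((p, m))

u = rng.normal(size=(N, m))          # Gaussian input, known to us
x, y = np.zeros(n), np.zeros((N, p))
for t in range(N):
    y[t] = C @ x + D @ u[t] + 0.01 * rng.normal(size=p)  # measurement noise v_t
    x = A @ x + B @ u[t] + 0.01 * rng.normal(size=n)     # process noise w_t
```

And, again, I said there's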
this large data regime and small data regime. And depending on which
regime you're at, you are interested in different types of analysis. So most of the early work in system identification reasons about asymptotic analysis. As the data size
goes to infinity, and the noise
level goes to zero, can we learn the system model? What is learnable in
this system model? And if there is something
learnable, can we learn it? Is there an algorithm? And, more recently,
people started looking into this
non-asymptotic analysis where you try to reason about
finite amount of noisy data. And you want to understand how
your identification accuracy depends on the data size. And you are also interested in
lower bounds in this regime, as well. So there is quite a bit
of work in both areas. For asymptotic analysis, if you
open up a system identification book, there are lots
of standard methods. And we understand very well
how they behave asymptotically. And for non-asymptotic
analysis, there was an interest in the topic from control theorists, back in the day
and also recently. And there’s this
nice survey paper– if you are interested
in –that came up last year that
summarizes what has been done for non-asymptotic
analysis in the control setting. And there's also this new statistical machine learning type of work. [INAUDIBLE] worked
on it quite a bit. And we are inspired by
these type of methods. But there is one
thing I would mention. There are different
assumptions, both in this work and in this line of work. And we are trying to
relax those assumptions. For example, old control
theoretic methods, they have this assumption
of noise invertibility which roughly means– if I go back to my model
–if I give you ABCD, and if you observe Y and
U, you should uniquely recover your noise, which
doesn't hold, in this case. But if that property holds, there are quite a few techniques out there to do
non-asymptotic analysis for that class of models. However, even for the
simple model that I showed, that noise invertibility does not hold. Then there are the statistical
machine learning methods where people come up with
different algorithms for doing system identification,
again, with some simplifying assumptions from the perspective
of what you are learning. But they do a very
deep analysis reasoning about statistical properties
of what is going on. So what we try to do
is we try to just pick these traditional algorithms
that people use in practice and try to understand
whether we can do non-asymptotic analysis on
them, rather than coming up with a new algorithm. These are algorithms
used in practice. And can I take that algorithm
and reason about it? So the contributions of this work are related to the Ho-Kalman algorithm. I will explain what the Ho-Kalman algorithm is. And that will help everybody
to understand noise sensitivity and noise robustness of
this Ho-Kalman algorithm. And based on that, we get this
non-asymptotic learning results on Ho-Kalman. And Ho-Kalman is also known as the eigensystem realization algorithm. Ho-Kalman is the version that's noise-free. When you have noise, it's called the eigensystem realization algorithm, and it has these properties. So it learns from
input, output data. So you cannot
measure your state, because when you
measure your state, the problems become
a little simpler. At least the identification
problem becomes very simple if you don’t have noise. But we want to learn
from input, output data, so this brings two challenges. This is a generically
ill-posed problem, and I will tell you
why it’s ill-posed. And we can only learn some
canonical representations because it's ill-posed. I can learn up to something. So we are trying to learn from a single trajectory, so I don't have
the luxury to run multiple experiments
in the settings that I am interested in. And this brings
some difficulties in doing the statistical
analysis because you have dependent data, and you
want to reason very carefully about your dependent data. And, as I said, the results from the control community don't apply, in this case, because for the simple model that we have, that noise invertibility assumption does not hold. So what does the Ho-Kalman algorithm do? Ho and Kalman, when they came
up with this algorithm, they talked about this
noise-free setting. So you have some linear
model of this form. And, before I explain
the algorithm, I will mention this is
an ill-posed problem. Can you see why it is ill-posed? AUDIENCE: It's because you can [INAUDIBLE] change the state by multiplying it with an invertible matrix and then reversing that in C. NECMIYE OZAY: Exactly. So if I define a new
state, it's still a model of the same form. I don't observe the state. I can rewrite the dynamics. Now, there's a new C, new A, new B, new D. But with the same input, it will give me exactly the same output, assuming I apply the transformation to my initial condition, as well. So we cannot learn A, B, C, D. We can only learn them up to a similarity transformation.
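Numerically, the ill-posedness looks like this: transform the state by any invertible T and the input-output behavior is unchanged. A small sanity check, reusing the variables from the simulation sketch above:

```python
# Similarity transformation: x' = T x gives an equivalent realization
# (T A T^-1, T B, C T^-1, D) with identical input-output behavior.
T = rng.normal(size=(n, n))                    # almost surely invertible
Tinv = np.linalg.inv(T)
A2, B2, C2, D2 = T @ A @ Tinv, T @ B, C @ Tinv, D

x1, x2 = np.zeros(n), np.zeros(n)              # transformed x0: T @ 0 = 0
for t in range(N):
    assert np.allclose(C @ x1 + D @ u[t], C2 @ x2 + D2 @ u[t])
    x1 = A @ x1 + B @ u[t]                     # noise-free comparison
    x2 = A2 @ x2 + B2 @ u[t]
```

The other thing is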
that we can only learn the controllable and observable part of the system. So this is something we
understand in control really well. So we will either
assume the system is controllable and
observable, or you can also infer from Ho-Kalman the
controllable, observable part of your model, if
your model is not controllable and observable. So given these assumptions,
how does Ho-Kalman work? It’s a two-step procedure. You first estimate the
so-called Markov parameters of your system. And Markov parameters,
they are also known as the impulse
response of the system. And one thing to note: these Markov parameters, they are invariant to the change of basis. So if you look at these CB, CAB-type terms– if you look at CB, with the updated C and B, my CB is still the same. So this is something I can learn invariantly, independent of my state representation.
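The invariance is immediate on paper: with the transformed realization (TAT^{-1}, TB, CT^{-1}, D), every Markov parameter is unchanged,

```latex
(CT^{-1})(TB) = CB, \qquad (CT^{-1})(TAT^{-1})(TB) = CAB, \qquad
(CT^{-1})(TAT^{-1})^{k}(TB) = CA^{k}B .
```

And, in the second part,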
what Ho-Kalman does is that it estimates system
matrices from these Markov parameters. And I will show you
how this two-step works and how the error propagates
through the two steps. So any questions so far? We are good? So, in the first
part, I will talk about how to estimate the
system matrices, given that you know your Markov parameters. So Hankel matrices came up
during the workshop yesterday. So this is where Hankel
matrices pop up in control. So if I have these
Markov parameters, I can form this Hankel matrix. And one observation
about this matrix is that this matrix is a
factorization of this form. And if you look at
this factorization, this is, essentially, your
extended observability matrix, and that’s your extended
controllability matrix. And one thing we know from that is that, if my system is controllable and observable, the rank of this matrix is equal to n, where n is the state dimension.
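As a sketch– my own code for the construction just described, continuing the snippets above –the block Hankel matrix built from the impulse-response terms CA^kB has rank n:

```python
def impulse_response(A, B, C, T):
    # First T impulse-response terms F_k = C A^k B, k = 0, ..., T-1.
    F, M = [], B
    for _ in range(T):
        F.append(C @ M)
        M = A @ M
    return F

F = impulse_response(A, B, C, 8)
# Block Hankel matrix: block (i, j) is F[i + j].
H = np.block([[F[i + j] for j in range(4)] for i in range(4)])
assert np.linalg.matrix_rank(H, tol=1e-9) == n   # controllable + observable
```

So how does Ho-Kalman proceed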
from this Hankel matrix to obtaining a set of A, B, C, D matrices, and why is this set a nice one? What you do is: you get rid of the last block column of your Hankel matrix, and you take that matrix– let's call it H plus. Then you get rid of the first block column of your Hankel matrix, and you take that other matrix– let's call it H minus. And one thing you can
see is that H plus itself is a Hankel matrix
for your system. So it's a multiplication of an extended observability matrix times an extended controllability matrix. And H minus is the
same observability and controllability matrix
with an A in between. And we have this. We have this. And, from that, we can
figure out what A is. So the algorithm goes like this. You take your H plus. You factor it– I am writing SVD there, and that SVD, in the noiseless case, is exact. Then you set your O to be the left side of your SVD and Q to be the right side of your SVD. Then you can extract the C bar, B bar, and A bar that essentially give you a realization of the system.
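Here is the whole procedure as a hedged sketch– my reconstruction of the noiseless steps, not the authors' reference code. It reuses impulse_response from above; H plus drops the last block column, H minus drops the first:

```python
def ho_kalman(F, n, T1, T2, p, m):
    # Recover (A, B, C) up to similarity from impulse-response blocks
    # F[k] = C A^k B (k = 0, ..., T1 + T2 - 1), each of size p x m.
    H = np.block([[F[i + j] for j in range(T2 + 1)] for i in range(T1)])
    H_plus, H_minus = H[:, :T2 * m], H[:, m:]
    U, s, Vt = np.linalg.svd(H_plus, full_matrices=False)
    O = U[:, :n] * np.sqrt(s[:n])            # extended observability
    Q = np.sqrt(s[:n])[:, None] * Vt[:n]     # extended controllability
    A_bar = np.linalg.pinv(O) @ H_minus @ np.linalg.pinv(Q)
    return A_bar, Q[:, :m], O[:p, :]         # B_bar, C_bar: first block col/row

A_bar, B_bar, C_bar = ho_kalman(impulse_response(A, B, C, 8), n, 4, 4, p, m)
# Eigenvalues are similarity-invariant, so they should match the true A's:
assert np.allclose(np.sort_complex(np.linalg.eigvals(A_bar)),
                   np.sort_complex(np.linalg.eigvals(A)))
```

And there is something special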
about this realization. This is called the
balanced realization. Balanced here means each state, or each mode of this system, or each eigenvector of this system, is such that it is as controllable as it is observable. So this also helps you to select your model order if you do it this way. So it's kind of nonparametric. You are not fixing the model order, and the model order comes out of this analysis. And that singular
value decomposition– the sigmas that I have there –they are called Hankel singular values, and those are related to certain properties, like the H-infinity norm of my system model.
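For reference– these are standard facts about balanced realizations, not results specific to this paper –"balanced" means the controllability and observability Gramians coincide and are diagonal, with the Hankel singular values on the diagonal, and those values bracket the H-infinity norm:

```latex
W_c \;=\; W_o \;=\; \mathrm{diag}(\sigma_1, \ldots, \sigma_n), \qquad
\sigma_1 \;\le\; \lVert G \rVert_{H_\infty} \;\le\; 2 \sum_{i=1}^{n} \sigma_i .
```

So this is what happens when you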
have exact Markov parameters. But we won’t have exact
Markov parameters. As I said, we will just
estimate our Markov parameters. Now, instead of G– this matrix of
Markov parameters –I will have some estimated
Markov parameters. And to our surprise, nobody– this is used a lot in practice. This is the name– the eigensystem realization algorithm. And I talked to a bunch of
senior system ID people. They were like, oh, there
should be an analysis. People are using this. How does noise propagate
through this algorithm? And there wasn’t any. So the first thing we did is– let’s try to understand
how noise propagates through the steps of SVD,
picking O hat, C hat, B hat, A hat like this. The main difference is that my SVD used to be exact. Now, I am just truncating it [INAUDIBLE]. So let's say I have
an estimate on, or I have a bound on
how far away my Markov parameter estimates are from the true Markov parameters. How good would be
the estimates that I obtained from this algorithm? So this is– AUDIENCE: So the bound
holds for any index T? NECMIYE OZAY: We
will fix capital T. AUDIENCE: OK. AUDIENCE: [INAUDIBLE] NECMIYE OZAY: And I
will come back to that. I have some issues
with the T there, but it’s a different story. So the two results we have
is if I have some estimate– G hat –then I can bound the
spectral norm of the error between H and H hat. And based on that, I can bound
the spectral norm of the error between L and L hat. AUDIENCE: All of these
are spectral norms? NECMIYE OZAY: Yes. It’s not the best, but the
math works out easier that way. And if we further
assume the error is such that it’s smaller than the
minimum singular value here, it’s just saying
my smallest mode is controllable and observable
enough, compared to the error that I get here. Then, there exists a unitary matrix P, such that the ideal balanced realization is close to the C hat, B hat, A hat I compute through this algorithm, multiplied with the unitary matrix. And these are the
types of bounds you get on A hat, B hat, C hat, and so on. Yes? AUDIENCE: So it seems
that B and C are fine. And this A, we have this divided
by the minimum eigenvalue, which might bother some people. It's just a bad thing. Yeah, it's real. Yeah. AUDIENCE: Plug it in B. NECMIYE OZAY: It's not zero. So what you're assuming
here is that there is this controllable and
observable part of the system. This essentially tells you
how you pick your model order. AUDIENCE: N is the
model order, right? And is the chosen
model order there? NECMIYE OZAY: Yes. Yes. N is the chosen model order. That's your parameter. You get to pick it. Yeah. If you pick your N wrong, so
if your noise is overwhelming your semi-uncontrollable, semi-unobservable modes, then you cannot learn those. And if you pick N wrong, meaning your noise level is sort of overwhelming what you can observe, then you don't expect to learn anyway. But if you pick your
N right, then this won’t be small,
assuming your noise level is comparable to your– This is telling you the level of
indentifiablity of the system. AUDIENCE: One more question. T1, T2, that’s the shape
of the Hankel matrix. Is that right? NECMIYE OZAY: Yes. AUDIENCE: So why wouldn’t
you just pick it really long. There must be a downside
to picking T1 equals 1. Oh, because you
want N to be large. So it has to be at least N. NECMIYE OZAY: Yes. So if we somehow estimate
G hat good enough, then we can learn our balanced
realization good enough. Now, the question is
how do I estimate G hat? So one thing you can
observe is Markov parameters are essentially equal
to this quantity. This would be useful if I
had independent trajectory, so that I can take this
expectation by just averaging. But I have only a
single trajectory, so it’s not easy to compute
this expectation from the data. So for that, what do we do? Again, we consider giving
this single trajectory. We consider all subsequences of my data of length T. And one thing I can observe here is that this y_T here is a function of x_0 and the u's that I have up to time T. Then I have these subsequences. This is y_{T+1}; I can write it as a function of x_1, and the other u's, and so on. Then I can essentially represent this y_t in terms of x_{t-T}, my input terms, which I know, and some noise terms that are my process noise and measurement noise. So I can just vectorize everything to write it in a simpler way. I have my Markov parameter matrix multiplying the vector of u's, and an F matrix– that's a truncated version of this Markov matrix –multiplying my process noise and my measurement noise, and then an extra term, e_t, which essentially captures the effect of this non-zero initial state from the past.
we just use these squares, and we estimate a G hat. And this is something you
can solve in closed form. AUDIENCE: Why is it OK
to take [INAUDIBLE].. Does zero mean
noise, essentially? AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] NECMIYE OZAY: It’s zero
mean, but the covariances– [INTERPOSING VOICES] AUDIENCE: Oh, I think
that’s important, yeah. Here the idea is that you’re
driving this to someone else. NECMIYE OZAY: Yes. [INTERPOSING VOICES] NECMIYE OZAY: U is Gaussian. U is the Gaussian that I know. AUDIENCE: Oh, OK. NECMIYE OZAY: Sorry I forgot to. That’s one of the questions
that I had left for the future. So, now, if this is
my estimation scheme, which is used in Ho-Kalman,
how good is this estimate? So if this is the
theorem: given input-output data from a process of this form, where A is stable and the initial state is zero– that's easy to relax –if I am picking
my U as Gaussian, and if my noise process and
measurement noise are Gaussian, then if I pick N larger
than this quantity, then with very high probability this error is bounded by this quantity. So there is a more refined
version of this in the paper, but, essentially, your
noise terms hurt you. And if your signal-to-noise ratio is high, you gain things. And this is how much
data you want to have. So with this, we
can combine this. And given any delta and epsilon,
we can find an N bar such that, if we collect input-output data for N bar steps, then with probability 1 minus delta, we can estimate the system matrices with accuracy at most epsilon. And you can state the converse, as well.
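Putting the pieces together on the simulated trajectory from before– again my illustration with arbitrary sizes, not the paper's experiments:

```python
G_hat = estimate_markov(u, y, T=9)        # p x (9 m): [D, CB, CAB, ...]
F_hat = [G_hat[:, (k + 1) * m : (k + 2) * m] for k in range(8)]  # CA^k B terms
A_hat, B_hat, C_hat = ho_kalman(F_hat, n=3, T1=4, T2=4, p=p, m=m)
# Compare similarity-invariant quantities, e.g. eigenvalues, to the truth:
print(np.sort_complex(np.linalg.eigvals(A_hat)), np.diag(A))
```

And there are several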
extensions of this. You can estimate H-infinity
norm of the error, because we are using
Ho-Kalman, and because we have Hankel singular values. We can estimate
the system order. And we can give you conditions
under which you can do this. So I have some
numerical examples of how it works, and we compare this to something else [INAUDIBLE] worked on. So one observation
[INAUDIBLE] had, he was looking into nonlinear
version of this problem. Now, your state update goes
through a nonlinearity, more like recurrent neural networks with ReLU-type activation. And he said: if I separate these subsequences, instead of taking overlapping subsequences, and if my system is stable, I can almost treat them as independent.
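The slicing difference is essentially one line of code– a sketch in the notation of the earlier snippets, with window length 9:

```python
# Overlapping windows give dependent samples; separated windows are nearly
# independent when A is stable, since the effect of the past decays like A^T.
overlapping = [u[t - 8 : t + 1] for t in range(8, N)]
separated = [u[t - 8 : t + 1] for t in range(8, N, 9)]
```

Then I can do an easier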
analysis on that. The analysis is
easy, but this is how the practical
performance changes if you start doing separation. So we have something that, practically, makes better use of the data. And it has similarly tight statistical bounds. AUDIENCE: Why is it deep and
then going up and [INAUDIBLE]. NECMIYE OZAY: I don’t know. AUDIENCE: I know, but it’s
always the same, right? [INTERPOSING VOICES] AUDIENCE: Look at the graph. AUDIENCE: The progression. [INTERPOSING VOICES] NECMIYE OZAY: And these
are some other results for different noise levels. And we are looking into
relative H-infinity error, and how this algorithm
performs in those cases. So with this, I will
conclude with a summary and some future directions. So we provided, essentially, a robustness analysis for Ho-Kalman, from the sensitivity analysis combined with the estimation error on the Markov parameters. We have finite-sample guarantees on learning– so, given a finite
amount of data, how accurately I can learn my system models. There are a few interesting
future directions. So can we analyze other system
identification techniques? One very popular one is N4SID. It’s used a lot. It’s more complicated
matrix operations, so we don’t have an
analysis for that. Is there any value in doing
input or experiment design? We are running this with Gaussian inputs, but if it doesn't have to be Gaussian, then I can design the inputs. Can I do better? And the other question is, when
you talk about input design, there are two different
input design problems. One is you can design
an open-loop input. So you design your inputs, and
you apply it to your system without doing any online
update of your input. Or you can design
an input policy, more similar to an online learning setting, where your input policy
now decides on what input to apply at the next step. And I don’t know if this
will gain you anything in terms of sample complexity. Maybe people in the audience
might have thoughts on that. And we are also looking
into control design type problems, so that, when we have these yes, no answers in the end, we can say something about the probability of correctness of those yes, no answers. AUDIENCE: Do you have
an assumption [INAUDIBLE] on the stability of the model? NECMIYE OZAY: Yes,
model is stable. AUDIENCE: Oh, it’s stable. NECMIYE OZAY: Yes. AUDIENCE: Thank you. [APPLAUSE]
