Extending F* in F*: Proof automation and Metaprogramming for Typeclasses

>>I’m really happy to introduce Guido. This is his second internship here, so you’ve probably heard everything about him already. He’s been working on almost everything F*, but today he’s going to talk about the metaprogramming framework of F*: both its design and some of its applications.
>>Thanks. Thanks
everyone for coming. So, yeah, I’m going to speak about what we did over the past three months; we’ve been all over the place. So, yeah, I'm pretty excited to tell you about it, and please share your comments if you have any. You probably all know F*, since we all work on it here. So, very, very briefly, since I really don’t want to spend a lot of time: in one slide, I can say that it’s a dependently typed, effectful, functional programming language. It’s designed for verification. Not only that, but it’s designed for efficient verification: the idea is to not spend a lot of time trying to verify your programs, but to leverage automation as much as possible. It’s in the ML family: a higher-order language, polymorphic, strict, safe, type-inferred. All of the usual things you’d expect.
>>[inaudible]
>>Yeah. Call by value.
Call by value, not lazy. The specs you can write, over both values and effectful computations, are very expressive, I'd almost say arbitrarily expressive, even over the effectful terms of the language, which of course do not form part of the logic, but you can still specify them properly. There’s a proof-irrelevant logic, based mostly on refinements and a WP calculus, where the proofs are not really materialized in the source code or in the language itself; it all lives at the logical level. That’s one of the big factors that allows us to use an SMT solver to discharge the proof obligations. We usually use Z3; I think we had experiments using other solvers, but we mostly use Z3. Now, a quick, quick taste of
F* just to get going. Here are two very small, let's say, functions working over the state effect. They're not annotated; most things can be inferred. So, there’s a certain increment function: it takes a reference to an integer and adds one to it. Amazing. Then, there's this function f that allocates a new reference with a starting value of one, calls incr, then reads the reference, and there’s an assertion: a static check that the value v must be equal to two. This program verifies, and that is solely due to the WP calculus and the logical contexts that F* handles and has at all points, because nothing in the types, or anything visible here, will tell you that v must be equal to two. It comes from the spec of incr, which is inferred and basically has a postcondition saying that it increments the reference by one, I think. If you call incr, you get to assume that here, and all that.
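For reference, the two functions described might look something like this; the names and exact shape are a guess at the slide's contents:

```fstar
module Taste
open FStar.Ref

(* incr: its spec (that it bumps the contents by one) is inferred *)
let incr (r: ref int) : St unit =
  r := !r + 1

let f () : St unit =
  let r = alloc 1 in   (* new reference starting at 1 *)
  incr r;
  let v = !r in
  assert (v = 2)       (* static check, discharged via the WP calculus *)
```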
So, the first thing to take away, compared to, say, [inaudible] or other languages, is that you do not have your hands on the entire program context and everything you need to show that this assertion holds; without going into these things that are not immediately at hand, you just get to show it. So, most of the time, we write these programs, write these assertions, and annotate these specs with whatever we want. That gets us more than half the way there; we verify things pretty fast using the SMT solver. But at some point, the solver must fail. Solvers are inherently incomplete; of course, it’s an undecidable problem, so at some point they’re bound to fail. But beyond that, there are things that don’t look so complicated, but still make
the solver go nuts. So, let’s take this small example: a lemma in F*'s standard library about modular arithmetic, or just modular division, saying that a over p is the same if you first take a remainder and then divide by p, because the part you drop is less than p times q, roughly. You would expect to write this and just say, “okay, SMT solver, go and find it”, but it won’t find it. There’s just no way for it to find it, or at least not reliably. What we usually did before we had a tactics language was find a bunch of lemmas that basically break this down into the smallest set of facts that we know are true, call them all, assert them all one by one, and basically guide the SMT solver into finding the proof. So, in this case, the programmer, who was pretty heroic to prove this lemma, made use of three other lemmas already proven, and then added these assertions to guide the SMT solver in the proof. And this worked reliably. So.
>>Real example?
>>This is a real
example of this.
>>Thank you.
>>Yeah. This is not one of the biggest, to be honest.
>>Right, but this is, I mean, modulo the noise and some of the other proofs.
>>Yeah, I imagine it was JK then, probably.
>>Yeah, I’ve checked with him [inaudible].
>>Yeah, yeah. The “heroic” is for JK. Maybe not for this lemma, but [inaudible].
>>JK's heroism extends beyond these things.
>>Yeah, yeah, yeah.
>>They’re very tedious. [inaudible]
>>Sure. So, some proofs you want to do are just out of
the solver’s scope. We’re not going to be able to get them efficiently: the solver will not find them efficiently or reliably, and we cannot do them efficiently as programmers, or as verifiers, by finding all these lemmas and writing all these assertions. It takes a long time. So, I guess, at this point, that was the main motivation for the tactics engine, which we started last year, actually. So, this is partly repeating the talk that I gave last year, I guess. We want a way to take this thing that we need to prove and tweak it before sending it to the SMT solver, and not by manually modifying the program’s source code, but possibly somehow doing it automatically, by the shape of the thing: a metaprogram that will attempt to break down the proof in some well-defined way before sending the pieces to the SMT. We don’t want to get rid of the SMT. The SMT works; it’s fast. In the domains where it works, it works really well, so we do not want to get rid of it. That was never a goal. So, this is maybe a digression, I guess. What we wanted, basically, was first a way to have tactics, or metaprograms; I'll mostly use
them as synonyms for now, to modify the proof state. What I’m calling the proof state is the set of things you need to prove at a given point in time. So, in this lemma, for instance, we started with a single goal: in some context of typing and assumptions, prove this thing. Metaprograms, in general, are things that perform correct transformations of the proof state: they take one proof state and give you another one whose validity implies that of the previous one. So, a proof state is basically a set of goals. If we started with a single goal such as this one, “in context Gamma zero, find me something of type T zero”, this could be a proof. I’m sketching over the details a bit, but trying to prove a lemma is more or less isomorphic to finding a term of a given type related to the lemma, so I’m just going to assume that all of the goals are typing goals: you need to find a term of a given type. So, the proof state is a set of these goals, and what we’re trying to do is apply steps that break them down in whatever way we see fit as we write our metaprograms. The only thing we can do is break them down in ways that are valid: we break a goal into any number n of other goals, in possibly different contexts, with different types and new existential variables, such that if we find solutions to all of these, then we get a solution for the original one. That solution is also well-defined, by some function of the solutions above. This is very LCF style, if you’re familiar with the LCF family of proof systems. So, right. By designing these primitives to
be valid, and so on. What I’m calling a primitive is basically this rule R, this judgment R, that allows you to move from this state to this one, and instantiate this existential variable. If we design all of them faithfully, by following F*'s typing judgment, its formal definition, then all of this ought to be safe, because it simply follows the definition of the language. This is an example of the actual tool running, showing a goal saying that we need to prove this huge equality of things, and displaying whatever is in the logical context. Right. So, that’s how we want to transform this proof state into other things, and do it somehow safely. How do we actually do it? How
do we write a meta-program? We certainly cannot give the user raw access, as in “here’s the proof state, do whatever you want with it, and then return back a new one”; you could just discard goals that you need to show, or transform them arbitrarily, or whatever. So, the way we embed this into F* is that we design a new effect. F* has many effects in the language to model state, exceptions, and so on, and you can create new ones. What we do is create a new one for metaprogramming in particular, and it is basically going to be an effect of stateful computations that can also raise exceptions, but that’s not particularly interesting at this point. It's stateful over the proof state, and the proof state is kept abstract from user programs: its representation is abstract, and metaprograms are also not allowed to just set the proof state to some arbitrary thing. They can only modify it via some assumed primitives. Besides these assumed primitives, this effect that we'll be using for metaprogramming is completely standard. It’s using the feature for effect extension in F*, where you define a monad and then you get a new effect, which means you get a WP calculus, a way to specify these programs, a way to run them, extract them, and all that. All of that is reusing previously existing things, so the amount of work
is really minimized. So, this is the actual definition. It’s been cleaned up a bit for the slide, but not very much. It defines a monad from proof state to a result, which is either Success or Failed; it’s basically state plus exceptions. You define return and bind, and then you say “new_effect”: Tac has this representation, this return, and this bind, and then, boom, we get a new effect, Tac, in the language, with this representation.
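In spirit, the monad underlying the effect is something like the following sketch; the real definition in the F* sources differs in details, and `proofstate` here stands for the abstract, compiler-provided type:

```fstar
(* State (the proof state) plus exceptions, as described above. *)
noeq type result (a:Type) =
  | Success : v:a   -> ps:proofstate -> result a
  | Failed  : e:exn -> ps:proofstate -> result a

let tac (a:Type) = proofstate -> result a

let t_return (#a:Type) (x:a) : tac a =
  fun ps -> Success x ps

let t_bind (#a #b:Type) (m:tac a) (k:a -> tac b) : tac b =
  fun ps ->
    match m ps with
    | Success x ps' -> k x ps'       (* thread the proof state *)
    | Failed e ps'  -> Failed e ps'  (* propagate the failure *)

(* then, roughly: new_effect { TAC with repr = tac; return = t_return; bind = t_bind } *)
```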
Once we do that, we can write metaprograms exactly as any other kind of F* program. So, here's an example: some random tactic, mytac, applied to unit. It returns a unit, but it has the effect Tac. It does an implies_intro, which, if your goal is an implication, is going to introduce a hypothesis, and you get it here; then you rewrite with it, apply some lemma, and you basically write programs as you would in any other effect.
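A tactic of that shape might look like this; `my_lemma` and the exact sequence of calls are a guess at the slide:

```fstar
let mytac () : Tac unit =
  let h = implies_intro () in    (* goal p ==> q: introduce hypothesis h *)
  rewrite h;                     (* rewrite the goal with h, if h is an equality *)
  apply_lemma (quote my_lemma)   (* close the goal using a previously proven lemma *)
```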
>>At a higher level: can you remind me what the difference is between a tactic and a lemma?
>>Yes. A lemma is a term of F* that has, let’s say, just a precondition and a postcondition; it asserts that some statement is true. But tactics, as I'm calling them, are these computations that transform the proof state, and in particular, tactics can apply lemmas.
>>A lemma is effectful, whereas a tactic is manipulating the proof state, right?
>>[inaudible]
>>Yes. And more things, but yes, in particular, yes.
>>How does it refer to the proof state?
>>It’s implicit throughout the computation, the same way as it is in your...
>>But it’s always implicit? I guess I don’t understand.
>>Yes, in a way, it’s implicit in the same way as, if I had here an alloc 1, and then an r := 2, like in the first program I showed...
>>Yes.
>>...there, the heap, the actual full state, is also implicit.
>>Yeah, okay.
>>The actions of allocating, reading, and writing are the things that modify it. Or, if you’re writing a state monad in Haskell, and you’re only using get and put, the state is also implicit, right?
>>Okay.
>>In a way, it’s similar to that.
>>So, if I think of
staged computation, tactics are like a staged compiler: a stage can manipulate the AST and transform it. You're saying this is like playing with the proof state, but in the staged-compilation case you actually have the AST exposed, whereas here you’re saying the proof state is implicit as well?
>>It’s going to show you the AST as well.
>>Okay, so there’s more. Okay, cool.
>>[inaudible]
>>In your metaprogramming, do you allow non-determinism?
>>No. No, we don’t.
>>There is no backtracking?
>>Sorry, there is backtracking. You can fail and backtrack, but no real non-determinism.
>>If you want to backtrack,
but no real non-determinism.>>If you want to backtrack,
you have to explicitly, say in ordering in which
physically black color, you have an ordering, and tactics determines
thick liquid in your in which their valence, you try and fail, et cetera. We explicit this
simply backtracking.>>Yes, it’s quite easy. I think it’s easy given, there is a combinator that will attend a tactic, and if it fails, it will reset the state
because that also has some primitive support to
make it faster because the proof state includes
some compiler internals. I’m going to show you an
example of a backtrack in tactic in a while.>>Just be ready to answer
a little bit of suggestion. Take mylan, that’s
a piece of syntax. It’s closer in the syntax.>>Oh, that’s what
you meant. Okay, yes.>>Okay, where do that come from?>>At this side, a quick example.>>I see. It has to be somewhere. Okay, it’s implicit.>>Yes. Is the name in
your program called mylan, and you can code it
and get this syntax.>>So, it’s like the actual things you’d
prefer to actually like all the variables are globalizing in refer to it. Okay.>>Also the local ones.>>Yes. Okay.>>Right. Yes. I wasn’t
going to say a lot about how we handle syntax, so please ask if you’re curious about it. The type of this thing is term -> Tac unit; it takes a term. You can build terms independently of whether you're in the Tac monad or not, but I wasn’t going to go deep into that.
>>Is this why you're in an exception monad: the my_lemma term may not be there?
>>Yes. Well, this quote actually would succeed, but the apply_lemma would then fail in that case, and apply_lemma might fail for a million other reasons.
>>Oh, right. Once you try
applying it, [inaudible]
>>Exactly, yes.
>>Is that the only reason for exceptions?
>>No, no. A lot of things can fail. apply_lemma could fail because the lemma doesn’t apply to your goal. You could also just raise an error because you want to.
>>Writing this kind of tactic, you have to be careful that it makes sense.
>>The exception also is important for doing the [inaudible]: if you want to try to process [inaudible] in a particular way, you make some progress, and you say, wow, this is a dead end, I’m not going to finish the proof here. So, you raise an exception, backtracking basically.
>>Right. Yeah. Okay.
>>So, what I was saying
to show with this one, anyway, was that, given this monadic definition under the hood, we can still write direct-style syntax for metaprograms, which actually was not easy; at this time last year, this was not possible. Also, you can write higher-order combinators for your metaprograms, because this new effect is really just the same as any other effect. This is the classical repeat tactic combinator, which just attempts some tactic indefinitely until it fails, and returns the list of results. It's just as simple as this.
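A repeat of that shape can be written directly in the Tac effect; this sketch assumes a standard catch combinator that turns failure into a value:

```fstar
let rec repeat (#a:Type) (t : unit -> Tac a) : Tac (list a) =
  match catch t with
  | Inl _ -> []              (* the tactic failed: stop *)
  | Inr x -> x :: repeat t   (* it succeeded: record the result and retry *)
```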
>>Okay, can you say one more time why this is safe? So, there is a proof state that represents proof obligations, and you only get a trusted set of primitives; that’s the only way to manipulate this kind of thing?
>>Yes. Yes.
>>And you get to trust your primitives [inaudible].
>>Yes, the primitives are part of the TCB. That’s correct.
>>Obviously.
>>Given how we call the SMT solver, and that the proofs are not materialized in many places, it’s hard to see how to get certificates and check them and all that, as some other systems do. Anyway, having an effect, we can write higher-order
combinators. We can inspect the goals and the types of terms; this is a more complicated example. So, F* has many ways to prove a lemma or to prove a fact: you can use the Lemma syntax, or you can just write a function that returns a proof, or returns a squashed proof, or many of those kinds of things. And here’s a utility function. It’s like apply_lemma, but it checks the type of the thing it was fed, checks the type of the goal, and sees how it can make them fit. If it’s a lemma, then it tries one thing; if the lemma is actually an implication, then it’s going to try to introduce the implication, and so on, by using the primitive actions. But all the rest, like the matching: exactly, this is just F*.
>>Just to emphasize something: on the third line
or the fourth line, you see tc of t: that’s an F* program calling back into F*'s type checker.
>>Right, tc is a primitive of the Tac effect, and it’s actually wired to call into the type checker.
>>So, it takes the term to... its type.
>>To its type.
>>Yes.
>>Okay.
>>It could also fail; this one would’ve failed, right?
>>Yeah.
>>You might give it garbage.
>>Okay. [inaudible]
>>I don’t mind if
you interrupt. So yes, that was just an example. So, once we have this effect working, we can write metaprograms and call into the primitives that are going to manipulate the proof state and all that. How do we actually use them on particular assertions? It’s quite simple. Following the previous style, where we have some piece of code and some assertion in the middle: previously, this assertion was simply going to be added to the huge verification condition that we compute for this function and be sent to the SMT, pretty much as asserted. Now, we have syntax to
slap a tactic in here. The tactic is a thing of type Tac unit, or Tac whatever, and here we’re calling this assert_by_tactic primitive. This computes some normalization: a canonicalization tactic that we wrote, so that if these things are arithmetic expressions, it will try to get them into a normal form before sending them to the SMT. Actually, this one would succeed anyway, but still. So, the thing is that, inside a large program, we’ll have many particular assertions, and we can decorate each of them with whatever particular tactic we wish. In this case, there’s a bit much going on, but this is an actual lemma from a proof of Poly1305, the crypto primitive, and this...
>>[inaudible]
>>I don’t know, actually.
>>Just remember [inaudible].
>>Right. Which, I think, this version never worked, actually. This would never, or maybe one in every 100 runs if you vary the random seed, get through the SMT solver.
>>[inaudible]
>>So, this is about
modular arithmetic, and the only fact it really needed, the additional lemma for this, basically says that a plus n times b, modulo b, is equal to a modulo b, or something like this. Then, we assert this equality, and actually this is quite easy, just shuffling factors and canceling and all that, but it’s just impossible that the SMT solver will see it in one go. So, we now have a way to
actually normalize all that. We have a canonicalization tactic, generic over semirings; we call it here for the semiring of integers, because we’re dealing with integers, and before sending this thing to the SMT it will actually normalize the terms and get rid of all the need for non-linear arithmetic, which is the kind of thing SMT solvers are bad at. The tactic could do more, and could actually get rid of a bunch more subterms and normalize them further. But we don’t have to; we just stop there and send it to the SMT once we know it’s good enough, and the thing will just prove it. So, this is Meta-F*, which we use here not only for automation, but actually for having more fine-grained control over the things we’re trying to prove, or sending to the SMT solver. This thing, I think, will never, or once in a million years, go through the solver directly, while this consistently goes through, and it’s faster in all aspects, even if you count the time that the tactic takes plus the time the SMT solver takes.
>>This modular arithmetic
makes it slow?
>>Yes.
>>So, if the non-linear solver gets better, that would eliminate the need for a tactic?
>>I guess, but it’s widely known to be a problem, not only in Z3.
>>Yeah, okay.
>>Non-linear arithmetic is undecidable, while, I think, the linear fragment is decidable, and apparently also has efficient algorithms to solve it.
>>It is known that, as soon as you have multiplication and modulo, you'd better know how you intend to prove it and call the lemmas yourself, because it's hopeless to prove this [inaudible] or something that we had the [inaudible].
>>Even if we don’t have modulo, just large contexts: proving seemingly simple things, like associativity [inaudible] of multiplication [inaudible]
>>[inaudible] Okay.
>>Yeah. Right.
>>So, this was a bit from
the proving side of things: we’re trying to get these assertions to go through more easily. But once we had all this, and a way to poke at F*'s typing judgment and all that, we could actually use it as well to generate terms and new pieces of code. Right? So, we have this syntax, the underscore-by, and there’s also synth, the keyword synth; basically, they're the same thing, so treat them as synonyms. When we have an underscore-by, Meta-F* is going to run this metaprogram with this as the original goal, and the metaprogram can do anything to try and solve it. For instance, here's a really roundabout way of defining one: you get a new goal, and then you apply the primitive exact with a quotation of one, which will type-check the one, see that it fits, and put it there. And in this one, you apply the addition function, then you get two more goals, and then use exact one, exact two. This actually generates syntax
for “one plus two”, not “three”: you actually get the syntax for the addition. By extending this and just repeating the same steps, you can do arbitrarily complex things. One pretty arbitrarily complex thing is what got developed not long ago, which is a library for generating verified parser and generator pairs. So, you give it an enum type, take this one with just four constructors, and, having the library in scope with all the tactics defined, this invocation generates a specification for what a parser and generator for this enum must be, and this one actually generates the implementation of the parser, a low-level implementation that meets the spec via this dependent type here; this is a carefully defined type. This is all automatic, and it actually makes use of, it's very careful about, well: in one of these two you need to prove that two functions are inverses, that's the first one, and that’s a goal that’s very hard. If you just send it to Z3, it’s going to take a long time.
It’s hard to do. But by having control of the way these terms are type-checked, and the way we construct them, we can actually take the goal as it pops up and just normalize it, because by doing enough normalization, you can just prove that it holds, that the two functions are inverses. You would not get that if you were to just generate the syntax for all of this and put it here: if you had some external preprocessor that just generates syntax and slaps it here, you would not have that liberty, because you would get F* to type-check it, and it would do its default thing, and possibly not what you want. Beyond generating pieces
of terms in some places, we can actually generate top-level definitions, in large amounts; here we generate only one, but we could easily do many. So, say we define a type with some constructor with some parameters, and we call this %splice directive with the mk_printer metaprogram, which takes a term, and is a Tac unit, and the thing will actually generate and bring into scope this function: the printer for this type, basically. A bit like deriving Show in Haskell, or ppx deriving in OCaml, I think. But this is actually all done in user space. This metaprogram is
completely user-defined; there’s nothing special about it. All right. I briefly glossed over this: I was saying, in the parsers example, that since we’re kind of scripting over the typing judgment of F*, we get to make decisions over how to type-check certain terms. This is actually important, since typing in F*, not typability but typing, as in “this precise term, in this precise environment, has this precise type”, is undecidable in general. A real quick example shows that you would need to solve first-order logic, or actually higher-order logic. So, if typing is undecidable, F*’s type checker is only doing its best. The actual type checker that F* has is doing the best it can, but it’s not necessarily going to type-check all programs, and if you know of a new way to type-check a program, but you don’t want to go and modify the type checker of F*, you can actually use Meta-F* to generate the typing derivation for it in whatever way you see fit.
>>Do you check
that these are valid? In other words, could you actually write a bad tactic that type-checks an incorrectly typed term?
>>In practice, I guess, the engine might have bugs and you could, but if you follow the typing judgment, then no. So, you know that you apply the rules safely; for each obligation, you can see that it’s safe. But actually deciding “is there a derivation that ends with, say, this one?”, that’s undecidable; you do not know where to look. So, this actually is equivalent to proving.
>>So, if you trust the primitives that are
in the Tac effect... each of these primitives, you can see them as mirroring a typing rule of F*. If you believe that we correctly exposed the typing of F* in these primitives, then you cannot construct an invalid [inaudible]
>>Okay. But you could construct arbitrary terms.
>>Yes. You can build the syntax; the syntax is just data. But if you want to say, “hey, use this piece of syntax as a term of type T”, at that point, you have to show that [inaudible]
>>Okay. You can’t fool it into accepting something wrong.
>>You cannot override it. You cannot have conflicting [inaudible]
>>That’s also why you don’t have non-determinism, perhaps. What if you had two different things, and you tried one way and came up with type one, or type two, for instance, and both go through; you don’t have confluence anymore. [inaudible] But if you have said “always try this one first”, well, if you succeed with this one, you stop. Right? I don’t know.
>>Okay, that’s
one way to see it.
>>Okay, sure. This is kind of showing how Meta-F* can actually be seen as a way to script F*’s typing judgment to do whatever you want. Well, we actually went a bit further and started to hook into the way the tool itself works. So, in surface F*, in the thing programmers use, there are implicit arguments, as in pretty much every
proof assistant, which really have no semantic meaning, but are extremely useful. They’re usually solved by unification: when an argument has to be a certain term, it gets solved to that term, and otherwise it just stays unsolved. But sometimes you might want to define your own arbitrary strategies for it. So, we actually provide this; the hash is the syntax for implicit arguments in F*. If this were just like this, then y would be an implicit that would never be solved, because there’s nothing to tell what y is supposed to be from the types. But by adding this, which is a tactic, and which can actually depend on the previous arguments, the syntax is saying that if y remains unsolved, then this tactic is called to see if it can find a term for it; this one actually just tries to use x. So, essentially, this defines a default argument for the second element of this diag function. So, if you just write “diag 42”, you’re going to get this, because we have a free implicit here, and we’re going to call the tactic to get it. But if you give it explicitly, then you get a different thing.
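The example described might look roughly like this; the binder-attribute syntax and names are an approximation of the slide:

```fstar
(* A tactic that solves the unsolved implicit by reusing the previous
   argument x; exact and quote are standard Tac primitives. *)
let solve_with (x:int) : Tac unit = exact (quote x)

let diag (x:int) (#[solve_with x] y:int) : int * int = (x, y)

let p = diag 42      (* y left implicit: the tactic fills in x, giving (42, 42) *)
let q = diag 42 #7   (* y given explicitly: (42, 7) *)
```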
So, it’s a way to do that.
>>If it remains unsolved... so, I guess I would think that you might instead try the tactic first, because I know it’s going to work. Why waste the time?
>>I guess, yes. So, why we actually made that choice, I guess, is to allow for this: to override it. Although, I guess, in this case, it wouldn't actually have an implicit to solve. Yeah, we could think about it.
>>Heads up on time.
>>Okay, how much? Oh wow, okay.
>>You've got, like, seven minutes or so.
>>Okay, yeah. Let’s blast through this; I’m, like, halfway through the slides. So, another thing you can do
not important. This is the important one. So, we have two cool ingredients, generally in Tableau definitions and solving implicit arguments. So, type classes are like
shouting at us that we need to implement them. We define this actually, everything I’m going
to show you is absolutely user space, there’s no compiler support
except when I say there’s. So, we have a module called
FStar type classes that has an mk_class and
tcresolve tactics. Then what we can do is, in any other module, you define a type
like this, which has an equality function, and a proof that the equality is actually correct; so, a decidable equality, and a proof that it’s correct. We call splice with mk_class, giving it the name of this thing, or a quote of it, I’m not sure which one it is, and this will actually generate these two definitions, which are polymorphic in the type: they just project, from this dictionary type, the function and the proof, each of the fields. But the dictionary argument is marked with this tcresolve tactic, so you do not need to give it; F* is going to run tcresolve to instantiate it. So, now you have eq and eq_ok, which are overloaded, and F* is going to run tcresolve to try to find a dictionary for them. Since we don’t want to write all this boilerplate, we give some shortcuts for it. This is actually compiler support, but that’s about it: the word “class” just desugars, there’s like a noeq type, and then it puts in the splice, that’s it. And we also have this syntax for tcresolve arguments; it basically means it’s an implicit solved by tcresolve, and that’s it. Then we have instances, which are just definitions of this type marked with this attribute. Attributes are an F* thing that you can put on any definition. We also give a shortcut for this: instead of writing all that, just write “instance”, and that’s it.
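Put together, the surface syntax described looks roughly like this; the field names are an approximation:

```fstar
class deq (a:Type) = {
  eq    : a -> a -> bool;
  eq_ok : x:a -> y:a -> Lemma (eq x y <==> x == y)  (* the equality is correct *)
}

instance deq_int : deq int = {
  eq    = (fun x y -> x = y);
  eq_ok = (fun _ _ -> ())
}

(* The dictionary argument of eq is implicit, solved by tcresolve: *)
let b = eq 1 2    (* finds deq_int automatically *)
```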
You can also define parametric instances, and whatever, and superclasses. tcresolve, the thing that does the resolution for dictionaries, is actually this; it’s like 20 lines or something. The mk_class is actually a lot larger. And this is basically a backtracking search. Where do you see it? It’s in the use of “or_else” here: or_else, and first, all of these are combinators that allow you to do some backtracking, and as soon as something fails and you go back, the proof state is reset, along with all unification metavariables. So, you’re completely clean, back at the point where you started. The only thing this does is: you have a goal for a dictionary; try all the local binders, and then try all the global ones, and when one matches, it gets resolved, possibly with new goals for its arguments. If your instance is like one of these parametric ones, you get new goals and just recursively go through them. This is a quick implementation of type classes, which works pretty well in practice, I think.
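The backtracking skeleton described can be sketched with or_else; candidate tactics are tried in order, and a failure resets the proof state before the next attempt:

```fstar
let rec first_of (ts : list (unit -> Tac unit)) : Tac unit =
  match ts with
  | [] -> fail "tcresolve: no instance applies"
  | t :: rest ->
    (* or_else runs t; on failure the proof state (and unification
       metavariables) are reset, and we fall through to the rest *)
    or_else t (fun () -> first_of rest)
```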
Besides solving implicit arguments, there's another thing we do. F* code usually extracts to OCaml, and that’s how we compile and run it; we can extract to C as well. Sometimes, and you’re going to hear about this in the next talk, you have equivalent shapes of a program, semantically equivalent, but one of them more amenable to verification, and one of them that you actually want to run, because it’s faster. So, you might want to, just before extracting, modify the program in some way via a metaprogram, doing whatever you want. The trick here is that
the meta program, this thing will ask you
to prove the goal of your old definition
equals something new and you need to
instantiate and- both instantiate and
prove that things are equal in order for
us to accept this, so you can only
rewrite things into equals to try to keep it safe right otherwise
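The safety condition can be sketched like this. It is a Python stand-in, not the F* mechanism: "proof" here is just normalization of closed arithmetic terms, standing in for the real equality goal a tactic would discharge:

```python
# Sketch of "rewrite only into provably equal terms".
# Terms: plain ints, or ("+", a, b).

def norm(t):
    """Fully evaluate a closed arithmetic term."""
    if isinstance(t, int):
        return t
    op, a, b = t
    assert op == "+"
    return norm(a) + norm(b)

def rewrite(old, new):
    """Accept the replacement only when old and new provably agree;
    otherwise refuse, mirroring how an unproven rewrite is rejected."""
    if norm(old) != norm(new):
        raise ValueError("rewrite not proven equal")
    return new

big = ("+", ("+", 1, 2), ("+", 3, 4))   # verification-friendly shape
fast = 10                               # what you actually want to run
```

`rewrite(big, fast)` succeeds because both sides normalize to the same value; `rewrite(big, 11)` is refused.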
So, with this small code, during verification this tactic generates, in this example of course, a huge number
of additions, and in the extracted code you just get the result, because
the tactic can compute it. The equality is enforced
because the goal is an equality, and the typechecker checks that it's
actually proven. Right, so, you tell me if you
want me to skip anything.>>I think there's
a gap of 45 minutes and we're getting close
to question time. So, that means you won't
have questions at the end; you can take
another eight minutes.>>Yeah, okay, sure. So, another benefit of defining the meta
programming effect just as a new effect in F*, using the standard
mechanism and all that, is that we can
actually extract metaprograms into OCaml like
any other user-defined effect. By default, metaprograms are evaluated in the normalizer, the abstract machine
that performs beta reductions and so on inside F*; they are interpreted, and the primitives call into compiler internals and
come back with a term, with a
translation in the way. As an alternative, we can
actually just extract the metaprograms and link them back
into the compiler. So, a quick view: the usual lifetime
of an F* program is: you have helloworld.fst, F* extracts it into
OCaml, and you compile that into an executable
and run it. The F* compiler itself is actually
written in F* and follows this same path. Now, if you have
a file with a tactic, there's new support
that allows you to extract the tactic with F*
as well and compile it, but you compile it
into a dynamic object, and then you use a flag
to load it into F*, basically gaining
a new primitive step. Just as addition is
a primitive step that implements the
addition of integers, you now get a new primitive step for the tactic you just defined. It hooks into the
normalizer and so on, and you get huge speed gains, both because it's no
longer interpreted and because the barrier between the interpreter and
the actual source code is gone: they now work on
the same data structures. Forget about this example. So, coming back to
the typeclass example: this tcresolve tactic has an attribute that
allows you to compile it, that is, extract it to
native OCaml code, compile that, and load it into F*, and you get native typeclass resolution. It's a user-made program, it had no primitive semantics, no trust assumptions,
nothing, and it can be added back as an F* extension. There's nothing
you can do there to make F* go wrong, right? There's no need to trust it, and there are significant
efficiency gains: on some examples we see
it running around 15 times faster. That's just one example;
anything you can think of, you can define as a metaprogram, compile it, link it
back, and you've extended F* by defining the metaprogram
in F* itself. So, that's why the title was, kind of, "extending
F* in F*". It adds nothing to the trusted computing base, of course. Right, pretty much closing then. So, now, there are many use cases that people have been developing over
the past year, I guess, but here's one we're kind of
working on: verifying concurrent programs using concurrent separation
logic inside F*. This follows work that [inaudible]
and others have worked
a lot on. Possibly the
hardest part about verifying concurrent
programs is reasoning about resources and ownership, and how the different
threads interact. Actually, reasoning about
resources is already a problem in single-threaded programs if you want to do verification efficiently. So, separation logic was
invented, around 2000, earlier maybe, I don't know, to have more compact ways of specifying and verifying
effectful programs. Yeah.>>Concurrency: so, you assume that the concurrent
programs are deterministic, or you have to prove that
they are deterministic first by construction?>>By the current formalization, which is relatively new,
so it might change. They are deterministic because they’re
completely separate. So, they’re not racy. If you want to read
a variable, you need to exclusively own it, and no one else can write to it,
so there shouldn't be any.>>So, you can't
join [inaudible] by construction [inaudible] data
flows [inaudible] concurrency, there's no race, no
locking [inaudible].>>There's locking.>>You can have
deadlocks [inaudible].>>Yes, it's a
partial correctness.>>[inaudible]
definition, partial correctness that you’re
talking about here.>>No, we’re talking about partial correctness of
concurrent programs.>>Okay. Thank you.>>Right. So, separation logic by itself, even without
concurrency, is already challenging to
encode in F*, because of the solving for heap
frames at pretty much every primitive, or every composition of
steps, that we need to do. By some careful design of
the specifications for the primitives and of the
effect of the state, we can make the frames evident in the verification
conditions, and write a tactic that will find them and instantiate
them appropriately; we use the tactic to solve for all of the heap frames before
calling the SMT solver. If you ask the SMT solver
to find the frames itself, just forget it; it's
just not going to happen. But by doing this, the size of the tactic stays manageable: it's just finding the heap
frames, instantiating them, descending, and
trying to leave everything else as it was, and then calling the solver. That works decently well, so I'm not going to go into this. I guess the specs
for the actions: they have footprints and describe how they change the heaps, and they need to be in
a particular shape; you cannot just write any kind of predicate, or
the tactic cannot find the frames. What happens is, you get a verification condition of this style, the frame rule: you get a left heap
and a right heap, and you need to show that
your current heap is the combination of both, and the tactic
finds these things. It canonicalizes the heap expressions first,
to get these things
in the same shape and find the frame efficiently. Once it finds the frame,
it instantiates it, sends most things to the SMT solver, and then hands over to the continuation for
the rest.
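The frame-finding step itself amounts to canonicalize-and-subtract over heap resources; a hypothetical Python sketch (the resource encoding and the name `find_frame` are invented for the illustration):

```python
# Sketch of the frame-finding step: heaps as multisets of
# "points-to" resources. Given the current heap and an action's
# footprint, the frame is whatever is left over; the tactic's job
# is essentially this subtraction, after canonicalizing both sides.
from collections import Counter

def canon(heap):
    """Canonicalize a heap expression: separating conjunction is
    commutative and associative, so use a multiset of resources."""
    return Counter(heap)

def find_frame(current, footprint):
    cur, foot = canon(current), canon(footprint)
    if not all(cur[r] >= n for r, n in foot.items()):
        return None          # footprint not available: no frame exists
    frame = cur - foot
    # sanity check: footprint * frame == current (the frame rule)
    assert foot + frame == cur
    return sorted(frame.elements())

heap = [("r1", 0), ("r2", 0), ("r3", 7)]
foot = [("r1", 0)]
```

Here `find_frame(heap, foot)` leaves `r2` and `r3` as the frame to be threaded through unchanged.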
For concurrency, we're adding some more operations with some more intricate specs,
and locks. The main additions are parallel composition of
two programs, and locks working over some footprint and keeping an invariant over the locked resources. Here's a tiny example that shows this: we allocate two references, and make a lock with the invariant that they're equal, and then make a parallel
composition of two threads. Each will acquire the lock, increase both
references, and release it, and all of this verifies by checking that the invariant
holds anytime you release the lock,
and you get to assume it anytime
you acquire the lock.
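A sketch of that lock discipline, with Python threads standing in for parallel composition (the names `InvLock` and `incr_both` are invented for the illustration; in F* the invariant is checked by the verifier, not at runtime):

```python
# Sketch of a lock protecting an invariant over two references:
# the invariant (r1 == r2) must hold at every release, and may be
# assumed at every acquire.
import threading

class InvLock:
    def __init__(self, inv):
        self._lock = threading.Lock()
        self.inv = inv

    def acquire(self):
        self._lock.acquire()
        assert self.inv()      # invariant is granted on acquire

    def release(self):
        assert self.inv()      # invariant must be re-established on release
        self._lock.release()

r = {"r1": 0, "r2": 0}
lock = InvLock(lambda: r["r1"] == r["r2"])

def incr_both():
    lock.acquire()
    r["r1"] += 1
    r["r2"] += 1
    lock.release()

t1 = threading.Thread(target=incr_both)
t2 = threading.Thread(target=incr_both)
t1.start(); t2.start(); t1.join(); t2.join()
```

Both threads preserve the invariant, so every acquire succeeds and the final state keeps `r1 == r2`.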
So, those are the baby steps we're taking in concurrency for now, I guess. So, I think we're done. The next steps that
I see for this in the future are, actually, to make the usage of tactics
more widespread. It seems to be something
that relatively few
people are using, and it should be more of
a standard tool that you use. Efficiency is also a concern: when you're working with very large things
you need to prove, for example in the
separation logic, it tends to take a lot of
time to even run the tactics. And then, yes, just
nitpicking improvements all over the thing, and that's it. So, I'll take any questions you have. Thanks for listening.>>Some questions
if any [inaudible].>>What did you mean
by "bug extermination"? I couldn't hear
you talk about it.>>No, no, I skipped
all of that. I just mean that there were
some bugs in many places; defects were scattered
all over the place.>>So, in terms of usage, is it still in flux
or kind of settled? At what point can people
take a dependence on it, assuming that it's
going to stay the same?>>Right. I think it's
relatively stable. It's been about a year.>>Did you have to make
changes when you went to the concurrency, did you
have to make changes?>>Not to the engine.>>Okay.>>Well, possibly
fixes in the engine, but no real changes. Yes, it's existed for pretty
much over a year now, and I think
the metaprogramming interface is relatively stable. Yeah, I think it's rather stable.>>So, in terms of
the typeclass mechanism: is that something that
could be used the way Haskell uses it, at some level? I'm just wondering, now that you
have it as a mechanism, would structuring F*
programs around it be useful, or is it just
sort of a side thing?>>I think it is, but it forces you to take on the tactics as well. The mechanism is itself
user-defined, so it's in a module and it
has a bunch of dependencies. So, if you wanted to define, say, the addition operation this way, which is in
the very most basic module, it would force
every program to load the tactics module along with all of
its dependencies. So, it's not so clear that
you want to do it right away.>>I see.>>But, besides that,
I think it should be usable everywhere. So, yes.>>I think it’s
also a question: we haven't had typeclasses for a while, so people have found
creative ways around that.>>Yes, in different ways, yeah.>>Now it's like
[inaudible] technology, you want to change over your
existing code [inaudible].>>Do you think there's
a reasonable belief that there are opportunities, now that they have them, in
terms of restructuring?>>There are some, for sure.>>Hi all. Thank you for being here and thank you Jonathan
for the introduction. So, verification of
stateful programs is essential when it comes to scaling verification to
real-world applications. Many of these programs are low-level code
written with efficiency in mind,
and this is often not amenable to
formal verification techniques: it involves explicit
reasoning about the state and about properties
such as liveness, memory safety (that is, making sure that a pointer will not be
used if it points to null), aliasing, and
things like that. So, over the years, a lot of program logics have
been developed, including Hoare logic, separation logic,
weakest precondition calculus, and so on. Of course, that gave rise to tools that
implement these logics; some of them are
Frama-C, Dafny, VST, and of course F*, which I'm going to be focusing on
for the rest of the talk. So, a very brief
introduction to F*. Most of these things have already been
covered by Guido. At its very core, F* is a purely functional
lambda-calculus language with dependent types that
allows formal reasoning by, as usual, treating propositions as types and proofs of these propositions as
programs of those types. In addition, in
order to facilitate
reasoning about stateful code, F* has an extensible effect system that allows monadic encapsulation
of effectful programs. For instance, by writing a program
in the ST effect, we can have references and treat them
as ML references. In addition, F* is equipped with a weakest precondition
calculus that allows us to prove things about
these effectful programs. For example, for
this function here, incr, which was also
Guido's example, we can prove that if l is a pointer
defined in the initial heap, then the value of l in
the resulting heap will be one plus the value
of l in the initial heap. The way we prove that is: F* will infer a weakest precondition for this program, and then it will ask the SMT solver whether the stated precondition implies
the weakest precondition inferred for this particular
postcondition; if it does, we can conclude that this
postcondition is true.
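A toy version of that check, with heaps as Python dicts (`wp_incr` is an invented stand-in for the weakest precondition F* would infer; this is not the F* calculus itself):

```python
# Sketch of weakest-precondition checking for an `incr`-style program.

def incr(heap, l):
    """The effectful program: bumps the value at location l."""
    heap = dict(heap)
    heap[l] = heap[l] + 1
    return heap

def wp_incr(l, post):
    """Weakest precondition of incr for a postcondition on the final
    heap: l must be defined, and post must hold of the bumped heap."""
    return lambda h: l in h and post({**h, l: h[l] + 1})

# The spec: value of l in the final heap = 1 + its initial value.
h0 = {"l": 41}
post = lambda h1: h1["l"] == 1 + h0["l"]
```

Checking `wp_incr("l", post)` on the initial heap plays the role of the entailment the SMT solver is asked to discharge.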
One of the recent advances in F* is Low*. Low* is a subset of F* that enjoys a
translation to C; if you wish, it's a
shallowly embedded subset of C in F*. It comes with a compiler, namely KreMLin, that allows us to extract F* code to C and
eventually run it. Low* provides access to a C-like
memory model, with a stack and a heap exposed through effects, and allows us to write efficient low-level code with the full verification
power of F*. For instance, we can write this program, swap, that
takes as parameters two pointers and swaps the
values of those two pointers. We can again annotate the resulting type with
pre- and postconditions, and the types of the pre- and
postconditions reflect the C memory into
the logic; this is the type of reflected memory. So, the specification
of this program says that if p1 is live in memory
and p2 is live in memory, then the values of these two pointers
will be swapped after we run the program. Low* has been
successfully used to verify many crypto implementations, and has been used to build
a library of crypto primitives, namely HACL*, and- yes.>>Is every Low* program
also an F* program?>>Yes.>>You only notice if you used too much F* once you run
KreMLin; it will fail then. So, usually when
you develop in F*, do you run KreMLin
every once in a while, or?>>Yeah, roughly; after a while, you get a sense of what
works and what doesn't.>>So, in Low*, for anything that's
just about proof, I can use the full power of F*.>>So, here is an example of a very common style in Low*. So, assume you want to write an implementation of an
encryption algorithm. This is how the
program would look. We have a few parameters: we have the key, we have
the initial vector, we have the input, which
is the plain text, and we have the output, which is the buffer where the program will write the encrypted plain text,
that is, the cipher text. We have to require that all of
these buffers are live in memory, and we also have to make sure that there is no aliasing happening among
the arguments of this program. Then we ensure that the parameters will be
live in the final memory. (I meant to write m prime here; I'm sorry, that's a typo.) And finally, we can
prove that the result of this implementation is indeed the AES
encryption algorithm. So, as you can imagine, the interesting part
of this proof is here, and all the rest is boilerplate
associated with the generality of the memory model we
have access to. Since we have access
to the full C-like memory model, we have to be very explicit about liveness of pointers
and about avoiding aliasing.>>I guess we're not even
showing all of the boilerplate.>>Yes, I'm hiding it. Yeah, I'm even hiding
parameters here; it's very much pseudocode. So, to alleviate
some of the boilerplate
in this proof, what is common to do is to provide a high-level
implementation of the encryption scheme. This high-level
implementation, instead of having access to
the full C memory model, will only have access to a very customized memory model
where one does not have to reason about liveness, and in addition, we can control the amount of
aliasing that is going on; in particular, here, there is no aliasing in this record. So, this is a state type particularly tailored to
the AES algorithm, and we have four fields:
the key, the initial vector, the plain
text, and the cipher text. These are all represented
as sequences; this is a pure data structure. Then we can provide a high-level
implementation of the AES algorithm that is much lighter in its specification; in particular, the
precondition is trivial. And we have that the result of this implementation
implements the AES algorithm. Then, what we can do is to prove that our low-level implementation refines the high-level
implementation. In particular,
the specification of the low-level code will have as much boilerplate as it
did before, but here, instead of stating
the actual property that we want to prove
about this implementation, it will ensure that the low level implements
the high-level code, which in turn is proved correct with respect to
the specification. This gives
a separation of concerns: we have the high-level proof,
which is the essence of what we want to prove and captures the algorithmic properties
of our program, and we have a refinement proof, the proof that says that
the low level refines the high level, where
we can hide all of our boilerplate and
all of the reasoning that is required by the general memory model
that we have access to. So, the question that we are trying to answer in this work is: how much of the boilerplate code and proof can be
generated automatically? Or, if you wish, we are trying to systematically approach
this style in order to find patterns that will allow us to make it more automatic. So, let's start by observing
the high-level computations. High-level implementations are purely functional programs: they do not have side effects, and they are
written in monadic style so that the state threading is implicit and we can write
them in a convenient way. So, let's start by defining the type of high computations
that we are interested in. It's basically a state monad, where the state is a high-level
state denoting the view of the C memory that the high-level
programs operate on. This is a vanilla state monad, with the only difference
that it is also indexed by a weakest precondition
that allows us to reason about the behavior
of the program. Basically, if a high-level
program has this type, it means that for
any initial state and for any postcondition, if the inferred weakest
precondition holds, then we can run the program, and the results that
we get out, which are the result value
and the final state, satisfy the postcondition. We provide a DSL for
high computations, which is a monadic DSL. It has return, bind, and the get and put actions, and in
addition a recursor, a combinator that
allows us to write for-loops in a very imperative-looking style, without being imperative. F* allows us to extend its effect system and declare
this as an effect, which is very convenient
because, of course, we can write programs like that using
bind and return explicitly, but after we define
the HIGH effect in F*, we can turn our
high-level programs into programs that look
smaller and more elegant. Yes, yes.>>The formal semantics
that is defined in F*?>>Yes, pretty much.>>It is not a subset of F*?>>It is. Yes, it is. Yes, yes, yes, yes.>>Is the DSL a subset of F*?>>It is an embedded DSL.>>It is what?>>An embedded DSL.>>What do you mean, embedded?>>It's shallowly
embedded into F*.>>It is what?>>Shallowly embedded into F*.>>So, "shallowly
embedded", what does that mean?>>It means that, basically, the combinators of our DSL are themselves
F* programs.>>You just limit yourself to calling the five functions that you
see, and we call that our DSL.>>Yes, exactly. These
are philosophical questions.>>Simplify, please. Simplify. As few fancy words as possible. Maybe, but "shallow", I mean, I learnt that word. Thanks.>>Can we also put in Greek letters to feel very smart about ourselves,
so that [inaudible]>>This is not true when
you're in Greece, though.>>[inaudible]>>No, I apologize. No problem, it's
a joke [inaudible]>>[inaudible] It's
just a library that is domain-specific
[inaudible]. It's a library that
is domain-specific.>>Anyway [inaudible]>>So, here is the running
example for this framework. We can write a program that
reads two locations and then moves the contents of the first location
into the second one, and the sum of the two locations
into the first one. This is inspired by writing a program that
computes the nth Fibonacci number: this is basically
the procedure that would be run by the for-loop
in that program. So, it's not entirely artificial, just slightly artificial. The program reads the zeroth location of the high-level state, reads the first location
of the state, and then puts the sum
at the zeroth location, and the zeroth contents
at the first location, and we can write a wp that captures the semantics
of this program. The type of this program will be a high computation
with unit type (sorry, that's a typo) and this particular wp.
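A Python stand-in for the HIGH DSL and this program (the names `ret`/`bind`/`get`/`put` and the tuple-of-integers state are invented for the sketch; this is not the F* code):

```python
# A tiny state-monad DSL in Python, standing in for the HIGH effect:
# a computation is a function from state to (result, state).
def ret(x):      return lambda s: (x, s)
def bind(m, f):  return lambda s: (lambda r: f(r[0])(r[1]))(m(s))
def get(i):      return lambda s: (s[i], s)
def put(i, v):   return lambda s: (None, tuple(s[:i] + (v,) + s[i+1:]))

# shift_and_sum: read both locations, put their sum at location 0
# and the old contents of location 0 at location 1.
shift_and_sum = bind(get(0), lambda x:
                bind(get(1), lambda y:
                bind(put(0, x + y), lambda _:
                put(1, x))))

def fib(n):
    """Running shift_and_sum in a loop computes Fibonacci numbers."""
    s = (1, 0)
    for _ in range(n):
        _, s = shift_and_sum(s)
    return s[1]
```

Running `shift_and_sum` on the state `(3, 4)` yields the new state `(7, 3)`, which is exactly the loop body described above.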
Let's now study how we can take high-level programs and map them into low-level programs that operate on the full
C memory model. First of all, we need a low state, which we denote lstate: it is the pointers
into the C memory that correspond to the
high-level view that we are manipulating in
our high-level code. It comes with two functions, a pair
of lenses that allow us to go back and forth between the low-level and
high-level states. We have the
up-arrow, which takes a pair of a memory and a low state and gives us a high state; intuitively, it takes the low-level state and turns it into a high-level state. Then, we have the down-arrow, which takes a pair of
a memory and a low state, together with a high-level state, and applies the
high-level state to the low-level state in order to obtain
a new low-level memory. We require that
these two functions satisfy three properties: if we start by
lifting a low-level state and then lower it again, we get exactly the same memory; if we lower twice
in succession, only the second
lowering matters; and if we lift a low-level state that we obtained by
lowering a high-level state, we get back exactly the
same high-level state.
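These two functions and their three laws can be sketched concretely (a Python toy: memory as a dict, lstate as a tuple of pointer names; `lift` and `lower` stand in for the up- and down-arrows):

```python
# Sketch of the lens between low-level memory and high-level state.
def lift(mem, lstate):
    """Up-arrow: read the high-level state out of memory."""
    return tuple(mem[p] for p in lstate)

def lower(mem, lstate, hstate):
    """Down-arrow: write a high-level state back into memory."""
    mem = dict(mem)
    for p, v in zip(lstate, hstate):
        mem[p] = v
    return mem

mem = {"p1": 1, "p2": 2, "junk": 99}
ls = ("p1", "p2")
```

The three lens laws from the talk then hold by construction, as the assertions below check.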
So, let's see what low computations look like. Low computations
are programs indexed by the weakest precondition of
a high computation, and by a high computation that defines what the behavior of
the low-level program is going to be. They are parameterized
by the low state, and they are computations in
the ST effect over the low state. Their specification says that if we start with a low state that is
well-formed in memory, and if the weakest precondition of the high computation is satisfied in
the high state that we obtain by lifting the low state, then the low state remains well-formed
in the resulting memory; and if we run the high computation
in the state we obtain, again, by lifting
the low-level state, we get exactly the same result as we get by running the low computation, and the resulting memory of the computation
is the same as if we had applied the resulting state of the high computation to
the initial low-level memory. This is a bit elaborate, so please interrupt me
if you have questions.>>[inaudible].>>What is that?>>[inaudible].>>The second one is a->>The second one is a- Sorry.>>[inaudible].>>Yes.>>Suppose there was
something you need to do?>>So, the up-arrow has only one parameter, which is
a pair of a memory and a low state, and the down-arrow takes
two parameters: the initial pair of
low-level memory and state, and the high-level state that we want to apply to
the low-level memory.>>The down is the "put". So, it takes the old initial state and the new value you're trying to put, and gives you
the new [inaudible]>>It produces an update,
as I understand it.>>Exactly.>>[inaudible]>>What we are trying to do here is to relate, by construction, low-level stateful programs and their high-level
pure counterparts, so that instead of
writing the low-level code first and then
fighting [inaudible], you do the high-level thing at every step, at every combinator; the two are constantly related.>>Again, low computations
come with the DSL that’s very similar to
the high level DSL. So, the only difference
is the type, which is now indexed by the weakest precondition and the high computation as well. So, we see that every
low level combinator is indexed by the corresponding
high level combinator. So, lreturn refines return, lbind refines bind,
lget refines get, and so on, and so forth. We can write shift_and_sum in
this low level code style. So, this is what
we’ve seen before, and this is the corresponding
low level code. So, now, instead of having just a pair of integers
for the state, we have two pointers. Instead of using
the high level combinators, we use the low level
combinators but the shape of the program
is exactly the same. We know by the type that, the low shift_and_sum program refines the high
shift_and_sum program.>>So, the refine relation that you defined this way
is fixed. That's right?>>Yes.>>And it forces every step to match, when the step is complete. But if the user wants to do a performance optimization
in the low-level one, you could add intermediate
steps where you might break the correspondence between values, but when it returns, you are forced to return the same value.
Is that correct?>>Yes, but the general idea is that the high level
program will already have the hand written
performance optimization. The only difference between
the two programs will be their access, their
memory behavior.>>I see.>>So, we want to separate reasoning about
the algorithmic behavior from reasoning about
the memory behavior.>>When you come to reason on the high-level program, how do you show that this is equivalent to, or leverages, the
similar high-level program?>>So, the question is, how can we write the
high computation? We only want to write
the high shift_and_sum program, and somehow obtain
the low shift_and_sum program. We know by virtue of typing that it will have
the correct behavior, but we don't know yet how
we can obtain it automatically. Here is a very simple
and straightforward way to obtain a low computation
from a high computation. We can write a morphism from high computations to low computations
as follows: assume that we have
a high computation indexed by a weakest precondition; then we can trivially turn this into a low
computation by saying: get the current memory state (F* allows us to do that), lift it to a high-level state, and
run the computation there. Get the result, and then update the current memory
with what we got as the resulting state from
the high-level computation, and also return the result
of the computation. That gives us
a low computation that has exactly the
semantics that we want; however, it does not have the memory behavior
that we want, because it doesn't in fact
operate on the memory: it operates on its own memory
model, and then we just apply the difference
to the current memory. So, this is not yet
exactly what we want, but it's a very good step
towards it.
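That trivial morphism can be sketched as follows (a Python toy: memory as a dict, high computations as pure state-passing functions; `morph`, `lift`, and `lower` are invented names for the illustration):

```python
# Sketch of the trivial morphism from a high (pure, state-monad)
# computation to a low (memory-mutating) one: read the memory, lift
# it to a high state, run the pure computation, write the result back.
def lift(mem, lstate):
    return tuple(mem[p] for p in lstate)

def lower(mem, lstate, hstate):
    mem = dict(mem)
    mem.update(zip(lstate, hstate))
    return mem

def morph(high, lstate):
    """Turn `high : hstate -> (result, hstate)` into a computation
    operating on a mutable memory (modeled here as a dict)."""
    def low(mem):
        r, h1 = high(lift(mem, lstate))       # run purely on the view
        mem.update(lower(mem, lstate, h1))    # apply the difference back
        return r
    return low

# a high computation: swap the two locations, return their sum
high_swap = lambda s: (s[0] + s[1], (s[1], s[0]))

mem = {"p1": 3, "p2": 4, "other": 0}
```

As in the talk, `morph(high_swap, ...)` has the right semantics but the wrong memory behavior: it runs on its own copy of the view and only pushes the result back at the end.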
So, we know that shift_and_sum_low and morph of shift_and_sum, the handwritten low-level program and the trivial one, have exactly the same
observable behavior. So, the first question is:
how do we formalize that? We have two low-level
computations, and they have side effects, so we cannot
run them in the logic. So, how can we define this as a predicate over
low computations? Second, how can we write the morphism laws (and I will show you very soon what
the morphism laws are) that this morphism satisfies, and how can we apply successive
rewrites of morphism-law equalities to obtain
the desired low-level version of the program from the trivial low-level
version of the program? So, let's start by trying to define equality
of low-level computations.>>My suggestion: we go a
little quickly over this part.>>Okay. So, I will present
this very intuitively. This is the type of weakest preconditions
for low-level programs. We define two low-level
programs to be equal if their weakest
preconditions are precise, that is, they precisely capture the behavior
of the program.>>You want to rule
out the trivial ones.>>Yes.>>Because a precondition
[inaudible] will tell you nothing; if you want to rule
something out, something like "true" doesn't
tell you anything informative about what
happens to your arguments.>>Then, we require that these weakest preconditions are precise, and also that
they are equisatisfiable: for every initial state
and every postcondition, the first one is true exactly when the
second one is true. Then, we define low equality as equality of the low
weakest preconditions, and we extend our assumptions with an axiom that says that if low equality holds for two
low-level computations, then the two low-level computations are equal. Using that, we can prove these nice equality lemmas which say that applying the morphism
to a high-level combinator gives us the
low-level combinator: applying morph to a bind is the same as an lbind with morph
pushed inside, and morph of the other combinators [inaudible] likewise pushes morph inside. Here's the high-level picture: we have a high-level computation that we can turn, with morph,
into a low-level computation, and then we can
successively apply the rewrite steps in order to obtain the low-level
implementation of the program, which is exactly what
we want it to be. Our goal, and what Guido has
already started working on, is to be able to [inaudible] these
transformations with a tactic. We aim to build a generic,
user-defined, equality-based program
transformation framework that will be able to
rewrite with equalities, proven correct in F* and
provided by the user, in order to perform
optimizations on F* programs. So, here is the final picture
and our goal for this work: we want the programmer
to be able to write the high-level implementation
of encryption algorithms, and we aim to automatically derive the low-level implementation
of the algorithm, together with a proof that it refines the
high-level implementation. We plan to apply this
to implementing more cryptographic primitives and to extending the already
existing cryptography libraries. This concludes
the first part of the talk. The second part of the talk is just one and
a half slides, though. So, I will briefly talk to you about another line of work, another project that I worked on during the first month,
before the POPL deadline. This is normalization
by evaluation (NbE) in F*. So, Guido talked about tactics, and tactics
are F* programs executed at type-checking time. This means that they are
interpreted, and often not as efficient as we would like. So, an idea also followed by other proof assistants,
such as Coq, is to interpret F* programs in terms of OCaml programs;
OCaml is the host language, the language that
the type checker is written in. Then, we use the
evaluation mechanism of the host language in order to evaluate these terms, and then translate them back
into F* terms, and we know that
the translation of these OCaml terms back into
the F* AST will yield values. This is the property
of our evaluator. So, NbE looks like this: translate in
the empty environment and evaluate. This will return an OCaml term; then read it back into the F* AST, and we know that
this term will be a value. Then, when we write a program, we can insert
normalization requests for some subexpressions,
and we can add a flag to notify the normalizer
that we want to use NbE for normalization
of those subexpressions. Then, whenever the normalizer hits an application node, if the head of
the application is a normalization request and
the flag for NbE is on, it will use NbE
to normalize the F* term. This turns out to be orders
of magnitude faster than the interpreted code, and it was part of the POPL 2019 submission.
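A minimal NbE sketch, with Python playing the role of the host language that OCaml plays for F* (the term encoding and fresh-name handling are simplified inventions for the example):

```python
# Normalization by evaluation: object-language terms are evaluated
# into host-language values (lambdas become real closures), then
# "read back" into terms, which are normal forms by construction.
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a)

class Neutral:
    """A stuck term: a variable, or an application headed by one."""
    def __init__(self, term):
        self.term = term
    def __call__(self, arg):
        return Neutral(("app", self.term, readback(arg)))

def evaluate(t, env):
    """Evaluate a term into a host value."""
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":
        return lambda v, t=t, env=env: evaluate(t[2], {**env, t[1]: v})
    f = evaluate(t[1], env)
    return f(evaluate(t[2], env))

def readback(v, n=0):
    """Turn a host value back into a term; closures are applied to
    a fresh variable, so the result is in normal form."""
    if isinstance(v, Neutral):
        return v.term
    if callable(v):
        x = f"x{n}"
        return ("lam", x, readback(v(Neutral(("var", x))), n + 1))
    return v

identity = ("lam", "y", ("var", "y"))
const = ("lam", "a", ("lam", "b", ("var", "a")))
```

Evaluation happens at native host-language speed; only the final readback walks the result, which is where the speedup over a term-level interpreter comes from.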
had to tell you today. Thanks so much for listening. I’m happy to take questions.>>Because it is
a list of authors [inaudible].>>So, going back to the high-to-low transformation: I have a bunch of proofs already written at the low level in F* for these cryptographic primitives.>>Yes.>>So, yes.>>So, the idea would be to replace those?>>Maybe at least use this
framework for future efforts. But, maybe some of the
existing code can be simplified as well, at least.>>You have stuff coming to post quantum algorithms we
haven’t implemented yet. We have high level space
for some of them. We’d prefer [inaudible] , right? To verify that with
the additions of F* rather than slaughter
you to the break of.>>Yes. I was just
curious if there were the before and after you could try to look at how much simpler it was.>>That will definitely
be one of our evaluation.>>They’re some simple
enough algorithms that it’s not too much
we deal to rewrite them.>>To just do it, yeah, I see. You wouldn’t do it if
you didn’t have to do, but for a paper, or for some demonstration.>>The existing ones are
so hard to maintain. For instance, one
through five which is measured or with
pretty when a finite field that is such a pain because, one reason that Nero showed with those
gigantic modulo proofs.>>Right.>>But also, because
the reasoning about the math is interspersed with reasoning
about the low-level state.>>[inaudible] Yeah.>>Then, you're struggling to perform both at the same time, and we could just split it out: have some code that is concerned only with the low-level stuff, have it generated automatically, and then do all of your modular reasoning on sequences, with all the liveness and modifies clauses set aside; that would be such [inaudible].>>[inaudible] Can legacy code be converted, or does it all have to go from specification forward?>>We have not applied it yet to anything extensive.>>Existing code.>>Yeah, existing code. So, basically, right now what we have is
small examples written by hand, basically in the style of what I presented here. That's the largest thing that we have applied it to so far, but there's more to come.>>I'm not sure about the scope of your question, if you're asking whether we use Low* to verify existing code.>>It was about the experience: what it would be like if there was a practical example where somebody has done the work going in the other direction, to see what the experience feels like.>>Yeah. So, a lot
of our crypto code: usually we don't try to innovate in finding new ways of implementing existing crypto primitives. We take existing implementations that were hand-optimized C or assembly, transcribe them into Low*, and do the proof there.>>Then, regenerate them.>>Then, regenerate
them. Exactly.>>There was one more question.>>So, yes. I just have a question about the two kinds of proofs. If you were to use tactics and replace these older proofs with the new stuff, would they actually execute faster? Would they be easier to prove?>>You mean with the old style, but using tactics?>>Doing it at the low level as opposed to the high level with translation.>>Has anyone tried that, doing the specification at the low level, not doing the layering thing?>>So, you can use tactics to do the low-level proof directly. Is that what you are [inaudible]?>>So, I'm comparing a low-level proof by hand versus a high-level one translated to low level with tactics, those two data points, because: would one be faster? Would Z3 be faster with the tactics, or would it be a wash?>>In terms of proving, you can take a high-level program and transform it to [inaudible].>>Yeah.>>There's almost zero Z3 proof.>>Right. Okay. So, there would be a win.>>Yeah.>>Especially since the Z3 one was already complicated.>>Yes. So, we do expect an improvement in
verification time as well. Yes. Thank you so
much for the remark. Okay. Thank you again.
