PLSE Seminar Series: Amal Ahmed, “Compositional Compiler Verification for a Multi-Language World”


ZACHARY TATLOCK:
All right everyone. Welcome to the PLSE seminar. Today, I am incredibly excited
that we have Amal here. So her work’s been on a
lot of different topics spanning all the
classic PL favorites– modularity, type
systems, et cetera. But today she’s going to
talk about a particularly interesting challenge that is
near and dear for many of us. It comes from
compiler verification. So we’ve seen many
landmark successes on how to verify compilers
for realistic languages, but a bunch of problems remain. Real systems actually tend to
compose components implemented in different languages. And the problem with
existing approaches– or past approaches– to
proving compilers correct is that you weren’t
really able to compose those correctness
guarantees together when you brought together
those pieces of a system, and still maintain
the guarantees. So Amal’s going to
show us that today. She also does a lot of
leadership in the community, helping with things like this
young community for AESOP. And she’s a big time
professor from Northeastern and a great role model. So watch closely, and listen up. Thank you very much. AMAL AHMED: Thanks, Zack. Thanks for the
wonderful introduction. All right, so what I’m
going to talk about today is compiler verification,
in particular compositional compiler verification. But before I jump
in, let’s just sort of– a nod to how old
this entire area is. It goes back 50 plus years
now, back to the 1967 paper by McCarthy and
Painter, where they talked about proving a compiler
for arithmetic expressions correct. How do we do that? But really, the big
Renaissance, if you will, in compiler verification
started in 2006 with Xavier Leroy’s
paper on CompCert. This was the first piece of
work that showed us how to take a compiler for a realistic
language and do a proof of compiler correctness in a completely machine-checked way and also get an implementation of the compiler out the other side– verifying a compiler using a proof assistant. Now, the amazing
thing about CompCert was that it kind of provided a
proof architecture for people to follow. And that led to a lot
of work that came after. This is just a tiny
little sampling. So people took how Xavier was
doing compiler verification in CompCert and tried
to essentially use that methodology
to verify compilers for Java threads, just-in-time compilation. CompCertTSO was this work where you’re compiling to relaxed-memory concurrency, a total store ordering model. And then, of course, there’s
the work out of Penn on Vellvm, where they verified one major
pass of the LLVM, the mem2reg pass of the LLVM back end. And there’s the
CakeML compiler, which is a full-fledged
compiler for an ML all the way down to assembly. Now, all of these
things are up there. But there’s a problem
with all of them, which is that their compiler
correctness statements make an assumption that you will
only ever write whole programs. You will only ever
compile whole programs. So the correct
compilation guarantee applies if you take
a whole program– and throughout this talk, I’ll
write big P for whole program, not to be linked
with anything else– you take a program in the
source and you compile it down to a program in the target. But in reality, we almost
never write whole programs. Even when we think we’re
writing a whole program, we’re writing a component. Even Hello World is
sort of a component. I’m going to write little
e for program expression. You’re writing some
subset, some fragment. So if you compile
a source component, which is my word for
just not whole program– if you compile the source
component down to a target component, you almost always– whatever program you
write, you end up linking with some
sort of low level code, some sort of
low level libraries, the runtime system, et cetera. And much more
generally, these days we write a lot of
multi-language software. We combine code that might be
code written in a completely different language–
it’s supposed to be red, doesn’t look very red– compiled down to the same
target and then linked there. And so the problem with all
of the papers that I listed on the previous slide is that
they don’t take this scenario into account. This is how we actually
use our compilers. And there’s something missing
in their compiler correctness theorems. They don’t fully apply to the
way we use these compilers. So this whole area– correct compilation
of components, not whole programs– is called
compositional compiler verification. That name has come
to be adopted. So what we want here is to be
able to compile a component and then link it with some
target code and run that. We need to be able to talk about
what correctness means here in the presence of linking
with some other code. And what I’m going
to talk about today, really, is I want
to focus on why specifying compositional
compiler correctness is so hard. Even stating what the compiler
correctness theorem should be is hard and non-obvious. I’m going to do a sort
of survey of three major recent compositional
compiler correctness results and show you how they
all specify compiler correctness differently. And what people who read these papers don’t understand– and to be honest, even we experts who work in this area take a while to understand– is what the pros and cons are of specifying compiler correctness in each of those particular ways. And then, towards
the end of the talk, I’ll introduce a generic
CCC compositional compiler correctness theorem that
serves as a sort of framework. And I’ll try to convince you–
by that time in the talk, hopefully, you’ll see why that
should be the right desired theorem, if you will, for
any compositional compiler correctness result.
And then I’ll end with lessons drawn from the
early part of the talk for how we want to formalize linking,
because that’s linking happening right
there, and how we want to think about verifying
multi-pass compilers. And I’ll talk about how my
group is trying to approach these things these days. I’ll just give you a little
taste of what we’re working on. OK, so compiler correctness. And please, feel free to
interrupt with questions. I love back and forth
during the talk. Don’t save them for the end. OK, so if we have a source
program s and we compile it to a target program t, what
compiler correctness is all about is we want to
show that s and t somehow have the same behavior. It’s a 30,000 foot view. So the whole question
is, how do we express that s and t have the
same behavior, that they’re equivalent somehow? OK. Now, if we just step
back for a second and talk about how do we
express this equivalence for whole programs–
let’s look at that first. If you have a whole
source program, Ps, that you compiled
through a target program, Pt, then the way that CompCert
and the other papers that I showed earlier specify that
sort of compiler correctness is essentially as follows. The essential idea is if
I run the target program, Pt, then I’m going to see the
same observable behavior– file opening, printing,
whatever else– I’m going to see the same
observable behavior as when I run the source program Ps. And more formally, the
way this is specified is you have to come up with
a relation between source and target programs– a simulation relation. And you have to show that
your initial programs are related by the relation. And then you have to show that, no matter how many steps you’ve taken, if your P source and P target are related, then if you take one step at the source, then you can take zero or more steps at the target, and they will still be related according to that same simulation relation. OK?
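[Editorial sketch, in notation not used on the slides: writing $\sim$ for whatever simulation relation was chosen, the property just described is roughly

$$P_s \sim P_t \;\wedge\; P_s \longrightarrow P_s' \;\;\Longrightarrow\;\; \exists P_t'.\; P_t \longrightarrow^{*} P_t' \;\wedge\; P_s' \sim P_t',$$

together with the requirements that the initial configurations are related and that related configurations produce the same observable behavior.]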
Yes? AUDIENCE: So after each step, do we rewrite the program source? AMAL AHMED: You don’t
rewrite the program source. You sort of think
about scenarios. Well, so in a sense, what the
theorem is saying intuitively is that if you ran it– and yes, in that sense,
this is a reduction. This arrow up here is
the operational semantics of the program of the
source language being used to evaluate the program. AUDIENCE: I see. AMAL AHMED: Yeah. In the theorem itself,
you’re not actually running the program. You’re talking
about an arbitrary state that your source
program might be in and an arbitrary state
that your target program configuration might be in. You assume that they’re related. If we take a step, we want
to show that the new ones are related again. So P here essentially becomes
a program configuration for a complicated language. I’m abstracting over that. And then there are issues of
forward simulation and backward simulation I’m not
going to go into. For those of you
who know, you know that there is no such thing. So these are closed simulations. Just by saying that, if
I run the target program, and I see certain observable
behavior, when I run the source program, I’m going to see
the same observable behavior. That’s all we’re saying, right? But if you think of it, this is
a very closed-world assumption. I have my whole program
that I need to run. If we tried to ask
that same question when we talk about how do we
specify correct compilation of components, these
are now components. I can’t run them. They’re pieces of a program. They aren’t a whole program yet. So this idea of,
let’s run them and see if they have the same
observable behavior, doesn’t even hold already. So the question
is how, then, are we going to express that a
component, es, that compiles to a target component, et– how are we going to express
that they are equivalent? And intuitively, what we want– at the end of the day, what we
want out of these theorems is we want them to say
something about what happens when we link
our compiled code with some other code. et is our compiled code. Some other code that
someone gave us, et prime– this other target code. And what I want is to be able
to say that when I link together et with some other
et prime, and I run that, then I should get
some corresponding behavior at the source. So et linked with et
prime, when I run it, I should see the same
behavior as es linked with– well, the question is what. And that’s sort of the heart of the problem.
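[Editorial sketch, notation not from the talk: for compiled code $e_t$ and target code $e'_t$ of acceptable provenance, the theorem being asked for has the shape

$$\mathrm{behav}(e_t \bowtie e'_t) \;=\; \mathrm{behav}(e_s \bowtie \;?\,),$$

and the rest of the talk is about what may stand in for the question mark, and which $e'_t$ are allowed.]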
OK. So this is compositional compiler correctness. And now I want to
point out, before I start answering this
question of what this is, that when we specify
this equivalence here, there’s a lot baked into it. There are a lot of sort of
constraints built into it. One of the big ones
is, what are we– so these are open programs. So, somehow, this
equivalence has to say something about what
you can allow the target code to be linked with and what
you can allow the source code to be linked with. It’s baked in to
that equivalence. And there are many
different choices you could make
about what et primes you’re going to allow your
target code to be linked with. You could choose the
simplest possible answer, which is, I’m only
going to allow linking with et primes that
come from the same compiler, from your verified compiler. I call that separate
compilation, using the same
compiler to compile different pieces of
code down to target and linking them together. Separate compilation is easy. I’m not really going
to talk about it. I’m going to use the term
compositional compiler correctness to mean
more than that. So other choices
we could make are, I won’t allow linking with
et primes that were produced by a different compiler, but
for the same source language S, the source language that your
verified compiler is compiling. Or a compiler for
a very different– for a different language,
let’s call it R. And then I’ve sort
of tried to make a distinction between compiler
for a different language R and compiler for a
different language R that is very different from
S. And what I have in mind there is, there’s
something subtle going on. There’s this question of, do you
wish to allow linking with et primes whose behavior
cannot even be expressed in your source language? Remember, we’re after semantics
preserving compilation. This is a scenario in which
you’re not preserving anything. Do we want that? And I would argue
we want that if we want multi-language software. And I’ll come back to this. I’m just going to say
one other thing about– if we want to be able to
do compositional compiler correctness, realistic
compilers, then, just sort of stepping
back and saying, what are the properties
that we need? Well, I would argue that however
you specify that equivalence, you should permit linking
with target code that’s of arbitrary provenance. So I don’t want it to just
be produced by my compiler. And you have to be able
to support verification of multi-pass compilers because
that’s how we write compilers. They are multi-pass. They’re complicated. You can’t possibly write
a one-pass compiler for a realistic language. And these two things are
tough to do together, which is a large point of the talk. OK. I’m going to start
with this survey. And really, it’s going
to be a focus on how do different compositional
compiler correctness results specify this equivalence. And I want to bring
out a few things. So all of the papers on
compositional compiler correctness– they must all allow
linking with target code. And in the literature,
linking is often referred to as horizontal
compositionality. You can already sort of
see it in my pictures. I can compose my target
code with some other code. And on the other side, I compose
my source code with something. So linking is thought of as
horizontal compositionality. Compositional compiler
correctness results have to have that. Now, the choice of how you
specify that equivalence also ends up affecting how you
check if some target code that someone gave you, some
et prime, is OK to link with. Does it fall within the bounds
of my compiler correctness theorem? Zack, you have a question? ZACHARY TATLOCK:
Well, one thing. So there’s maybe
two different senses of not knowing, or having
something that you– like, it gets impossible to run. So even in the original
CompCert semantics, before compositional
CompCert, which is not the same compositional
you’re talking about– but they have external
calls, which allow sort of arbitrary behavior. But you could
imagine restricting– you say, well, for this external
call, here is some spec for it. And then you would
know what that does. And then, as long
as whatever happens when you follow the calling convention into that code, you can sort of put
anything in there. AMAL AHMED: You can
put anything in there. So the way that worked,
the external calls, it was just about the external
behaviors and sort of noting the trace, essentially. It didn’t– essentially,
the interaction between the external code and your code
was not very permissive at all. It really wasn’t
permissive, right? This idea of jumping
back and forth, or callbacks, or something that
would be external component in yours, that’s very
subtle to reason about. And that’s exactly what
gets completely left out. And in the space of
compositional compiler correctness results, what
people are trying to work out is, how do we allow that
maximum flexibility, interrupting the code. Callbacks is the
really tough one. OK, how do we check if some
et prime is OK to link with? And then, this issue of
multi-pass compilers. If you have a
multi-pass compiler, let’s say you’re going from
source to intermediate, and intermediate to target. You want to verify the
source to intermediate pass. You want to verify the
intermediate to target pass. And then you want to put
these two correctness results for the two passes
together and say that I have end to end source
to target compiler correctness. So essentially, it’s a
transitivity property, right? Correctness of each
pass implies correctness of the whole, end to end. And so this
transitivity that you need for multi-pass
compiler correctness is often referred to as
vertical compositionality.
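[Editorial sketch: vertical compositionality is, stated loosely,

$$\mathrm{correct}(S \to I) \;\wedge\; \mathrm{correct}(I \to T) \;\;\Longrightarrow\;\; \mathrm{correct}(S \to T),$$

that is, per-pass correctness composes into end-to-end correctness.]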
Again, I sort of did the motion, source to intermediate, intermediate to target. I’m bringing these up because
I’ll come back to it later. These ideas are conflated
in the literature right now, and I’ll make a point that
maybe they shouldn’t be. And finally, imagine that
you’re a reviewer of one of these papers on compositional
compiler correctness. I’ll try to explain to you why
you have a very, very, very tough job figuring out if
the people writing the paper actually got the theorem right. Let’s jump in. So I have my linking line. This is original
CompCert, before 2.7. It allowed linking with nothing. The compiler correctness
theorem basically did not allow linking with anything. It assumed you already
had a whole program. There’s work on SepCompCert. This is CompCert with
separate compilation. So, essentially, if
you compile two modules with exactly the same
SepCompCert compiler, then you can put them together and
you have a compiler correctness result. SepCompCert has now been
built into mainline CompCert, so since version
2.7 of CompCert. And we’re now at 3.4, I think. This is built in. The CompCert as we
know it now does handle separate compilation. But I want to talk about
what comes after that. I’m going to talk about
this work on Pilsner, out of Max Planck. This is Derek and Viktor
Vafeiadis and their students. So this essentially
allows linking with code that comes from
a different compiler, but for the same
source language, S. And then there’s Compositional
CompCert out of Princeton, Gordon Stewart’s dissertation. This effectively
allows you to link with code that’s compiled
from a different language R. Essentially, for them,
it’s any language that falls within the
CompCert languages– in a set of CompCert languages. And then, finally, I’m
going to talk about work that my students and I have
been doing on this idea of using a multi-language semantics to
specify compiler correctness, where we mix the source
and target into one, and use that to specify what
the compiler correctness theorem should be. And this falls into
the category of where we show that you can link
with code that could never be expressed in your
source language, in your compiler
source language. So we’re getting more and more
permissive as we go down this. And I’m basically going
to survey these three. So let me start with Pilsner. I won’t talk about
all of Pilsner. I’ll talk about
the basic approach. Pilsner uses this
approach that I refer to as specifying
this equivalence using a cross-language relation. So when you have a
source and a target, compiled from source to
target, the first thing you do in order to specify
compiler correctness is you go and you specify
this equivalence as– you actually write
down a relation. This relation is
your specification. It relates, when is some source
expression, or component, or fragment, related to
some target component? So you actually specify this. It’s cross-language because
it’s two different languages on the two different sides. This approach has been used
in work like on compiling ML-like languages. So normally, that cross-language
relation is a logical relation. So it relates source
programs and target programs. And Nick Benton and Chung-Kil Hur had
an ICFP ’09 paper where they did this. And then Derek and Kil extended
this later to a language where the source had
mutable references in it. And actually, before
I go on, I should say that this approach of
using a logical relation to specify when source and
target should be related, this cross-language relation
approach, it goes back decades. There are papers in which people
have used the logical relation approach, this
cross-language thing, to prove their
translations correct. It’s just that before ICFP ’09,
before this Benton-Hur paper, they were normally doing it for
really small toyish languages. So it would be
normally languages that were not Turing complete– terminating languages. And that was because the
logical relations technology didn’t quite exist back
then to be able to scale it to more practical languages. That’s why this one’s given
the bulk of the credit, because they did it for a
Turing-complete language down to an SECD machine. Now, the problem with
these two results is that they had
single-pass compilers. They could not
prove transitivity. These cross-language
logical relations– think about what’s going on. If you set up a relation
between source and intermediate, you set up a different
relation between intermediate and target. Now, for
transitivity, you need to prove that a third relation between source and target holds: if S and I are related and I and T are related, then the S to T components are related. And this is not easy to do.
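[Editorial sketch, my notation: with a relation $\approx_{SI}$ for the first pass and $\approx_{IT}$ for the second, the obligation is to exhibit some third relation $\approx_{ST}$ such that

$$e_s \approx_{SI} e_i \;\wedge\; e_i \approx_{IT} e_t \;\;\Longrightarrow\;\; e_s \approx_{ST} e_t.$$]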
The main sticking point becomes if you have a language which allocates mutable references,
it becomes pretty much impossible to do. So this work was
followed up by work on parametric
inter-language simulations. This is Derek and Viktor
Vafeiadis, and Georg Neis, and a couple of others. Sorry, I forgot. So inter-language, again,
think cross-language. But they essentially
crafted particular relations so that they could
prove transitivity. So they verified a
multi-pass compiler. They proved transitivity. But it requires a lot of
effort because they do still have a relation between
source and intermediate, another one between
intermediate and target, and they need to show
that if those two hold, then the source to target
relation will hold. Yes? AUDIENCE: So when you
have more than two passes, does it get quadratic? AMAL AHMED: That’s a
really great question. I don’t know, because
no one’s done it. And here’s the funny thing. The Pilsner paper has a source. It has an intermediate language
in which they then do many inter-lang– it’s called i– and then they
do many different passes from i to i itself, and then
they go from i to t. So yes, I think it does get– we don’t know if it’s
exactly a quadratic law, but I can imagine
it’s not easy yet. So let’s sort of zoom in
on how these relations work because I want to tell you the
pros and cons of using this. I’m going to use, again, very
high level sort of notation, but let me explain to you
what my notion of linking is. So I’m thinking of
a source expression es that has type T
because these were results for ML-like
languages, so I thought I’d stick with that. This expression has a
free variable in it, and think of that
as the hole that I’m going to plug in when
I link with other code. So when you give me other
code of type T prime, I will link it with
es on this side. On the target side, I want
you to imagine that the code that I’m compiling to– this is supposed to
be purple, I don’t know if it looks like it– this target code, you can think
of it as having a purple x. This has a blue x. This has a purple x. So those are my
little holes where I will link in other code. That’s my linking model. I’m keeping it very high level. So what this approach says is
that if es compiles to et– normally a
type-directed compiler– then es is equivalent to et at– and we normally put
the source type tau, because es should
have that type. And we ignore whatever
the type of the target is. Something appropriate. Now, what’s really going on
under the hood in this relation is the following thing. This is the essence of it. Normally, they’re very large,
but this is the essence of it. What this is really saying
is that if you give me an es prime that’s
equivalent to et prime– PILS-related or cross-language
related to et prime at the appropriate
argument type tau prime– then I will link es with es
prime, link et with et prime, and they will be equivalent.
Now that you know that, let’s see what happens. So suppose that I’ve verified
my compiler using this approach. Now I compile es to et,
and I have that theorem. It’s done. I’ve proved compiler
correctness. You come along and you
give me an et prime that I can go link my
compiled code with. Here’s the question for you. Does the compiler
correctness theorem permit linking with this et
prime that you just gave me? How can we check? Yes. AUDIENCE: You have to find a
program in the source language that is equivalent to et prime. AMAL AHMED: Exactly, right? Because it comes
from the definition. Someone just gave
you an et prime. You need to come up
with an es prime, and you need to show everything
before the implication that these two are equivalent. Once you can do that,
then you’re good. But otherwise, if
you can’t do that, then this is not
an et prime that is considered kosher by your
compiler correctness theorem. Sorry, this is just
displaying that. You need to come up
with an es prime. You need to show the proof
that they are equivalent. So this is problematic. You need to come up
with the es prime. That’s not feasible in practice
unless you had the source code to begin with. Let’s say you had the
source code to begin with, and you compiled
it to the target. Notice that I have to show that
these two are related according to the same relation that
I’m using for my compiler. So if this is my compiler
and this is your compiler, notice that your
compiler from S to T has to use the same equivalence
as my compiler for S to T. And what’s annoying about
this is, are we ever going to build two serious verified
compilers from the same source, to the same target, with the
same verification methodology, and the same relation? We are not. This talk is a lot about
poking holes in our assumptions about the compiler– about the
theorems that we say we want, and sort of talking about,
do we really want that? Is this practical? And then, the other thing to
notice is that you cannot link with any et prime,
in this framework, whose behavior cannot be
expressed in the source. This requirement that
you need to come up with an es prime whose behavior
is equivalent to et prime is pretty serious. Now before I move on, I
just want to say one thing. I think that this is not so bad. In the sense that I
could say that if you’re the compiler writer who
wants to make that choice– when you’re verifying your
compiler, you want to say, I don’t want to allow
linking with code that can’t be expressed
in my language– fine. But this is sort of far
more damning in my opinion. Though, this won’t allow you
to do multi-language software in full generality. AUDIENCE: I mean that’s
fine, if you want to make that– it’s a fine distinction. There’s, of course, many shades
of gray of the second bullet. And a lot of times,
it’s exactly because we want to be able to do some
things that are inexpressible. Like, I just don’t
have network semantics and I want to call out on the network. AMAL AHMED: You want to
write code in some language or in some DSL,
and then you want to link with code in
maybe the host language. And they have different
expressive power. How do you think about that? So I’m completely with you. I think we want to be able
to link with code that is not expressible in your language. Otherwise, we’re
never going to be able to handle the
multi-language software scenario. That’s why we write code
in different languages and put them together. Yeah? AUDIENCE: How different are
these two bullet points? Because if you say that et
prime cannot be expressed in the source, isn’t that
exactly saying that there’s no es primes that you could find? AMAL AHMED: Good point. I’m making a distinction
between the first and the second because with the
first one, I want to draw your
attention to the fact that I gave you some
code– et prime. Can you run a type checker,
or some sort of program on it to validate whether
this code is kosher according to your compiler
correctness theorem? In this work, it isn’t. We can’t write a decompiler. Yeah? AUDIENCE: Does this
have to be constructive? I don’t necessarily need
to find an es prime. AMAL AHMED: You just need
the proof that it exists. Proof that it exists, and
it’s equivalent to et prime. AUDIENCE: So if I’m
looking at x86 assembly, there’s some very
cheap– like, if you stay within a certain
instruction subset, you’re not doing
anything I can’t do. AMAL AHMED: Actually, you
could do this with types, too. In our work, we do it with
a typed assembly language. We can use types
to restrict what et primes are OK to link with. And then, if you can
show something about– we basically type
check to say, yes, I’m allowed to link with it. And that is OK because
that’s an automated way to check whether et prime falls
within the purview of compiler correctness. That’s the only point
I was trying to make. And specifically,
in these papers, this was not the case
because they don’t have type preserving compilers. They don’t have
types of the target. They don’t have
any way to check. So it’s something
to be careful about. There are ways to get around it. Types are my favorite, or
some sort of specifications. So remember, I told you that
horizontal compositionality is conflated with linking. I just want to sort
of pause and say that let’s look at what
horizontal compositionality should mean, and let’s look
at what linking should mean, because they’re
not the same thing. What do we normally mean when we
just say the phrase horizontal compositionality? Well, I claim that this
is kind of the picture that we would want– that if you have an es and et
that are equivalent according to my source target relation,
and I have an es prime and et prime that are equivalent
according to my source target relation, horizontal
compositionality should be the
property that says, well, you can put them
together, and the combined es es prime is related
according to this ST relation to the combined et et prime. When we PL people bandy about
the term compositionality, this is kind of
what we would mean. But I claim that’s not what we
want for compositional compiler correctness. Here’s what I think we
want, or I claim we want. We want to be able to
link with an arbitrary et prime that comes from anywhere. And it shouldn’t necessarily
have a source level counterpart. So what we really want,
if we could get it, is that et linked with
et prime is somehow equivalent to es linked with the
same et prime, because nobody gave you any es prime. The only thing you
have is the et prime. We should be able to
do– it would be nice if we could prove theorems
that could just say, well, I want to link with this. That means I want to
link with this here, too. But of course, this leads
to the obvious question. What is source target linking? What language am I putting
on this side of my relation? It’s a mixed language,
source and target. But I call this
source-independent linking.
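[Editorial sketch, my shorthand, contrasting the two properties:

$$\text{horizontal compositionality:}\quad e_s \approx_{ST} e_t \;\wedge\; e_s' \approx_{ST} e_t' \;\Longrightarrow\; e_s \bowtie e_s' \;\approx_{ST}\; e_t \bowtie e_t'$$

$$\text{source-independent linking:}\quad e_s \approx_{ST} e_t \;\Longrightarrow\; e_s \bowtie e_t' \;\approx\; e_t \bowtie e_t' \quad\text{for arbitrary } e_t',$$

where the second is only meaningful once linking source with target is given a semantics.]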
Our compiler correctness theorems, the equivalence should be
such that they now don’t just support horizontal
compositionality, they should support this. Otherwise, you constantly will
find yourself in the position where you need to come up with
es prime for a given et prime. Fortunately, the
next two results do support
source-independent linking. So let’s see how that works. So Compositional CompCert and
our multi-language semantics work. So, going back to
square one, the question is, how do we express
that equivalence when someone’s given us an et prime? Well, I just showed you on the
last slide that what I want is source-independent
linking, and somehow I want to use the
same et prime here. And what I want, then, out of
my compiler correctness theorem is these two put
together behave the same as those two put together. But of course, that means,
how do I put together source and target components? So you need a semantics
of source and target interoperability so that
you can put together these two components
from these two languages. Compositional CompCert
does that using something called an interaction semantics. In our work, we do that
using a multi-language that combines the source
language and target language and allows them to interoperate. And I’ll walk you
through both of them. So Compositional CompCert. Here’s the diagram from the
Compositional CompCert paper. Please don’t read all of it. But figure 2, interaction
semantics interface– I’m just going to explain
to you, at a high level, how they’re doing things. They have this idea of
language-independent linking. For now, you can think
of it as any language in the whole
CompCert pipeline can be linked with code from any
other language in the CompCert pipeline, or any
other beyond that, as long as they’re sort
of conformant in some way, but I’ll get to that. So how does it work? How do they interact when
you put them together, these two different languages? Well, they had this idea of
starting off in an initial core state, and this is your
language, and you’re running. Let’s say language
A and language B. Language A is
running, and it’s doing something to the
core state, the program configuration. And then there is
this interface which thinks of interacting
with the external– with the other language. This is specified semantically
in terms of behaviors, not in terms of
syntax of languages. And basically, the idea here is
that other language B component that we are interacting with
can do some interference, but there are certain
bounds on what it can do. What this interaction
semantics does, its main value is
that it specifies what those bounds are. Formalizes– gives
a specification for what’s allowed
and what isn’t. And then you can run
your B component. But then you come
back, and you should be able to come back to
a core state A component. That’s all that’s really
going on in this figure. You don’t have to read the rest. So language-independent
linking and this piece, structured simulations. So, remember we have to do
multi-pass compilation, right? What they do in the
compositional CompCert work is they came up
with a simulation relation between source
programs and target programs. Think about– and this is
going to be the same structured simulation that
you’re going to use to verify every single pass
of the compositional CompCert compiler. Because if you use
the same relation to verify every single pass,
you can get transitivity kind of for free very
easily, all right? Now, what is this
relation doing? It’s essentially, I would
say, Gordon, and Lennart, and whoever, all the
Princeton people that kind of sat down for two, three years. And they looked at every pass
of the CompCert compiler. And they looked at what are all
the funky transformations that are going on? And whenever these
transformations happen across one pass,
if I need source code from one pass to be interacting with code in a much lower pass, what are sort of the rely-guarantee invariants that need
to hold between these? And they took all of these
rely-guarantee requirements for all the passes,
and they injected them into one structured simulation. It’s a massive design
problem, right? And then, they essentially
used that structured simulation to verify every single pass. And they get
multi-pass compilation or vertical
compositionality for free. OK? Now what’s interesting
about CompCert is– the hard part there was
CompCert has these passes where you do memory injections. Memory is the hard part. OK, memory is always
the hard part. You have a lot of memory,
and you do memory injections into a certain state. There are other passes
which do memory extensions. And getting these to reconcile
into one structured simulation was the hard part. OK, so, the
important bit to note here is that transitivity
of this compiler, right, which gives you multi-pass
compiler correctness, relies on the compiler passes
performing a restricted set of memory transformations. So in other words,
if I come along and if I introduce
one other funky pass into the compositional
CompCert pipeline, and this does something that
neither memory extensions or injections were
doing before, I would have to go and change
the structure simulation. And I would have to
then use the new one to verify every pass, OK? All right. The other interesting
bit is that– so this is actually a
big thing about CompCert. The reason CompCert
has been feasible is because all of the languages
in the CompCert pipeline use exactly the same
memory model, right? It’s because C and Assembly
are not all that far apart– that’s my intuitive way of describing it. So it’s not clear,
of describing it. So it’s not clear,
with this approach, how you would scale the
compositional CompCert methodology, this
interaction semantics thing and structured
simulations, to richer source languages. So if you were compiling
ML down to Assembly, you’re dealing with very
different memory models. And there’s no way
to get around that. OK? So let’s talk about
the next piece of work, which actually does
let you do compiler verification for where the source
and target might have very different memory models. OK, so this is our work. And basically, we’re
back to the same problem of how do we allow linking
between source and target? What does this
interoperability mean? We’re not going to do
language-independent linking. Instead, we’re going to
say if you have a source and you have a target, and you
need to make them interoperate, we’re going to define a
multi-language semantics, which takes the source language
and the target language, globs them together, and adds
two constructs, two boundary forms. The ST boundary, which lets
you take target components and put them into
a source context, surround them with source code,
and a source, the TS boundary, which allows you to
take source components and surround them
with target code. OK? And there are
established principles for how you define
multi-language semantics like this. They sort of date back to
this paper by Jacob Matthews and Robbie Findler from POPL 07. I’ll give you a little flavor
of that, but not too much. So, the benefit then
is that once you’ve done all this work
of specifying how source and target interoperate,
this interoperability is clear. You just take your
target component that someone gave you, you
wrap a boundary around it, and you know exactly– and you already have
a formal semantics for how these two pieces
of code will interoperate. You know what it means to
link them together and run. Any question? Yeah. AUDIENCE: This may be
kind of a dumb question, but what’s the distinction
between this sort of wrapping versus the
interference model that was used before? AMAL AHMED: The interaction
semantics before? Yes. It’s one of syntactic versus
semantic interoperability. The compositional
CompCert thing, they don’t actually tell you
what the syntax of the two languages is. They lift things up out
into the semantic level. And actually, the
SepCompCert people have sort of come back
and criticized that. They’ve said that this
is not straightforward because normally,
we think of linking as taking two syntactic
pieces of code and putting them together. So how does the semantic thing
map to the syntactic thing? That is a bit of a
gap, I think, still. So, OK, yeah, so this
is very syntactic. OK. So once you’ve done
this, you basically get what you would like. That et linked with
et prime should be contextually equivalent,
equivalent in any context, of the multi-language. This is contextual equivalence
for the multi-language between source and target. And on the other
side, this should be equivalent to es linked
with that ST of et prime. But then notice that I have
to put a TS boundary around it because I have
target on this side, so I need target
on the other side. OK? All right. And then, here’s how we
take competitive practice. If es compiles to
et, then es should be contextually equivalent
to a boundary wrapped around the target code. And that captures
the right notion of equivalence that you want. Now you can link with any et
prime that someone gives you. OK. The skills to
multi-language compilers, but this is how
it needs to scale to multi-language compilers. If you have a source
intermediate in a target, then you have to take
them all and put them into one single
multi-language, all right? And we can call that SIT. There are boundaries. The way we do this, to be
modular, is for every pass, there will be boundaries
that take you from S to I and I to S. And
for the next pass, we have I to T and
TI boundaries that let you mediate between those. If you want to put one
of these purple T code into the middle of an
S context, well, you put an IT boundary around that
and SI boundary around that. And you put it in there. OK? Boundaries stack up in
an obvious sort of way. OK. And then when you prove
compiler correctness, you prove for the first pass
that if es compiles to ei, then es is equivalent
to wrapped ei. And similarly, for
the second pass. And notice that this is
contextual equivalence for SIT. This is also contextual
equivalence for SIT. Each pass of the
compiler is being proved correct with the
same contextual equivalence of the full multi-language
for all of the passes. All right? Now we can get transitivity. How do we get it? Well, if we put an SI
boundary around that, you can see that es
is related to SI ei, and this is related to the two
boundaries stacked up, right? So in the end, you
get es is contextually equivalent to– this is just
my short form for SI IT of et. OK?
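[Editorial sketch: with per-pass results $e_s \approx_{\mathrm{ctx}} \mathcal{SI}(e_i)$ and $e_i \approx_{\mathrm{ctx}} \mathcal{IT}(e_t)$ in the one multi-language, congruence of contextual equivalence gives

$$e_s \;\approx_{\mathrm{ctx}}\; \mathcal{SI}(\mathcal{IT}(e_t)),$$

which is the end-to-end statement.]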
You get transitivity. You get multi-pass compiler correctness. And of course, if someone
gives you an et prime, you can just link that in. Interoperability is defined. I’ll come back to drawbacks
in just one second. Let me just quickly tell
you what we did with this. So, we wanted to try out if
this methodology even works. And so what we did
is we essentially built a type-preserving compiler
from system F, basically a polymorphic language without
any mutable references. We do closure conversion. We do heap allocation. We do code generation down
to a typed Assembly language. Very much in the spirit of
the original System F to Typed Assembly Language paper. Which is fun, with Dan in the audience. AUDIENCE: I’m not on there. AMAL AHMED: I know
you’re not on that paper. But you were there. AUDIENCE: I showed up late. AMAL AHMED: Yeah. And this is a type-preserving
compiler all the way. So I have a source type blue
tau is being translated down to the closure
conversion language C. So we have types being
translated at every pass. And then, of course,
we do the obvious thing that I’ve already shown you. We specify boundaries
between every pass. We called our
multi-language FCAT. Let me tell you things that
I haven’t told you yet. These boundaries
that we specify when we have to formalize the
multi-language semantics, the boundaries actually mediate
between things of source type tau and things of translation
type on the other side. So we’re using the fact that we
are building a type-preserving compiler to get sort of a more– we can type check the code
that you put under a boundary to know whether it’s
OK for it to be there. It won’t actually error. If this orange e here has
translation type, translation of the blue type
tau, then it’s good. You can stick it under there. This is a perfectly valid,
well-formed program. You can imagine, this can
be used to very good effect by the time you get down to
the type assembly language. Because type assembly
language, you can use the type system
to encode various calling conventions and so on. So now, sticking Assembly
into the middle of, ultimately, the high-level program should work– when it makes sense. OK, so the boundaries
mediate between the type and its translation. How do we design the
operational semantics? The principle
generally is the same. So I’ll just show it to
you for this F and C pass. If I have some component
under this boundary, what I almost always do, in either case, is I evaluate it. So in this case, I evaluate
it using the blue language semantics down to
this blue value v, using the F language semantics. Once I have a v
there, that’s where I have specialized boundary
conversions happening. OK? These are indexed by the type. So if this happens
to be the number 5 and this is the type int,
then I will convert it– I don’t know. Let’s say this was Assembly. If it’s 5, I would convert
it to 101 or something. Just to take an extreme case. If this was a function and
this was a tau 1 arrow tau 2, and maybe on the other
side, I have Assembly, which I don’t here, but– actually, let’s not do Assembly. But if I had a
function here, then what I want to produce here
is I will keep the function on the inside. I’ll do what you do
with contracts, right? So I take the underlying
function and I wrap it so that I leave
the blue function inside. I put it in an orange wrapper. The orange wrapper knows
to take orange inputs, convert them to blue. You run the blue
function on them. Take the blue output,
convert it back to orange, and give it back to the
orange that’s surrounding it. OK? All right.
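[Editorial sketch of boundary reductions in the Matthews-Findler style, using the C-F boundary as the example; my notation:

$$\mathcal{CF}^{\tau}(e) \;\longmapsto\; \mathcal{CF}^{\tau}(e') \quad\text{if } e \longmapsto_{F} e'$$

$$\mathcal{CF}^{\mathrm{int}}(n) \;\longmapsto\; \overline{n} \qquad\qquad \mathcal{CF}^{\tau_1 \to \tau_2}(v) \;\longmapsto\; \lambda x.\; \mathcal{CF}^{\tau_2}\big(v\;(\mathcal{FC}^{\tau_1}\, x)\big)$$

that is, evaluate under the boundary in the inner language, then convert the resulting value by type, wrapping functions rather than translating their bodies.]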
OK, now, not any multi-language semantics will suffice. You need to design a
multi-language semantics that has this property. I mean, suffice for
compiler correctness, right? You’re designing this
multi-language semantics to serve as a specification
of compiler correctness. So not any nonsense should work. Right? Here is an essential
property we need, which is that if you
have a component, you wrap it one way
and then the other. It should remain
contextually equivalent in the multi-language
to the underlying thing, and the other way, too. And this holds for all boundaries.
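[Editorial sketch of that boundary-cancellation property, my notation:

$$\mathcal{ST}^{\tau}\big(\mathcal{TS}^{\tau}(e_s)\big) \;\approx_{\mathrm{ctx}}\; e_s : \tau \qquad\text{and}\qquad \mathcal{TS}^{\tau}\big(\mathcal{ST}^{\tau}(e_t)\big) \;\approx_{\mathrm{ctx}}\; e_t,$$

and likewise for every other pair of boundaries in the multi-language.]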
OK. There are interesting problems that come up. I’ll just tell you
briefly what they were. When you have two languages
that both have type abstraction, this becomes really difficult
to make them interoperate. Like, what is the right way
to do that interoperability was an open question. To be honest, what we
put in our ESOP 14 paper is an answer that we ended up
not being happy with later on. And hopefully, in the
next two, three months, we’ll publish an updated
version which works much better. But I won’t say more. Interoperability between
the C and A, right? So this pass is taking closure
converted code and a heap allocating it. So something that’s a
tuple or a function here is suddenly on the
heap here, right? So what are we going to
do when we interoperate? And I just want to
show you one scenario. If you have a tuple
in this language, and you put a boundary around
it, which takes it to this. That boundary, when you run it,
it’s going to take the tuple and allocate it in the heap. If you take the same tuple
again and run the boundary, it’s going to put it in
another place in the heap. In other words,
we’re not memoizing. OK? We’re not saying
this is a tuple, and we’re only going to
put it on the heap once. We’re just doing
the naive thing. Because we want our
multi-language semantics to just be a specification
for compiler correctness. If you wanted to do a
more efficient thing, you would design it differently. You could memoize,
but we didn’t do that. AUDIENCE: Oh. AMAL AHMED: Yeah. AUDIENCE: I mean, I will
vigorously defend this. AMAL AHMED: Good. Why? AUDIENCE: Because
a correct compiler for a language with
immutable tuples can introduce hash-consing or not. And both compilers are correct. AMAL AHMED: Correct, exactly. AUDIENCE: And so you should
in no way be apologizing. AMAL AHMED: Yeah. It’s just something
to point out. Because people
think of, oh, it’s an interoperability semantics. Can I go run it? And I’m just saying it’s not
ideal for running efficiently. It’s serving as a
spec here, yeah. But yes, good point. OK, the last question– sorry,
I talked about that one. This was one of the hardest
things we’ve tackled. All right? So I’m going to talk about
it, at least at a high level. When you try to define
interoperability between the language A and
the language T, until here, the heap allocation
language, you still have high level
program structure. And you have direct
style control flow. OK? Assembly is not compositional. Assembly is a bunch of
basic blocks with a bunch of instructions in them. And we didn’t want to cheat. We did not want to change
the nature of our assembly. OK? So when you try to
define interoperability between a higher
level language– that’s direct
style, an assembly. Lots of questions come up. First question
that should come up is what is an
assembly expression? There’s no such thing. And yet, a slide ago,
I told you that, oh, the way we run these
boundary semantics is that if I have this–
this is a purple e. If I have it sitting
under a boundary, then I’m just going to
run the e all the way down using the purple
language semantics until I get a purple v. And then
I’ll do a conversion. But at this point, you should be
like, well, what is a purple v? What is a value
form in assembly? Right, they’re a thing. There are word values
and things or heap values that sit in the heap
or in registers. But there is no value form. So when do you know when you’re
ready to do the conversion? It’s an interesting
question, and we answered it. I won’t tell you how because– yeah, I don’t have
slides on that in detail. Then there’s this
question of how do we define contextual
equivalence for TAL components? And here, the
interesting bit is this. So when I say the assembly
is not compositional, I want to draw your attention
to the following scenario. You might have two
equivalent source programs that you compiled
on to assembly. But maybe one of them
was doing an extra F & L. That was unnecessary, so
you get two basic blocks here, and here, you get
three basic blocks. The original source
components were equivalent. The compiled version
should be equivalent. But now we’re talking
about setting something up for assembly where
a two basic block component is equivalent to a
three basic block component. How do you allow for that? There’s nothing in the
structure of assembly that lets us talk
about equivalence beyond the single
basic block level. So we have to
figure out how to– again, without
changing assembly, we added something
to the type system. It’s called return markers. And I’m not going to
go into it too much. But essentially, with a simple
change to the type system, we were able to distinguish– not distinguish, but be able
to tell when a component ends on the assembly side. And then we’re able to
reason about equivalence between TAL components
that have different numbers of basic blocks. Yeah. AUDIENCE: One thing
that was challenging when we were working on
[INAUDIBLE] optimizations in CompCert was that at the assembly level, the size of the program matters. Because if you, for example,
take PC offset jumps, and then you change
size of the program, then the offset changes. So are there also
sort of restrictions on the assembly language? You gave us example of that,
like three basic blocks and two. This is the kind of thing
where all of a sudden, the PC matters. And now if I
substitute something in in a different
context, where it’s running programs
of different size, that could change the
behavior of the component. So do you just make sure that
those assembly components don’t have that ability? Like, are such programs not well typed, or– AMAL AHMED: So you’re basically
saying that does our assembly allow you to jump where– no, take the program– AUDIENCE: The way I
usually boil this down is that an assembly
language, literally the size of the assembly program
is pretty easily observable. AMAL AHMED: Yes, exactly. And you can do
things based on that. So in that sense, our
assembly is idealized. We are not observing
the size of programs. Yeah. AUDIENCE: So you’re on
a slightly higher level. AMAL AHMED: Yeah, we’re
slightly higher level. AUDIENCE: Which is fine. AMAL AHMED: We really
are at the sort of TAL from the original TAL paper level, or stack-based TAL. AUDIENCE: But you
want to allow, even at that level, I was
going to ask this, perhaps for the
equivalence, it’s fine. A correct compiler can add
salt, and add a bunch of junk, and add unreachable code. I mean, they actually
sometimes do. AMAL AHMED: Obfuscation. Yeah, exactly. And that they should be
considered correct or whatever. Equivalence would be defined
by a logical relation for this. But however you want
to prove equivalence, they should be equivalent. So, actually, yeah, so
really the main challenge here in a nutshell was you’re
trying to do interoperability between the direct style
language and a language in which you just have
sort of continuation style. You’re just jumping. At the end of basic block,
you just jump somewhere else. There’s nothing really to
distinguish jumps and returns. So we do something
in the type system to identify what is
an actual return. So we can get nested component
structure and nested call structure. OK, and this FunTAL paper
mixes a functional language with a typed assembly language. It’s fun. OK. I just want to quickly
compare compositional CompCert and the multi-language approach. OK? For transitivity, they use
structured simulations, the same structured
simulation for all passes. We use the same multi-language
contextual equivalent for all passes. So we’re not better. We’re doing the same
cheat that they are. OK? That’s something to think about. I can call it a cheat now
because when we did it, I don’t think either
party understood what the other one was
doing or was fully aware of what we were doing. And I’ll come back to that one. How did we each check whether
et prime is OK to link with? For them, it
essentially boils down to satisfying constraints
imposed by the CompCert memory model. For us, something has to be
of an appropriate translation type. OK? They require a uniform memory
model across all of the IRs. We do not. We actually showed an
example of linking with– we did the ML to assembly. And then allows linking
with behavior inexpressible in source. This is where I wanted to say
we actually showed an example of linking with code that
cannot possibly be expressed in the source. So our source doesn’t
have mutable references. We have linked with
the stateful counter to exhibit that
this is possible. With them, I think
the answer is no because C and
assembly are already sort of equi-expressive. I don’t think there’s
anything there. So this is maybe a maybe no. I don’t think they
really allow linking with anything that’s not
expressible in their source. But maybe I should
mark that a maybe no. Yeah. AUDIENCE: So what about mirror
features of assembly DNAs or if you’re going to make a
driver access to a graphics card or something like that? Like how do you model that? Is it just like–
because I mean, you could model
it as like, oh, I have some RAM that
it has access to, or maybe I have
some memory on IO. Random bytes are
going to appear. Or is there a more structured
way to deal with that? AMAL AHMED: To be
honest, I don’t actually know what the answer is
for compositional CompCert. I would think that
it’s the whatever is being modeled in the
CompCert memory model and by the
operational semantics, if you’re boiling
it down to RAM, I think that’s how they do it. But I don’t know
the answer for sure. I thought you were
going to bring up things like setjmp, longjmp, and whatever. AUDIENCE: Oh yeah. I think there’s a whole host
of weird stuff that can happen. And usually, people’s
first foray– oh, I can’t. I want the darn cycle counter. Right? It should be the Hello World. [INTERPOSING VOICES] AUDIENCE: RDTSC Hello World. Yeah, no, what’s the RDTSC when I read it? But remember, on
all of this, right? I mean, this is about how do our proofs scale to separate compilation? How do we make assumptions
about other things? I don’t think anybody
is up at night worried that if you use
Intel’s cycle counter that you will break CompCert. AMAL AHMED: Yeah. There may be a way, right? But then maybe try, yeah. AUDIENCE: Question? AMAL AHMED: Yeah. AUDIENCE: So, in the
timeline slide, right, you have the arrow going– I mean, as you go to the right, it seems to be less constrained. AMAL AHMED: Yes. You can link with more and more things. AUDIENCE: Yeah, but also,
the year seems to go down. AMAL AHMED: He noticed. I was wondering if
someone would notice. How many other people did? It’s funny that that happened. Those papers came out. And literally– even
the two papers in there, they’re both marked 15,
but they are I think– yeah. The order is exactly reversed. And here it is. Yes, OK. So this was POPL 15,
and this was ICFP 15. So it came after. AUDIENCE: Interpretation
right, with 1900 or something, whatever. AMAL AHMED: So we
put that there. Yeah, no, I mean, it’s
a good question, right? You could phrase it
as wait a second. If there were people
writing papers that allowed linking with
more, then how come people were publishing papers that
allowed linking with less? But I don’t think that’s the
right way to think about this. I think this is a
sort of new area where different people are
trying different things. And in particular, this
paper, the SepCompCert paper, made a very different point. They said that all of these
compositional compiler correctness results, they take
a lot of effort to mechanize. All right? And we are going to show you
their whole contribution, if you look at the
intro to the paper is. They’re trying to say
we’re going to show you how to take existing
CompCert and make it handle separate
compilation, nothing more ambitious than that. But we’re going to
do it and within– it took us– I don’t know how many– one man year or maybe
something like that. Maybe even less than that. And that was the great
part because then that got built into CompCert. And the funny thing is– for me, the sad thing is the
compositional CompCert part did not get built into
mainline CompCert. But I think compositional
compiler correctness is still very much an active area. Yeah. AUDIENCE: I don’t remember
which of these works you were discussing. You talked about how some of them required that things come from the same source language, and that et prime could not necessarily be expressible in the source, right? Going back to the C and assembly world, you can imagine that you get some assembly code that was spit out by who knows what. And I cheekily thought that, well, most C allows inline assembly. So you could easily construct a C fragment just by taking that assembly and embedding it inside the inline assembly. And that C is actually sort of morally equivalent to what you've actually done– AMAL AHMED: In the multi-language. AUDIENCE: In this
combined semantic world. Is that broadly speaking? AMAL AHMED: Broadly
speaking, that’s about right. Yeah. But the problem with
C, as we know it, handling inline assembly
is that it does not have a good semantics, right? And so if you can– right. Right, if you can
do that, if you can have a good semantics
between your source and your target, and
your source already supports embedded
assembly, great. That’s perfect. That’s kind of what
we’re after, right? I mean, part of
the message here is that what’s hard about
compositional compiler correctness is– and what I think
we would like– is to be able to say
what it formally means to interoperate with
target code that might have come from anywhere, right? It’s this target code. And give a formal
semantics for that. If you can do that, you
can specify the compiler correctness theorems in a better
way, and a lot more linking. Yeah. AUDIENCE: So, on this theme–
so that makes a lot of sense. Like you described about
the types can capture things like calling conventions. So I can make sure they fit in there. I jump into it. It's going to look
like a C function. Everything makes sense. But the other kind
of things sometimes I might want to link
in is like the garbage collector or something, right? And it's going to do wonky stuff, like go back and walk the stack, rewrite the pointers, do goofy stuff. Can I give that a type that
allows it to be embedded into the trend, like the– AMAL AHMED: My
conjecture there is that you need a more
sophisticated type system in the target
that is able to capture much richer invariants. If your type system is more like a logic, rather than a plain type system– or a dependent type system that's very rich– then I think, yeah. It's a matter of writing
down the invariant. AUDIENCE: Right,
like whatever state the stack ends up in is equivalent,
with respect to what the program would have done. AMAL AHMED: Exactly. What are the program’s
restrictions on how the stack can be altered? If you can express that
in your type system, if it’s rich enough
to do that, you’re OK. OK, so, yeah. AUDIENCE: Sorry. Just to go on that. I think you would
end up expressing all the things you’d normally
express in a garbage collector. Is it conservative, or
does it know stack maps? Those stack maps are basically
going to be in your types. AMAL AHMED: They have to be. AUDIENCE: So the interface
is going to be reflected. Which would let you what? Which is like a
calling convention. Except here, it’s the interface
for the garbage collector. So it does the right
thing, but it’s just the types are
going to be fancier. AMAL AHMED: Yeah, in order
to be able to write them. Yeah, exactly. Because the invariant
depends here. OK, so let’s talk about what
it takes to prove transitivity in all of these
three pieces of work. It’s a lot of work. Pilsner, there’s a lot
of effort involved. And we talked about
possibly quadratic if you have more, right? And if you have more
languages, that will be bad. With compositional CompCert
multi-language approaches, we paid the price by designing
an interaction semantics, in their case, and the
multi-language semantics, in our case. But in every case is either
effort, or engineering, or both, right? So I’m going to do
the same comparison I did for horizontal
compositionality and linking. I’m going to do this for
vertical compositionality and this transitivity
property, right? What do we mean when we say
vertical compositionality? I claim this is the
picture we want. I would like to be able to
verify my pass from source to intermediate, using just
some sort of SI relation. I would like to be able
to verify the I to T pass by using an IT relation. Vertical compositionality
means not knowing what’s happening in the other layer. At least, that’s
what it should mean. Once you have this, it
should give you this: that from S to T, you're related according to the ST relation, right? So with vertical compositionality, we are compositional with respect to the other layers of the compiler, right? We shouldn't have to know what the target languages are. I should only need to know what's happening in my pass. But this is not the world we live in. This is not what we have.
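As a rough sketch of the property being asked for (with the relation names improvised for illustration, not taken from any of these papers), vertical compositionality would let per-pass correctness results compose:

    \[
      e_S \sim_{SI} e_I \;\wedge\; e_I \sim_{IT} e_T
      \;\Longrightarrow\;
      e_S \sim_{ST} e_T
    \]

where each relation is the correctness relation for a single pass, and the S-to-T relation can be stated without mentioning the intermediate language at all.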
Instead, compositional CompCert and our own work have essentially taken
the approach that we’re going to come up with one big– in our case, SIT multi-language
contextual equivalence, in their case, the
structured simulation, which is doing the same thing. And use the same
one across passes. So we get the
technical condition of transitivity, which is
enough for multi-pass compiler correctness. SIT, SIT, and that
gives you that. Great. But I would claim that it would
be nice if we could actually sit, build compositionally
correct compilers that sit in these two squares, that
are vertically compositional and support source
independent linking. That’s not what we have. Pilsner sits here. The other two sit there. So, open research question,
how do we sit here? Because if we can
do that, then we’re good on many different fronts. And maybe proof
efforts will be less. Whatever the proof is will
be easier, especially in the mechanized setting. I don’t know. That’s speculation. And then let’s
talk about what you need to know in
order to understand if the theorem is correct. So for Pilsner,
as a reviewer, you need to understand the
whole source-target PILS. It’s not even in the paper. Compositional CompCert,
you need to understand the interaction semantics
and structured simulation. It’s not in the paper. For multi-language
ST, right, you need to understand the entire
source-target multi-language. A lot of it is not in the paper. So, I bring a
little bit of like, we need social processes
into this part of the talk. That we as reviewers do need to
understand whether some group has got the theorem right. Coq can't check that for us. It can check that we proved
the theorem we wrote down, but it can’t check that for us. So, a question that my
student Daniel Patterson and I started asking is, is
there a generic CCC theorem? Is there a generic way
of saying what we want? And this is what
we’ve come up with. I’ll show you what it is. I don’t know if it’s
definitive or the final word. But let me sort of
walk you through it. OK, so I want to express
compiler correctness: es compiles to et. How do I express that? Let me bring out all of
the ingredients that I’ve showed you so far in this talk. In order to specify
compiler correctness, we need to sort of– I’m going to parametrize my
CCC with a bunch of things. You first need to
specify what you’re allowed to link with, OK? What et primes you’re
allowed to link with. And I’ll write this as
every compositional compiler correctness result needs
to specify a linking set. There’s a mysterious phi here. Normally, you need to say what
et primes you can link with. But for certain frameworks–
in particular, Pilsner– you will need some more
evidence that this et prime is OK to link. And I’ll come back to that. The next piece, what links
with our source component at that level? Pilsner allows you
to link source. Compositional CompCert allows
you to link with pure target. And we allow you to
link with things wrapped in multi-language boundary. So, for whatever
you are, whatever framework you’re using, you
need to specify what we call a source-target linking medium. Pick your choice. This is the framework. I’m putting in pieces
of the framework that you get to instantiate
as the compositional compiler writers. OK? For however you want
to specify this. You will need to specify a
lift function, which allows you to go from et primes– maybe you’re taking this into
account in your linking set– to your source-target
linking medium. For multi-language, that’s
wrapping a boundary around it. That’s the lift function. For Pilsner, where do
you get the es prime? We’ll come back to it. OK. You need to specify what S
hat– the source-target linking medium– linking with S means. You need a formal spec. You need a formal spec of target-level linking too, but we always have that, right? OK. So those are the things that my framework definition is going to be parameterized with.
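To see the shape of these parameters at a glance, here is a rough OCaml-flavored sketch of what a compositional compiler correctness result has to supply. Every name in it is invented for illustration; none of this is taken from the papers under discussion.

    (* Illustrative sketch only; all names are made up, not from any paper. *)
    module type CCC_PARAMS = sig
      type src        (* source components, the es's                        *)
      type tgt        (* target components, the et's                        *)
      type medium     (* the source-target linking medium, S hat            *)
      type evidence   (* the mysterious phi, e.g. Pilsner's related source
                         term together with its relatedness proof           *)

      val compile : src -> tgt

      (* the linking set: which (et', phi) pairs you are allowed to link with *)
      val in_linking_set : tgt * evidence -> bool

      (* lift a permitted target component into the linking medium *)
      val lift : tgt * evidence -> medium

      (* formal specs of linking, at the medium level and at the target level *)
      val link_medium : src -> medium -> medium
      val link_target : tgt -> tgt -> tgt

      (* "behaves the same when you run them", used to compare the two sides *)
      val same_behavior : medium -> tgt -> bool
    end

The theorem that follows is then stated over any instantiation of these pieces.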
And then here's my theorem; I'm just going to wave my hands and sort of walk through it. CCC says that there should
exist a lift function. That’s where we start. There should exist
a lift function such that if you compile some es to some et, then given any et prime,
with some evidence from the linking set, if we lift
that et prime up to this level, then linking es
with the lifted et prime is the same as linking– when you run it,
it behaves the same as et linked with et prime. So what this compiler
correctness theorem is doing, statement is doing, is what
I’ve been doing from the start. Taking correct
compilation of components and trying to state it in
terms of whole program. We run them, and
they behave the same. Whole program correctness. And there’s an extra
technical condition, which is actually really important. That this lift function– I just said there
exists one, right? It needs to be constrained in some way. And we tried to come up with
a minimal requirement on this. And we simply said
that this lift has to be the inverse
of compile on code that your compiler has produced. So if your compiler can produce this particular et that's sitting in the linking set, then lifting it back up to source level should give you something that's equivalent to the original source program that you compiled. Yeah.
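Putting those pieces together, and with the notation improvised here rather than copied from the paper, the statement she is waving at reads roughly:

    \[
      \exists\,\mathit{lift}.\;\;
      \forall e_S\, e_T.\;\;
      \mathit{compile}(e_S) = e_T \;\Longrightarrow\;
      \forall (e_T', \varphi) \in \mathcal{L}.\;\;
        e_S \Join \mathit{lift}(e_T', \varphi) \;\approx\; e_T \bullet e_T'
    \]

Here \(\mathcal{L}\) is the linking set, \(\Join\) is linking in the source-target medium, \(\bullet\) is target-level linking, and \(\approx\) says the two resulting whole programs behave the same when run. The extra condition on lift is that, on compiler output, it undoes compile up to equivalence:

    \[
      \mathit{compile}(e_S) = e_T \;\Longrightarrow\;
      \mathit{lift}(e_T, \varphi) \;\approx_{\hat{S}}\; e_S
    \]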
AUDIENCE: Sorry. Compile is a function from S to T? AMAL AHMED: Compile is a function from S to T. AUDIENCE: And lift is from T to S? AMAL AHMED: This is my compile. And lift is– actually, it's from T paired with this evidence to S hat. Yes. Yeah. AUDIENCE: So it's not
(inaudible question) AMAL AHMED: Yeah. My inverse is– actually,
the word inverse should be in quotes,
not the other two. Yeah. And it’s not literal inverse. I just want something– if you compile
and then you lift, you should be in the
same equivalence class as the original code
that you compiled. Inverse in that sense. Yeah. AUDIENCE: So, is it possible
that I have some S hat that can be linked
with S, but then there is no et prime that can
be lifted to S prime? AMAL AHMED: Say that again? AUDIENCE: Lifting is
only one direction. AMAL AHMED: Is it
possible that– just say it again? AUDIENCE: So here, we
have some et prime, and then we can always– so, if there is just– AMAL AHMED: If it didn’t
come from your compiler, then I’m not telling
you anything about what lifting it is giving you. As in I’m not building
any conditions on my lift operator, which
tells you anything about that. And that kind of
makes sense, right? Because if it’s code that
didn’t come from your compiler and didn’t come
from your language, what can I say about it, right? OK. All right, so once
we have CCC, it implies these nice things,
like whole-program compiler correctness and separate
compilation correctness. And you can instantiate it
with different formalisms. And that’s, I think, where
the value of this framework emerges. So if you look at how Pilsner
would have instantiated this, here’s what you’d see. You see that your linking
set would not just have et primes in it, but
also some evidence: you actually put in that es prime and the proof that it's equivalent to et prime. You have to do that. OK? Only then can your lift
function take that. It ignores the proof, but it
just spits out the es prime. OK? That’s how we do lifting. Their source
language, S, is S hat. For the multi-language approach,
et prime is in the linking set– I should've written that it's in the linking set if it has a translation type. And then lifting it is just wrapping it in a boundary. OK? But of course, S hat here is more complicated, because it's the multi-language.
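To make the contrast concrete, here is a small OCaml-flavored sketch of the two lift instantiations just described. Every type and constructor below is a placeholder invented for illustration.

    (* Placeholder types standing in for components of each language. *)
    type src = Src of string    (* a source component                 *)
    type tgt = Tgt of string    (* a target component                 *)
    type proof = Proof          (* stand-in for a relatedness proof   *)

    (* Pilsner-style: the linking medium is the source language itself, and
       the evidence phi is a related source term plus a proof; lift ignores
       the proof and just hands back that source term. *)
    type pilsner_medium = src
    let lift_pilsner ((_ : tgt), ((es' : src), (_ : proof))) : pilsner_medium = es'

    (* Multi-language-style: the medium is the combined language, and lift
       wraps the target component in a source-target boundary. *)
    type multilang_medium = S of src | Boundary_ST of tgt
    let lift_multilang (et' : tgt) : multilang_medium = Boundary_ST et'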
Here's the nice thing that falls out, right? Vertical compositionality,
when do we get it? We’re able to show that
if your lift function can take arbitrary et primes
without looking at the phi and back translate
to the source, OK? You can actually back translate
arbitrary target programs to the source program. Then you basically get vertical
compositionality for free, OK? Now, why is that? Well, essentially because–
so for those of you who know what fully
abstract compilers are. Fully abstract
compilers are when– I'll show it on the next slide; I'll just say it here. Fully abstract compilers have such a back translation, right? Now, this is saying that
everything from the target should be expressible
in the source, that you’re able to do that,
which is pretty strict, right? Because if you can
do that, then how will you link with code
that’s not in your language? So, here’s what fully
abstract compilers are. They’re equivalence
preserving, so if two things are equivalent in the source,
then their compiled versions are equivalent in the target. That's the definition. And vice versa. The backwards direction is easy.
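Written out, as a sketch that reads "equivalent" as contextual equivalence in each language, full abstraction is the two-way implication:

    \[
      e_{S_1} \approx^{\mathit{ctx}}_{S} e_{S_2}
      \quad\Longleftrightarrow\quad
      \mathit{compile}(e_{S_1}) \approx^{\mathit{ctx}}_{T} \mathit{compile}(e_{S_2})
    \]

with the right-to-left direction being the easy one, as just noted, and the left-to-right direction, equivalence preservation, being the hard one.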
But here's the thing. Fully abstract compilers, by definition, make sure that compiled
components cannot interact with any target behavior that cannot
be expressed in the source. Now you would think that
I’ve sort of defeated my whole multi-language
software mantra, right? I’m saying, oh, I want
fully abstract compilers to get vertical
compositionality. But now, that means I can’t
link with code that was written in another language and cannot
be expressed in my own, right? OK. It does give you
the nice property that programmers can
reason at the source level. But OK. So, and really, the question
that arises at this point is do we want to
link with behavior that’s not expressible? And that’s where do we want
fully abstract compilers. Giving programmers
the ability to reason in their own language. And I would say
that we want both. And that’s really
the sort of agenda that my group and
I have been trying to figure out an answer to. And I’ll just take
two more minutes of your time to sort of
tell you what we’re doing. So, I want to sort of step
back and say something about the current
state of PL design. Right now, we design these nice type-safe languages. We go off and prove theorems
about them or about subsets of them. But the thing to note here
is that these language specifications– right,
standard ML has a full book. It’s fully specified. But these language
specifications are incomplete. They don’t account for linking. And when we go off and try
to implement these languages, we immediately realize
that we must allow linking with some other code. And so we open up these gigantic
escape hatches, all right? With ML, there’s C FFI. With Rust, you can link with
unsafe code that’s sitting in the center libraries. With Java, there’s
the JNI, right? So, there’s a
sense in which if I say I want my programmer to be
able to reason in their source language, which I said on
the previous slide, right? How? It’s not part of
the spec, right? So, something that we
are trying to think about is how can we take our current
language specifications, and instead of opening up
these gigantic escape hatches, instead, think about designing
more principled escape hatches, OK? In particular, we’re calling
this idea linking types, but it's at the
source level, not at the target level, where
linking happens, all right? It’s about interoperability. So we want to take existing–
a language that you design. You should be able to think
about what extensions might you want to add to
this language that give your programmers
a way to reason about how they wish to interact
with features that aren’t available in their language. I know it’s kind of like
growing your language. But if we do it in a
systematic way, where we do it only with type annotations
and not with the ability to just write arbitrary
code, I think that you get very nice structure. We have a paper on it. You can read that. I won’t go into detail. But the key idea is
safer interoperability with features that don’t
exist in your language. So you could do this with ML. Let’s say your ML did not
have first class control. But you want it to
interact with Scheme, which does have first-class control. Can you make continuation types a first-class construct in your ML? OK? Enrich it with that. And what it allows
programmers to do is– 90% of the code base will remain as it was in the original language. But in a couple of spots where they need to interact with code that is doing something with first-class continuations, maybe they can sprinkle that annotation: here, I'm happy being more permissive. That's the idea.
But this is a massive design problem. We don't fully know how to do it for all sorts of features. We've tried it with state
and a couple of other things. And the key idea is you only
need linking type extensions to interact with
behavior that’s not expressible in your language. OK? And you should only
have to do it once. So if you want to link with
five different languages that all have some sort of
linearity or affine types, you should just need to
properly add affine types to your language once, OK? Now, there’s Rust. You might add
affine types to ML, because ML doesn’t understand
Rust’s notion of ownership. Rust already interacts
with unsafe code. How could you make
sense of that? You could take the
core Rust type system and enrich it with some idea
about fine-grained capabilities, which would capture the
essence of loans or borrowing. Gallina is pure. And we extract from
it and put it into ML. Well, maybe ML should have
a linking type extension that says something about
sprinkling purity annotations. If I accept a Gallina higher-order function here, I will treat it as
pure, and I will only pass it pure functions as input. If your programmer
could annotate that, we’d have safe interoperability
between ML and Gallina. So what we’re really
doing is we’re trying to investigate
this whole space. And on top of
WebAssembly, we’re trying to design a richly typed
intermediate representation. The key to this is
the type system. We want to make sure that we
have good state and effect encapsulation. For that, you end up needing
dependent type theory or type theory, so that you
can express things. And in the extreme
case, you do need this, so that you can express
all the invariants that you need with respect to
memory disjointness and so on, for instance. That’s important. But now that I’ve
sort of cheated and put linking types
into my language, I can build fully
abstract compilers. Because it's no longer the case that those features are missing from my source. I added linking types
extensions to my source. So I can build a fully abstract
compiler, which says, oh yeah, you can’t link with anything
that the programmer cannot talk about. Now the programmer can
talk about those features and how he or she wants to
link with those features. Then there’s this question of– I put Scheme here. There’s C. You ultimately
have to design this IR, and this is something we
are not doing right now. This is totally future. How does this IR,
this WebAssembly, which is more richly typed,
interact with untyped assembly. This is a gradual
typing problem. We stay away from that
because gradual typing is not performant. We don't know how to solve it. So we let the gradual
typing community solve it and then maybe come back to it. But that's basically it. So linking types,
we hope, are an idea that would allow
programmers to reason in almost their own language
and allow compilers then to be fully abstract, right? So the compiler
is just listening to what the programmer
wants to do, instead of inventing its
own rules about what you should be allowed to link with. So yeah, that’s my last slide. CompCert started a renaissance. I think most of us
went off and focused on mechanized proofs, which
was really important and really essential. But I think the
next frontier is how do we do compositional compiler
correctness because there is just like this
slew of open problems that are sitting there, and
nobody really has great answers to any of them. OK? Yeah. Oh, and I’ll end with my
students’ faces because they do all the work. But yes. AUDIENCE: I was going to
ask about the linking types. So what is sort of the, I
guess, expressivity bound maybe is a way to put it, like
on that kind of type system? Because one way that
I see it, right, is it can give people arbitrary
type extensions by just dropping them into dependent
type theory and being like, good luck. AMAL AHMED: So this is exactly
the problem that we’ve had, right? I’ve been pitching
it as sort of you are the designer for
language x, right? You get to decide what you wish
to give to your programmers. And then you’ll
build compilers that obey those instructions, right? Of course, over
time, you might want to keep giving them
more, and more, and more. And let’s say you start with ML. Ultimately maybe you have F
Star or something even richer, right? You have very, very rich
dependent typed theory if you really want
to allow everything. You might even want
type systems that say something about
cost resources for writing crypto code, right? So it could keep going. But I don’t think we
should tackle it as– I don’t think we should think
of this as, oh, but in the end, we need everything. I think we should think of it
as we have existing languages. What are the major use
cases for interoperability? What are the features that
my programmers wish to have and don’t? And slowly, grow them over time. Our linking types paper
actually also talks about whether this should scale. So if you keep adding
features over time, what do you need to do? Do you have backwards
compatibility? And we explain that you do. Which is? AUDIENCE: One question
I kind of have is– l worked in Rust for a while. And one of the
problems that we had was that people would come
up with new domains where they would employ unsafe. And they would track
some invariants, like Aaron has this crossbeam lock-free algorithm thing, which uses one
set of invariants. Which are very different
than maybe invariants that you use in parallel code. They’re different
than you’re using for building data structures. And one of the questions– I think the challenge
that I have is like. I was talking to Nico, he’s
one of the other language designers, about this,
is like, how do you allow users to understand the
invariants they're communicating at different points in the code. And obviously, you
could write this– AMAL AHMED: And they keep
getting richer and more complicated because there
are all sorts of crazy things that you might want to
do in that unsafe code. And Aaron’s– AUDIENCE: Question AMAL AHMED: We
don’t know, right? And the Rust team is
actually very much– there’s this whole initiative
about unsafe code guidelines, right? They wanted to be able to
somehow tell programmers what code is safe to write, what code
is within the bounds of safe, right? As in it’s marked unsafe,
but don’t do certain things. How do you know when
you’re within that set? But they’re very much against
giving them rich type systems. Because then they feel that the
language would become unusable. I don’t know. There’s a lot I can say
about this, but none of it is a clear answer. In the end, there are all
sorts of sophisticated things you could do. So if you look at the
RustBelt paper, right? And what they had to do. They had to devise
this lifetime logic to verify the unsafe code that’s
sitting in the standard library. It's pretty
incomprehensible already. Right? There’s a lot you
can do, but I’m sure there’s other things
that they can’t do still, which would require
richer logic. MODERATOR: But we
should all thank Amal for a wonderful talk. AMAL AHMED: Thank you. [CLAPPING]
