Consolidate and fix code blocks

varkor 2018-04-05 13:24:08 +01:00 committed by Who? Me?!
parent cacdd73802
commit ad35712ddc
34 changed files with 582 additions and 424 deletions

View File

@ -30,8 +30,8 @@ To help prevent accidentally introducing broken links, we use the
invoke this link checker, otherwise it will emit a warning saying it couldn't
be found.
```bash
> cargo install mdbook-linkcheck
```
You will need `mdbook` version `>= 0.1`. `linkcheck` will be run automatically
when you run `mdbook build`.

View File

@ -21,7 +21,7 @@ all the remainder. Only at the end of the block is there the
possibility of branching to more than one place (in MIR, we call that
final statement the **terminator**):
```mir
bb0: {
statement0;
statement1;
@ -34,7 +34,7 @@ bb0: {
Many expressions that you are used to in Rust compile down to multiple
basic blocks. For example, consider an if statement:
```rust,ignore
a = 1;
if some_variable {
b = 1;
@ -46,7 +46,7 @@ d = 1;
This would compile into four basic blocks:
```mir
BB0: {
a = 1;
if some_variable { goto BB1 } else { goto BB2 }

View File

@ -3,7 +3,7 @@
> **Note:** This is a copy of `@nrc`'s amazing [stupid-stats]. You should find
> a copy of the code on the GitHub repository although due to the compiler's
> constantly evolving nature, there is no guarantee it'll compile on the first
> go.
Many tools benefit from being a drop-in replacement for a compiler. By this, I
mean that any user of the tool can use `mytool` in all the ways they would
@ -177,7 +177,7 @@ foo.rs` (assuming you have a Rust program called `foo.rs`. You can also pass any
command line arguments that you would normally pass to rustc). When you run it
you'll see output similar to
```txt
In crate: foo,
Found 12 uses of `println!`;
@ -205,7 +205,7 @@ should dump stupid-stats' stdout to Cargo's stdout).
Let's start with the `main` function for our tool, it is pretty simple:
```rust,ignore
fn main() {
let args: Vec<_> = std::env::args().collect();
rustc_driver::run_compiler(&args, &mut StupidCalls::new());
@ -223,7 +223,7 @@ this tool different from rustc.
`StupidCalls` is a mostly empty struct:
```rust,ignore
struct StupidCalls {
default_calls: RustcDefaultCalls,
}
@ -238,7 +238,7 @@ to keep Cargo happy.
Most of the rest of the impl of `CompilerCalls` is trivial:
```rust,ignore
impl<'a> CompilerCalls<'a> for StupidCalls {
fn early_callback(&mut self,
_: &getopts::Matches,
@ -300,7 +300,7 @@ tool does its actual work by walking the AST. We do that by creating an AST
visitor and making it walk the AST from the top (the crate root). Once we've
walked the crate, we print the stats we've collected:
```rust,ignore
fn build_controller(&mut self, _: &Session) -> driver::CompileController<'a> {
// We mostly want to do what rustc does, which is what basic() will return.
let mut control = driver::CompileController::basic();
@ -340,7 +340,7 @@ That is all it takes to create your own drop-in compiler replacement or custom
compiler! For the sake of completeness I'll go over the rest of the stupid-stats
tool.
```rust
struct StupidVisitor {
println_count: usize,
arg_counts: Vec<usize>,
@ -355,7 +355,7 @@ methods, these walk the AST taking no action. We override `visit_item` and
functions, modules, traits, structs, and so forth, we're only interested in
functions) and macros:
```rust,ignore
impl<'v> visit::Visitor<'v> for StupidVisitor {
fn visit_item(&mut self, i: &'v ast::Item) {
match i.node {

View File

@ -61,7 +61,8 @@ which takes a single argument (which, in this case is a value of 1).
(rather than the current Rust default of 101 at the time of this writing). The
header command and the argument list (if present) are typically separated by a
colon:
```rust,ignore
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.

View File

@ -21,7 +21,7 @@ tidy script runs automatically when you do `./x.py test`.
All files must begin with the following copyright notice:
```rust
// Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
@ -48,7 +48,7 @@ tests -- it can be necessary to exempt yourself from this limit. In
that case, you can add a comment towards the top of the file (after
the copyright notice) like so:
```rust
// ignore-tidy-linelength
```
@ -61,7 +61,7 @@ Prefer 4-space indent.
# Coding for correctness
Beyond formatting, there are a few other tips that are worth
following.
## Prefer exhaustive matches

View File

@ -19,7 +19,7 @@ compilation improves, that may change.)
The dependency structure of these crates is roughly a diamond:
```txt
rustc_driver
/ | \
/ | \

View File

@ -12,8 +12,8 @@ This chapter covers the main concepts of the HIR.
You can view the HIR representation of your code by passing the
`-Zunpretty=hir-tree` flag to rustc:
```bash
> cargo rustc -- -Zunpretty=hir-tree
```
### Out-of-band storage and the `Crate` type

View File

@ -70,8 +70,8 @@ Once you've created a config.toml, you are now ready to run
`x.py`. There are a lot of options here, but let's start with what is
probably the best "go to" command for building a local rust:
```bash
> ./x.py build -i --stage 1 src/libstd
```
What this command will do is the following:
@ -106,7 +106,7 @@ will execute the stage2 compiler (which we did not build, but which
you will likely need to build at some point; for example, if you want
to run the entire test suite).
```bash
> rustup toolchain link stage1 build/<host-triple>/stage1
> rustup toolchain link stage2 build/<host-triple>/stage2
```
@ -115,7 +115,7 @@ Now you can run the rustc you built with. If you run with `-vV`, you
should see a version number ending in `-dev`, indicating a build from
your local environment:
```bash
> rustc +stage1 -vV
rustc 1.25.0-dev
binary: rustc

View File

@ -10,7 +10,7 @@ As an example, see `src/test/compile-fail/dep-graph-caller-callee.rs`.
The idea is that you can annotate a test like:
```rust,ignore
#[rustc_if_this_changed]
fn foo() { }
@ -48,7 +48,7 @@ the graph. You can filter in three ways:
To filter, use the `RUST_DEP_GRAPH_FILTER` environment variable, which should
look like one of the following:
```txt
source_filter // nodes originating from source_filter
-> target_filter // nodes that can reach target_filter
source_filter -> target_filter // nodes in between source_filter and target_filter
@ -58,14 +58,14 @@ source_filter -> target_filter // nodes in between source_filter and target_filt
A node is considered to match a filter if all of those strings appear in its
label. So, for example:
```txt
RUST_DEP_GRAPH_FILTER='-> TypeckTables'
```
would select the predecessors of all `TypeckTables` nodes. Usually though you
want the `TypeckTables` node for some particular fn, so you might write:
```txt
RUST_DEP_GRAPH_FILTER='-> TypeckTables & bar'
```
@ -75,7 +75,7 @@ with `bar` in their name.
Perhaps you are finding that when you change `foo` you need to re-type-check
`bar`, but you don't think you should have to. In that case, you might do:
```txt
RUST_DEP_GRAPH_FILTER='Hir & foo -> TypeckTables & bar'
```
@ -105,8 +105,10 @@ check of `bar` and you don't think there should be. You dump the
dep-graph as described in the previous section and open `dep-graph.txt`
to see something like:
```txt
Hir(foo) -> Collect(bar)
Collect(bar) -> TypeckTables(bar)
```
That first edge looks suspicious to you. So you set
`RUST_FORBID_DEP_GRAPH_EDGE` to `Hir&foo -> Collect&bar`, re-run, and

View File

@ -15,7 +15,7 @@ expansion works.
It's helpful to have an example to refer to. For the remainder of this chapter,
whenever we refer to the "example _definition_", we mean the following:
```rust,ignore
macro_rules! printer {
(print $mvar:ident) => {
println!("{}", $mvar);
@ -45,7 +45,7 @@ worrying about _where_. For more information about tokens, see the
Whenever we refer to the "example _invocation_", we mean the following snippet:
```rust,ignore
printer!(print foo); // Assume `foo` is a variable defined somewhere else...
```
@ -65,7 +65,7 @@ defined in [`src/libsyntax/ext/tt/macro_parser.rs`][code_mp].
The interface of the macro parser is as follows (this is slightly simplified):
```rust,ignore
fn parse(
sess: ParserSession,
tts: TokenStream,

View File

@ -8,13 +8,13 @@ the code itself, naturally.
One way to think of method lookup is that we convert an expression of
the form:
```rust,ignore
receiver.method(...)
```
into a more explicit [UFCS] form:
```rust,ignore
Trait::method(ADJ(receiver), ...) // for a trait call
ReceiverType::method(ADJ(receiver), ...) // for an inherent method call
```
@ -24,7 +24,7 @@ autoderefs and then possibly an autoref (e.g., `&**receiver`). However
we sometimes do other adjustments and coercions along the way, in
particular unsizing (e.g., converting from `[T; n]` to `[T]`).
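To make the transformation concrete, here is a minimal, self-contained sketch (using a hypothetical `Greet` trait; not the compiler's own code) showing a method call and the explicit UFCS form, with the autoderef/autoref adjustment spelled out:

```rust
use std::rc::Rc;

trait Greet {
    fn greet(&self);
}

impl Greet for str {
    fn greet(&self) { println!("hello"); }
}

fn main() {
    let r: Rc<String> = Rc::new("world".to_string());
    // Method-call form: the receiver is autoderef'd
    // (Rc<String> -> String -> str) and then autoref'd to `&str`.
    r.greet();
    // Roughly the explicit UFCS form, with ADJ(receiver) = `&**r`:
    Greet::greet(&**r);
}
```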
Method lookup is divided into two major phases:
1. Probing ([`probe.rs`][probe]). The probe phase is when we decide what method
to call and how to adjust the receiver.
@ -51,7 +51,7 @@ until it cannot be deref'd anymore, as well as applying an optional
"unsize" step. So if the receiver has type `Rc<Box<[T; 3]>>`, this
might yield:
```rust,ignore
Rc<Box<[T; 3]>>
Box<[T; 3]>
[T; 3]
@ -99,9 +99,10 @@ So, let's continue our example. Imagine that we were calling a method
that defines it with `&self` for the type `Rc<U>` as well as a method
on the type `Box` that defines `Foo` but with `&mut self`. Then we
might have two candidates:
```txt
&Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<[T; 3]>`
&mut Box<[T; 3]> from the inherent impl on `Box<U>` where `U=[T; 3]`
```
### Candidate search

View File

@ -52,13 +52,13 @@ fn main() {
The files have names like `rustc.main.000-000.CleanEndRegions.after.mir`. These
names have a number of parts:
```txt
rustc.main.000-000.CleanEndRegions.after.mir
---- --- --- --------------- ----- either before or after
| | | name of the pass
| | index of dump within the pass (usually 0, but some passes dump intermediate states)
| index of the pass
def-path to the function etc being dumped
```
You can also make more selective filters. For example, `main & CleanEndRegions`
@ -159,7 +159,7 @@ ensuring that the reads have already happened (remember that
[queries are memoized](./query.html), so executing a query twice
simply loads from a cache the second time):
```txt
mir_const(D) --read-by--> mir_const_qualif(D)
| ^
stolen-by |

View File

@ -34,7 +34,7 @@ The MIR-based region analysis consists of two major functions:
are used.
- More details to come, though the [NLL RFC] also includes fairly thorough
(and hopefully readable) coverage.
[fvb]: appendix-background.html#free-vs-bound
[NLL RFC]: http://rust-lang.github.io/rfcs/2094-nll.html
@ -82,7 +82,7 @@ The kinds of region elements are as follows:
corresponds (intuitively) to some unknown set of other elements --
for details on skolemization, see the section
[skolemization and universes](#skol).
## Causal tracking
*to be written* -- describe how we can extend the values of a variable
@ -97,7 +97,7 @@ The kinds of region elements are as follows:
From time to time we have to reason about regions that we can't
concretely know. For example, consider this program:
```rust,ignore
// A function that needs a static reference
fn foo(x: &'static u32) { }
@ -122,10 +122,12 @@ stack, for example). But *how* do we reject it and *why*?
When we type-check `main`, and in particular the call `bar(foo)`, we
are going to wind up with a subtyping relationship like this one:
```txt
fn(&'static u32) <: for<'a> fn(&'a u32)
---------------- -------------------
the type of `foo` the type `bar` expects
```
We handle this sort of subtyping by taking the variables that are
bound in the supertype and **skolemizing** them: this means that we
replace them with
@ -135,8 +137,10 @@ regions" -- they represent, basically, "some unknown region".
Once we've done that replacement, we have the following relation:
```txt
fn(&'static u32) <: fn(&'!1 u32)
```
The key idea here is that this unknown region `'!1` is not related to
any other regions. So if we can prove that the subtyping relationship
is true for `'!1`, then it ought to be true for any region, which is
@ -147,7 +151,9 @@ subtypes, we check if their arguments have the desired relationship
(fn arguments are [contravariant](./appendix-background.html#variance), so
we swap the left and right here):
```txt
&'!1 u32 <: &'static u32
```
According to the basic subtyping rules for a reference, this will be
true if `'!1: 'static`. That is -- if "some unknown region `!1`" lives
@ -168,7 +174,7 @@ put generic type parameters into this root universe (in this sense,
there is not just one root universe, but one per item). So consider
this function `bar`:
```rust,ignore
struct Foo { }
fn bar<'a, T>(t: &'a T) {
@ -185,7 +191,7 @@ Basically, the root universe contains all the names that
Now let's extend `bar` a bit by adding a variable `x`:
```rust,ignore
fn bar<'a, T>(t: &'a T) {
let x: for<'b> fn(&'b u32) = ...;
}
@ -195,7 +201,7 @@ Here, the name `'b` is not part of the root universe. Instead, when we
"enter" into this `for<'b>` (e.g., by skolemizing it), we will create
a child universe of the root, let's call it U1:
```txt
U0 (root universe)
└─ U1 (child universe)
@ -207,7 +213,7 @@ with a new name, which we are identifying by its universe number:
Now let's extend `bar` a bit by adding one more variable, `y`:
```rust,ignore
fn bar<'a, T>(t: &'a T) {
let x: for<'b> fn(&'b u32) = ...;
let y: for<'c> fn(&'b u32) = ...;
@ -218,7 +224,7 @@ When we enter *this* type, we will again create a new universe, which
we'll call `U2`. Its parent will be the root universe, and U1 will be
its sibling:
```txt
U0 (root universe)
├─ U1 (child universe)
@ -257,11 +263,11 @@ children, that inference variable X would have to be in U0. And since
X is in U0, it cannot name anything from U1 (or U2). This is perhaps easiest
to see by using a kind of generic "logic" example:
```txt
exists<X> {
forall<Y> { ... /* Y is in U1 ... */ }
forall<Z> { ... /* Z is in U2 ... */ }
}
```
Here, the only way for the two foralls to interact would be through X,
@ -290,8 +296,10 @@ does not say region elements **will** appear.
In the region inference engine, outlives constraints have the form:
```txt
V1: V2 @ P
```
where `V1` and `V2` are region indices, and hence map to some region
variable (which may be universally or existentially quantified). The
`P` here is a "point" in the control-flow graph; it's not important
@ -338,8 +346,10 @@ for universal regions from the fn signature.)
Put another way, the "universal regions" check can be considered to be
checking constraints like:
```txt
{skol(1)}: V1
```
where `{skol(1)}` is like a constant set, and V1 is the variable we
made to represent the `!1` region.
@ -348,30 +358,40 @@ made to represent the `!1` region.
OK, so far so good. Now let's walk through what would happen with our
first example:
```txt
fn(&'static u32) <: fn(&'!1 u32) @ P // this point P is not imp't here
```
The region inference engine will create a region element domain like this:
```txt
{ CFG; end('static); skol(1) }
--- ------------ ------- from the universe `!1`
| 'static is always in scope
all points in the CFG; not especially relevant here
```
It will always create two universal variables, one representing
`'static` and one representing `'!1`. Let's call them Vs and V1. They
will have initial values like so:
```txt
Vs = { CFG; end('static) } // it is in U0, so can't name anything else
V1 = { skol(1) }
```
From the subtyping constraint above, we would have an outlives constraint like
```txt
'!1: 'static @ P
```
To process this, we would grow the value of V1 to include all of Vs:
```txt
Vs = { CFG; end('static) }
V1 = { CFG; end('static), skol(1) }
```
At that point, constraint propagation is complete, because all the
outlives relationships are satisfied. Then we would go to the "check
@ -385,34 +405,44 @@ In this case, `V1` *did* grow too large -- it is not known to outlive
What about this subtyping relationship?
```txt
for<'a> fn(&'a u32, &'a u32)
<:
for<'b, 'c> fn(&'b u32, &'c u32)
```
Here we would skolemize the supertype, as before, yielding:
```txt
for<'a> fn(&'a u32, &'a u32)
<:
fn(&'!1 u32, &'!2 u32)
```
then we instantiate the variable on the left-hand side with an
existential in universe U2, yielding the following (`?n` is a notation
for an existential variable):
```txt
fn(&'?3 u32, &'?3 u32)
<:
fn(&'!1 u32, &'!2 u32)
```
Then we break this down further:
```txt
&'!1 u32 <: &'?3 u32
&'!2 u32 <: &'?3 u32
```
and even further, yield up our region constraints:
```txt
'!1: '?3
'!2: '?3
```
Note that, in this case, both `'!1` and `'!2` have to outlive the
variable `'?3`, but the variable `'?3` is not forced to outlive
anything else. Therefore, it simply starts and ends as the empty set
@ -430,15 +460,17 @@ common lifetime of our arguments. -nmatsakis)
[ohdeargoditsallbroken]: https://github.com/rust-lang/rust/issues/32330#issuecomment-202536977
## Final example
Let's look at one last example. We'll extend the previous one to have
a return type:
```txt
for<'a> fn(&'a u32, &'a u32) -> &'a u32
<:
for<'b, 'c> fn(&'b u32, &'c u32) -> &'b u32
```
Despite seeming very similar to the previous example, this case is going to get
an error. That's good: the problem is that we've gone from a fn that promises
to return one of its two arguments, to a fn that is promising to return the
@ -446,45 +478,59 @@ first one. That is unsound. Let's see how it plays out.
First, we skolemize the supertype:
```txt
for<'a> fn(&'a u32, &'a u32) -> &'a u32
<:
fn(&'!1 u32, &'!2 u32) -> &'!1 u32
```
Then we instantiate the subtype with existentials (in U2):
```txt
fn(&'?3 u32, &'?3 u32) -> &'?3 u32
<:
fn(&'!1 u32, &'!2 u32) -> &'!1 u32
```
And now we create the subtyping relationships:
```txt
&'!1 u32 <: &'?3 u32 // arg 1
&'!2 u32 <: &'?3 u32 // arg 2
&'?3 u32 <: &'!1 u32 // return type
```
And finally the outlives relationships. Here, let V1, V2, and V3 be the
variables we assign to `!1`, `!2`, and `?3` respectively:
```txt
V1: V3
V2: V3
V3: V1
```
Those variables will have these initial values:
```txt
V1 in U1 = {skol(1)}
V2 in U2 = {skol(2)}
V3 in U2 = {}
```
Now because of the `V3: V1` constraint, we have to add `skol(1)` into `V3` (and
indeed it is visible from `V3`), so we get:
```txt
V3 in U2 = {skol(1)}
```
then we have this constraint `V2: V3`, so we wind up having to enlarge
`V2` to include `skol(1)` (which it can also see):
```txt
V2 in U2 = {skol(1), skol(2)}
```
Now constraint propagation is done, but when we check the outlives
relationships, we find that `V2` includes this new element `skol(1)`,
so we report an error.
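In surface Rust, that unsound subtyping corresponds to a fn-pointer coercion like the following sketch (hypothetical function name; the accepted direction is shown, the rejected one is indicated in a comment):

```rust
// `pick` returns one of its two borrows under a single lifetime 'a;
// its type is `for<'a> fn(&'a u32, &'a u32) -> &'a u32`.
fn pick<'a>(x: &'a u32, _y: &'a u32) -> &'a u32 { x }

fn main() {
    // Fine: this is exactly `pick`'s own type.
    let _ok: for<'a> fn(&'a u32, &'a u32) -> &'a u32 = pick;

    // The coercion below is the subtyping analyzed above and is
    // rejected ("one type is more general than the other"):
    // let _bad: for<'b, 'c> fn(&'b u32, &'c u32) -> &'b u32 = pick;
}
```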

View File

@ -13,7 +13,7 @@ To implement a visitor, you have to create a type that represents
your visitor. Typically, this type wants to "hang on" to whatever
state you will need while processing MIR:
```rust,ignore
struct MyVisitor<...> {
tcx: TyCtxt<'cx, 'tcx, 'tcx>,
...
@ -22,10 +22,10 @@ struct MyVisitor<...> {
and you then implement the `Visitor` or `MutVisitor` trait for that type:
```rust,ignore
impl<'tcx> MutVisitor<'tcx> for NoLandingPads {
fn visit_foo(&mut self, ...) {
// ...
self.super_foo(...);
}
}

View File

@ -69,12 +69,12 @@ fn main() {
You should see something like:
```mir
// WARNING: This output format is intended for human consumers only
// and is subject to change without notice. Knock yourself out.
fn main() -> () {
...
}
```
This is the MIR format for the `main` function.
@ -82,7 +82,7 @@ This is the MIR format for the `main` function.
**Variable declarations.** If we drill in a bit, we'll see it begins
with a bunch of variable declarations. They look like this:
```mir
let mut _0: (); // return place
scope 1 {
let mut _1: std::vec::Vec<i32>; // "vec" in scope 1 at src/main.rs:2:9: 2:16
@ -107,8 +107,8 @@ program (which names were in scope when).
it may look slightly different when you view it, and I am ignoring some of the
comments):
```mir
bb0: {
StorageLive(_1);
_1 = const <std::vec::Vec<T>>::new() -> bb2;
}
@ -117,7 +117,7 @@ bb0: {
A basic block is defined by a series of **statements** and a final
**terminator**. In this case, there is one statement:
```mir
StorageLive(_1);
```
@ -129,7 +129,7 @@ allocate stack space.
The **terminator** of the block `bb0` is the call to `Vec::new`:
```mir
_1 = const <std::vec::Vec<T>>::new() -> bb2;
```
@ -142,8 +142,8 @@ possible, and hence we list only one successor block, `bb2`.
If we look ahead to `bb2`, we will see it looks like this:
```mir
bb2: {
StorageLive(_3);
_3 = &mut _1;
_2 = const <std::vec::Vec<T>>::push(move _3, const 1i32) -> [return: bb3, unwind: bb4];
@ -153,13 +153,13 @@ bb2: {
Here there are two statements: another `StorageLive`, introducing the `_3`
temporary, and then an assignment:
```mir
_3 = &mut _1;
```
Assignments in general have the form:
```txt
<Place> = <Rvalue>
```
@ -169,7 +169,7 @@ value: in this case, the rvalue is a mutable borrow expression, which
looks like `&mut <Place>`. So we can kind of define a grammar for
rvalues like so:
```txt
<Rvalue> = & (mut)? <Place>
| <Operand> + <Operand>
| <Operand> - <Operand>
@ -178,7 +178,7 @@ rvalues like so:
<Operand> = Constant
| copy Place
| move Place
```
As you can see from this grammar, rvalues cannot be nested -- they can
only reference places and constants. Moreover, when you use a place,
@ -188,7 +188,7 @@ for a place of any type). So, for example, if we had the expression `x
= a + b + c` in Rust, that would get compiled to two statements and a
temporary:
```mir
TMP1 = a + b
x = TMP1 + c
```
@ -214,14 +214,14 @@ but [you can read about those below](#promoted)).
we pass around `BasicBlock` values, which are
[newtype'd] indices into this vector.
- **Statements** are represented by the type `Statement`.
- **Terminators** are represented by the `Terminator` type.
- **Locals** are represented by a [newtype'd] index type `Local`. The
data for a local variable is found in the `Mir` (the `local_decls`
vector). There is also a special constant `RETURN_PLACE` identifying
the special "local" representing the return value.
- **Places** are identified by the enum `Place`. There are a few variants:
- Local variables like `_1`
- Static variables `FOO`
- **Projections**, which are fields or other things that "project
out" from a base place. So e.g. the place `_1.f` is a projection,
with `f` being the "projection element" and `_1` being the base

View File

@ -14,7 +14,7 @@ placed into metadata.
Once you have a use-site like
```rust,ignore
type Foo = [u8; FOO - 42];
```
@ -24,7 +24,7 @@ create items that use the type (locals, constants, function arguments, ...).
To obtain the (in this case empty) parameter environment, one can call
`let param_env = tcx.param_env(length_def_id);`. The `GlobalId` needed is
```rust,ignore
let gid = GlobalId {
promoted: None,
instance: Instance::mono(length_def_id),

View File

@ -41,7 +41,7 @@ To invoke a query is simple. The tcx ("type context") offers a method
for each defined query. So, for example, to invoke the `type_of`
query, you would just do this:
```rust,ignore
let ty = tcx.type_of(some_def_id);
```
@ -59,7 +59,7 @@ better user experience. In order to recover from a cycle, you don't
get to use the nice method-call-style syntax. Instead, you invoke
using the `try_get` method, which looks roughly like this:
```rust,ignore
use ty::maps::queries;
...
match queries::type_of::try_get(tcx, DUMMY_SP, self.did) {
@ -87,7 +87,7 @@ will be reported due to this cycle by some other bit of code. In that
case, you can invoke `err.cancel()` to not emit any error. It is
traditional to then invoke:
```rust,ignore
tcx.sess.delay_span_bug(some_span, "some message")
```
@ -126,7 +126,7 @@ on how that works).
Providers always have the same signature:
```rust,ignore
fn provider<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx, 'tcx>,
key: QUERY_KEY)
-> QUERY_RESULT
@ -146,7 +146,7 @@ When the tcx is created, it is given the providers by its creator using
the `Providers` struct. This struct is generated by the macros here, but it
is basically a big list of function pointers:
```rust,ignore
struct Providers {
type_of: for<'cx, 'tcx> fn(TyCtxt<'cx, 'tcx, 'tcx>, DefId) -> Ty<'tcx>,
...
@ -163,7 +163,7 @@ throughout the other `rustc_*` crates. This is done by invoking
various `provide` functions. These functions tend to look something
like this:
```rust,ignore
pub fn provide(providers: &mut Providers) {
*providers = Providers {
type_of,
@ -180,7 +180,7 @@ before.) So, if we want to add a provider for some other query,
let's call it `fubar`, into the crate above, we might modify the `provide()`
function like so:
```rust,ignore
pub fn provide(providers: &mut Providers) {
*providers = Providers {
type_of,
@ -189,7 +189,7 @@ pub fn provide(providers: &mut Providers) {
};
}
fn fubar<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx, 'tcx>, key: DefId) -> Fubar<'tcx> { ... }
```
N.B. Most of the `rustc_*` crates only provide **local
@ -218,7 +218,7 @@ something like:
[maps-mod]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/maps/index.html
```rust,ignore
define_maps! { <'tcx>
/// Records the type of every item.
[] fn type_of: TypeOfItem(DefId) -> Ty<'tcx>,
@ -229,7 +229,7 @@ define_maps! { <'tcx>
Each line of the macro defines one query. The name is broken up like this:
```rust,ignore
[] fn type_of: TypeOfItem(DefId) -> Ty<'tcx>,
^^ ^^^^^^^ ^^^^^^^^^^ ^^^^^ ^^^^^^^^
| | | | |
@ -288,7 +288,7 @@ describing the query. Each such struct implements the
key/value of that particular query. Basically the code generated looks something
like this:
```rust,ignore
// Dummy struct representing a particular kind of query:
pub struct type_of<'tcx> { phantom: PhantomData<&'tcx ()> }
@ -306,7 +306,7 @@ this trait is optional if the query key is `DefId`, but if you *don't*
implement it, you get a pretty generic error ("processing `foo`...").
You can put new impls into the `config` module. They look something like this:
```rust,ignore
impl<'tcx> QueryDescription for queries::type_of<'tcx> {
fn describe(tcx: TyCtxt, key: DefId) -> String {
format!("computing the type of `{}`", tcx.item_path_str(key))

View File

@ -70,7 +70,7 @@ The main process of crate crawling is done in `clean/mod.rs` through several
implementations of the `Clean` trait defined within. This is a conversion
trait, which defines one method:
```rust,ignore
pub trait Clean<T> {
fn clean(&self, cx: &DocContext) -> T;
}

View File

@ -100,7 +100,7 @@ are normally put after the short comment that explains the point of
this test. For example, this test uses the `// compile-flags` command
to specify a custom flag to give to rustc when the test is compiled:
```rust,ignore
// Copyright 2017 The Rust Project Developers. blah blah blah.
// ...
// except according to those terms.
@ -198,7 +198,7 @@ incremental, though incremental tests are somewhat
different). Revisions allow a single test file to be used for multiple
tests. This is done by adding a special header at the top of the file:
```rust
// revisions: foo bar baz
```
@ -211,7 +211,7 @@ You can also customize headers and expected error messages to a particular
revision. To do this, add `[foo]` (or `bar`, `baz`, etc) after the `//`
comment, like so:
```rust
// A flag to pass in only for cfg `foo`:
//[foo]compile-flags: -Z verbose
@ -284,7 +284,7 @@ between platforms, mainly about filenames:
Sometimes these built-in normalizations are not enough. In such cases, you
may provide custom normalization rules using the header commands, e.g.
```rust
// normalize-stdout-test: "foo" -> "bar"
// normalize-stderr-32bit: "fn\(\) \(32 bits\)" -> "fn\(\) \($$PTR bits\)"
// normalize-stderr-64bit: "fn\(\) \(64 bits\)" -> "fn\(\) \($$PTR bits\)"
@ -298,7 +298,7 @@ default regex flavor provided by `regex` crate.
The corresponding reference file will use the normalized output to test both
32-bit and 64-bit platforms:
```txt
...
|
= note: source type: fn() ($PTR bits)

View File

@ -3,8 +3,8 @@
You can run the tests using `x.py`. The most basic command -- which
you will almost never want to use! -- is as follows:
```bash
> ./x.py test
```
This will build the full stage 2 compiler and then run the whole test
@ -17,7 +17,7 @@ The test results are cached and previously successful tests are
`ignored` during testing. The stdout/stderr contents as well as a
timestamp file for every test can be found under `build/ARCH/test/`.
To force-rerun a test (e.g. in case the test runner fails to notice
a change) you can simply remove the timestamp file.
## Running a subset of the test suites
@ -27,7 +27,7 @@ test" that can be used after modifying rustc to see if things are
generally working correctly would be the following:
```bash
> ./x.py test --stage 1 src/test/{ui,compile-fail,run-pass}
```
This will run the `ui`, `compile-fail`, and `run-pass` test suites,
@ -37,7 +37,7 @@ example, if you are hacking on debuginfo, you may be better off with
the debuginfo test suite:
```bash
> ./x.py test --stage 1 src/test/debuginfo
```
**Warning:** Note that bors only runs the tests with the full stage 2
@ -51,8 +51,8 @@ Another common thing that people want to do is to run an **individual
test**, often the test they are trying to fix. One way to do this is
to invoke `x.py` with the `--test-args` option:
```bash
> ./x.py test --stage 1 src/test/ui --test-args issue-1234
```
Under the hood, the test runner invokes the standard rust test runner
@ -62,8 +62,8 @@ filtering for tests that include "issue-1234" in the name.
Often, though, it's easier to just run the test by hand. Most tests are
just `rs` files, so you can do something like
```bash
> rustc +stage1 src/test/ui/issue-1234.rs
```
This is much faster, but doesn't always work. For example, some tests

View File

@ -26,7 +26,7 @@ possible impl is this one, with def-id 22:
[selection process]: ./trait-resolution.html#selection
```rust,ignore
impl Foo<isize> for usize { ... } // Impl #22
```

View File

@ -18,14 +18,14 @@ trait Foo<X> {
Let's say we have a function `want_hrtb` that wants a type which
implements `Foo<&'a isize>` for any `'a`:
```rust,ignore
fn want_hrtb<T>() where T : for<'a> Foo<&'a isize> { ... }
```
Now we have a struct `AnyInt` that implements `Foo<&'a isize>` for any
`'a`:
```rust,ignore
struct AnyInt;
impl<'a> Foo<&'a isize> for AnyInt { }
```
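Putting those two snippets together, a call like the following sketch type-checks, because `AnyInt` implements `Foo<&'a isize>` for *every* `'a` (the method body is a hypothetical filler, since `Foo`'s items aren't shown here):

```rust
trait Foo<X> {
    fn foo(&self, arg: X);
}

struct AnyInt;
impl<'a> Foo<&'a isize> for AnyInt {
    fn foo(&self, _arg: &'a isize) {}
}

fn want_hrtb<T>() where T: for<'a> Foo<&'a isize> {}

fn main() {
    // Satisfies the higher-ranked bound `for<'a> Foo<&'a isize>`:
    want_hrtb::<AnyInt>();
}
```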
@ -71,7 +71,7 @@ set for `'0` is `{'0, '$a}`, and hence the check will succeed.
Let's consider a failure case. Imagine we also have a struct
```rust,ignore
struct StaticInt;
impl Foo<&'static isize> for StaticInt;
```

View File

@ -13,13 +13,13 @@ see [*this* traits chapter](./traits.html).
Trait resolution is the process of pairing up an impl with each
reference to a trait. So, for example, if there is a generic function like:
```rust,ignore
fn clone_slice<T:Clone>(x: &[T]) -> Vec<T> { ... }
```
and then a call to that function:
```rust,ignore
let v: Vec<isize> = clone_slice(&[1, 2, 3])
```
@ -30,7 +30,7 @@ Note that in some cases, like generic functions, we may not be able to
find a specific impl, but we can figure out that the caller must
provide an impl. For example, consider the body of `clone_slice`:
```rust,ignore
fn clone_slice<T:Clone>(x: &[T]) -> Vec<T> {
let mut v = Vec::new();
for e in &x {
@ -143,7 +143,7 @@ otherwise the result is considered ambiguous.
This process is easier if we work through some examples. Consider
the following trait:
```rust,ignore
trait Convert<Target> {
fn convert(&self) -> Target;
}
@ -154,14 +154,14 @@ converts from the (implicit) `Self` type to the `Target` type. If we
wanted to permit conversion between `isize` and `usize`, we might
implement `Convert` like so:
```rust,ignore
impl Convert<usize> for isize { ... } // isize -> usize
impl Convert<isize> for usize { ... } // usize -> isize
```
Now imagine there is some code like the following:
```rust,ignore
let x: isize = ...;
let y = x.convert();
```
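Here is a runnable version of that example (a sketch, with method bodies filled in) showing selection succeeding because exactly one impl unifies with the obligation `isize: Convert<?T>`:

```rust
trait Convert<Target> {
    fn convert(&self) -> Target;
}

impl Convert<usize> for isize {
    fn convert(&self) -> usize { *self as usize }
}
impl Convert<isize> for usize {
    fn convert(&self) -> isize { *self as isize }
}

fn main() {
    let x: isize = 22;
    // The obligation is `isize: Convert<?T>`; only the first impl
    // unifies, so selection fixes `?T = usize` and `y: usize`.
    let y: usize = x.convert();
    println!("{}", y);
}
```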
@ -186,7 +186,7 @@ inference?
But what happens if there are multiple impls where all the types
unify? Consider this example:
```rust,ignore
trait Get {
fn get(&self) -> Self;
}
@ -224,11 +224,11 @@ the same trait (or some subtrait) and which can match against the obligation.
Consider this simple example:
```rust,ignore
trait A1 {
fn do_a1(&self);
}
trait A2 : A1 { ... }
trait B {
fn do_b(&self);
@ -256,13 +256,13 @@ values found in the obligation, possibly yielding a type error.
Suppose we have the following variation of the `Convert` example in the
previous section:
```rust,ignore
trait Convert<Target> {
fn convert(&self) -> Target;
}
impl Convert<usize> for isize { ... } // isize -> usize
impl Convert<isize> for usize { ... } // usize -> isize
let x: isize = ...;
let y: char = x.convert(); // NOTE: `y: char` now!
@ -296,11 +296,11 @@ everything out.
Here is an example:
```rust,ignore
trait Foo { ... }
impl<U, T:Bar<U>> Foo for Vec<T> { ... }
impl Bar<usize> for isize { ... }
```
After one shallow round of selection for an obligation like `Vec<isize>

View File

@ -29,10 +29,10 @@ that is, simplified -- based on the types given in an impl. So, to
continue with our example, the impl of `IntoIterator` for `Option<T>`
declares (among other things) that `Item = T`:
```rust,ignore
impl<T> IntoIterator for Option<T> {
type Item = T;
...
}
```
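At a concrete use site, that normalization is exactly what lets the following sketch compile:

```rust
fn main() {
    let opt: Option<u32> = Some(22);
    // `<Option<u32> as IntoIterator>::Item` normalizes to `u32`,
    // so `item` has type `u32`:
    let item: <Option<u32> as IntoIterator>::Item =
        opt.into_iter().next().unwrap();
    let _check: u32 = item;
}
```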
@ -51,9 +51,11 @@ In our logic, normalization is defined by a predicate
impls. For example, the `impl` of `IntoIterator` for `Option<T>` that
we saw above would be lowered to a program clause like so:
```txt
forall<T> {
Normalize(<Option<T> as IntoIterator>::Item -> T)
}
```
(An aside: since we do not permit quantification over traits, this is
really more like a family of predicates, one for each associated
@ -67,7 +69,7 @@ we've seen so far.
Sometimes however we want to work with associated types that cannot be
normalized. For example, consider this function:
```rust,ignore
fn foo<T: IntoIterator>(...) { ... }
```
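Inside such a function, `T::Item` stays abstract; this sketch illustrates both sides, abstract within the body and normalized at the call site:

```rust
fn foo<T: IntoIterator>(it: T) -> Option<T::Item> {
    // In here `<T as IntoIterator>::Item` cannot be normalized;
    // the compiler reasons about it as an opaque (skolemized) type.
    it.into_iter().next()
}

fn main() {
    // At the call site `T = Vec<i32>`, so the result normalizes
    // to `Option<i32>`:
    let first: Option<i32> = foo(vec![1, 2, 3]);
    println!("{:?}", first);
}
```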
@ -99,20 +101,24 @@ consider an associated type projection equal to another type?":
We now introduce the `ProjectionEq` predicate to bring those two cases
together. The `ProjectionEq` predicate looks like so:
```txt
ProjectionEq(<T as IntoIterator>::Item = U)
```
and we will see that it can be proven *either* via normalization or
skolemization. As part of lowering an associated type declaration from
some trait, we create two program clauses for `ProjectionEq`:
```txt
forall<T, U> {
ProjectionEq(<T as IntoIterator>::Item = U) :-
Normalize(<T as IntoIterator>::Item -> U)
}
forall<T> {
ProjectionEq(<T as IntoIterator>::Item = (IntoIterator::Item)<T>)
}
```
These are the only two `ProjectionEq` program clauses we ever make for
any given associated item.
@ -124,7 +130,9 @@ with unification. As described in the
[type inference](./type-inference.html) section, unification is
basically a procedure with a signature like this:
```txt
Unify(A, B) = Result<(Subgoals, RegionConstraints), NoSolution>
```
In other words, we try to unify two things A and B. That procedure
might just fail, in which case we get back `Err(NoSolution)`. This

View File

@ -19,12 +19,16 @@ In a traditional Prolog system, when you start a query, the solver
will run off and start supplying you with every possible answer it can
find. So given something like this:
```txt
?- Vec<i32>: AsRef<?U>
```
The solver might answer:
```txt
Vec<i32>: AsRef<[i32]>
continue? (y/n)
```
This `continue` bit is interesting. The idea in Prolog is that the
solver is finding **all possible** instantiations of your query that
@ -35,34 +39,42 @@ response with our original query -- Rust's solver gives back a
substitution instead). If we were to hit `y`, the solver might then
give us another possible answer:
```txt
Vec<i32>: AsRef<Vec<i32>>
continue? (y/n)
```
This answer derives from the fact that there is a reflexive impl
(`impl<T> AsRef<T> for T`) for `AsRef`. If we were to hit `y` again,
then we might get back a negative response:
```txt
no
```
Naturally, in some cases, there may be no possible answers, and hence
the solver will just give me back `no` right away:
```txt
?- Box<i32>: Copy
no
```
In some cases, there might be an infinite number of responses. So for
example if I gave this query, and I kept hitting `y`, then the solver
would never stop giving me back answers:
```txt
?- Vec<?U>: Clone
Vec<i32>: Clone
continue? (y/n)
Vec<Box<i32>>: Clone
continue? (y/n)
Vec<Box<Box<i32>>>: Clone
continue? (y/n)
Vec<Box<Box<Box<i32>>>>: Clone
continue? (y/n)
```
As you can imagine, the solver will gleefully keep adding another
layer of `Box` until we ask it to stop, or it runs out of memory.
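Each of those answers corresponds to an ordinary, valid instantiation; for example, this sketch exercises one of the infinitely many solutions:

```rust
fn main() {
    // `Vec<Box<Box<i32>>>: Clone` is one of the infinitely many
    // solutions to `Vec<?U>: Clone` enumerated above.
    let v: Vec<Box<Box<i32>>> = vec![Box::new(Box::new(1))];
    let _w = v.clone();
}
```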
@ -70,12 +82,16 @@ layer of `Box` until we ask it to stop, or it runs out of memory.
Another interesting thing is that queries might still have variables
in them. For example:
```txt
?- Rc<?T>: Clone
```
might produce the answer:
```txt
Rc<?T>: Clone
continue? (y/n)
```
After all, `Rc<?T>: Clone` is true **no matter what type `?T` is**.
@ -132,7 +148,7 @@ impls; among them, there are these two (for clarity, I've written the
[borrow]: https://doc.rust-lang.org/std/borrow/trait.Borrow.html
```rust,ignore
impl<T> Borrow<T> for T where T: ?Sized
impl<T> Borrow<[T]> for Vec<T> where T: Sized
```
@ -140,7 +156,7 @@ impl<T> Borrow<[T]> for Vec<T> where T: Sized
**Example 1.** Imagine we are type-checking this (rather artificial)
bit of code:
```rust,ignore
fn foo<A, B>(a: A, vec_b: Option<B>) where A: Borrow<B> { }
fn main() {
@ -185,7 +201,7 @@ other sources, in which case we can try the trait query again.
**Example 2.** We can now extend our previous example a bit,
and assign a value to `u`:
```rust,ignore
fn foo<A, B>(a: A, vec_b: Option<B>) where A: Borrow<B> { }
fn main() {
@ -210,11 +226,15 @@ Let's suppose that the type checker decides to revisit the
Borrow<?U>`. `?U` is no longer an unbound inference variable; it now
has a value, `Vec<?V>`. So, if we "refresh" the query with that value, we get:
```txt
Vec<?T>: Borrow<Vec<?V>>
```
This time, there is only one impl that applies, the reflexive impl:
```txt
impl<T> Borrow<T> for T where T: ?Sized
```
Therefore, the trait checker will answer:

View File

@ -42,14 +42,18 @@ This query contains two unbound variables, but it also contains the
lifetime `'static`. The trait system generally ignores all lifetimes
and treats them equally, so when canonicalizing, we will *also*
replace any [free lifetime](./appendix-background.html#free-vs-bound) with a
canonical variable. Therefore, we get the following result:
```txt
?0: Foo<'?1, ?2>
```
Sometimes we write this differently, like so:
```txt
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
This `for<>` gives some information about each of the canonical
variables within. In this case, each `T` indicates a type variable,
so `?0` and `?2` are types; the `L` indicates a lifetime variable, so
@ -57,8 +61,10 @@ so `?0` and `?2` are types; the `L` indicates a lifetime varibale, so
`CanonicalVarValues` array OV with the "original values" for each
canonicalized variable:
```txt
[?A, 'static, ?B]
```
We'll need this vector OV later, when we process the query response.
## Executing the query
@ -70,18 +76,24 @@ we create a substitution S from the canonical form containing a fresh
inference variable (of suitable kind) for each canonical variable.
So, for our example query:
```txt
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
the substitution S might be:
```txt
S = [?A, '?B, ?C]
```
We can then replace the bound canonical variables (`?0`, etc) with
these inference variables, yielding the following fully instantiated
query:
```txt
?A: Foo<'?B, ?C>
```
Remember that substitution S though! We're going to need it later.
OK, now that we have a fresh inference context and an instantiated
@ -93,7 +105,7 @@ created. For example, if there were only one impl of `Foo`, like so:
[cqqr]: ./traits-canonical-queries.html#query-response
```rust,ignore
impl<'a, X> Foo<'a, X> for Vec<X>
where X: 'a
{ ... }
@ -123,39 +135,49 @@ result substitution `var_values`, and some region constraints. To
create this, we wind up re-using the substitution S that we created
when first instantiating our query. To refresh your memory, we had a query
```txt
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
for which we made a substitution S:
```txt
S = [?A, '?B, ?C]
```
We then did some work which unified some of those variables with other things.
If we "refresh" S with the latest results, we get:
```txt
S = [Vec<?E>, '?D, ?E]
```
These are precisely the new values for the three input variables from
our original query. Note though that they include some new variables
(like `?E`). We can make those go away by canonicalizing again! We don't
just canonicalize S, though, we canonicalize the whole query response QR:
```txt
QR = {
certainty: Proven, // or whatever
var_values: [Vec<?E>, '?D, ?E] // this is S
region_constraints: [?E: '?D], // from the impl
value: (), // for our purposes, just (), but
// in some cases this might have
// a type or other info
}
```
The result would be as follows:
```txt
Canonical(QR) = for<T, L> {
certainty: Proven,
var_values: [Vec<?0>, '?1, ?2]
region_constraints: [?2: '?1],
value: (),
}
```
(One subtle point: when we canonicalize the query **result**, we do not
use any special treatment for free lifetimes. Note that both
@ -172,20 +194,26 @@ In the previous section we produced a canonical query result. We now have
to apply that result in our original context. If you recall, way back in the
beginning, we were trying to prove this query:
```txt
?A: Foo<'static, ?B>
```
We canonicalized that into this:
```txt
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
and now we got back a canonical response:
```txt
for<T, L> {
certainty: Proven,
var_values: [Vec<?0>, '?1, ?2]
region_constraints: [?2: '?1],
value: (),
}
```
We now want to apply that response to our context. Conceptually, how
we do that is to (a) instantiate each of the canonical variables in
@ -193,19 +221,19 @@ the result with a fresh inference variable, (b) unify the values in
the result with the original values, and then (c) record the region
constraints for later. Doing step (a) would yield a result of
```txt
{
certainty: Proven,
var_values: [Vec<?C>, '?D, ?C]
^^ ^^^ fresh inference variables
region_constraints: [?C: '?D],
value: (),
}
```
Step (b) would then unify:
```txt
?A with Vec<?C>
'static with '?D
?B with ?C

View File

@ -12,22 +12,24 @@ a few new superpowers.
In Rust's solver, **goals** and **clauses** have the following forms
(note that the two definitions reference one another):
```txt
Goal = DomainGoal // defined in the section below
| Goal && Goal
| Goal || Goal
| exists<K> { Goal } // existential quantification
| forall<K> { Goal } // universal quantification
| if (Clause) { Goal } // implication
| true // something that's trivially true
| ambiguous // something that's never provable
Clause = DomainGoal
| Clause :- Goal // if can prove Goal, then Clause is true
| Clause && Clause
| forall<K> { Clause }
K = <type> // a "kind"
| <lifetime>
```
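As a rough illustration (hypothetical type names, not the compiler's actual data structures), the grammar above could be transcribed as a pair of mutually recursive Rust enums:

```rust
// Hypothetical transcription of the goal/clause grammar.
enum Goal {
    Domain(DomainGoal),
    And(Box<Goal>, Box<Goal>),
    Or(Box<Goal>, Box<Goal>),
    Exists(Kind, Box<Goal>),         // exists<K> { Goal }
    Forall(Kind, Box<Goal>),         // forall<K> { Goal }
    Implies(Box<Clause>, Box<Goal>), // if (Clause) { Goal }
    True,
    Ambiguous,
}

enum Clause {
    Domain(DomainGoal),
    Implication(Box<Clause>, Box<Goal>), // Clause :- Goal
    And(Box<Clause>, Box<Clause>),
    Forall(Kind, Box<Clause>),
}

enum Kind { Type, Lifetime }

struct DomainGoal; // the domain-goal variants are defined below
```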
The proof procedure for these sorts of goals is actually quite
straightforward. Essentially, it's a form of depth-first search. The
@ -47,8 +49,10 @@ To define the set of *domain goals* in our system, we need to first
introduce a few simple formulations. A **trait reference** consists of
the name of a trait along with a suitable set of inputs P0..Pn:
```txt
TraitRef = P0: TraitName<P1..Pn>
```
So, for example, `u32: Display` is a trait reference, as is `Vec<T>:
IntoIterator`. Note that Rust surface syntax also permits some extra
things, like associated type bindings (`Vec<T>: IntoIterator<Item =
@ -59,20 +63,24 @@ T>`), that are not part of a trait reference.
A **projection** consists of an associated item reference along with
its inputs P0..Pm:
```txt
Projection = <P0 as TraitName<P1..Pn>>::AssocItem<Pn+1..Pm>
```
Given that, we can define a `DomainGoal` as follows:
```txt
DomainGoal = Implemented(TraitRef)
| ProjectionEq(Projection = Type)
| Normalize(Projection -> Type)
| FromEnv(TraitRef)
| FromEnv(Projection = Type)
| WellFormed(Type)
| WellFormed(TraitRef)
| WellFormed(Projection = Type)
| Outlives(Type, Region)
| Outlives(Region, Region)
```
- `Implemented(TraitRef)` -- true if the given trait is
implemented for the given input types and lifetimes
@ -104,9 +112,11 @@ Given that, we can define a `DomainGoal` as follows:
Most goals in our system are "inductive". In an inductive goal,
circular reasoning is disallowed. Consider this example clause:
```txt
Implemented(Foo: Bar) :-
Implemented(Foo: Bar).
```
Considered inductively, this clause is useless: if we are trying to
prove `Implemented(Foo: Bar)`, we would then recursively have to prove
`Implemented(Foo: Bar)`, and that cycle would continue ad infinitum
@ -130,8 +140,10 @@ struct Foo {
The default rules for auto traits say that `Foo` is `Send` if the
types of its fields are `Send`. Therefore, we would have a rule like
```txt
Implemented(Foo: Send) :-
Implemented(Option<Box<Foo>>: Send).
```
As you can probably imagine, proving that `Option<Box<Foo>>: Send` is
going to wind up circularly requiring us to prove that `Foo: Send` again.
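Concretely, the type in question and a check that relies on the co-inductive cycle look like this sketch:

```rust
// `Foo: Send` depends on `Option<Box<Foo>>: Send`, which circles
// back to `Foo: Send` -- a cycle that co-inductive reasoning accepts.
struct Foo {
    next: Option<Box<Foo>>,
}

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<Foo>(); // compiles: the cycle counts as success
}
```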

View File

@ -49,7 +49,7 @@ standard [ui test] mechanisms to check them. In this case, there is a
need only be a prefix of the error), but [the stderr file] contains
the full details:
```txt
error: Implemented(T: Foo) :- ProjectionEq(<T as std::iter::Iterator>::Item == i32), TypeOutlives(T \
: 'static), Implemented(T: std::iter::Iterator), Implemented(T: std::marker::Sized).
--> $DIR/lower_impl.rs:15:1

View File

@ -67,7 +67,7 @@ but Chalk isn't modeling those right now.
Given a trait definition
```rust,ignore
trait Trait<P1..Pn> // P0 == Self
where WC
{
@ -87,10 +87,12 @@ relationships between different kinds of domain goals. The first such
rule from the trait header creates the mapping between the `FromEnv`
and `Implemented` predicates:
```txt
// Rule Implemented-From-Env
forall<Self, P1..Pn> {
Implemented(Self: Trait<P1..Pn>) :- FromEnv(Self: Trait<P1..Pn>)
}
```
<a name="implied-bounds">
@ -101,17 +103,19 @@ The next few clauses have to do with implied bounds (see also
[RFC 2089]: https://rust-lang.github.io/rfcs/2089-implied-bounds.html
```txt
// Rule Implied-Bound-From-Trait
//
// For each where clause WC:
forall<Self, P1..Pn> {
FromEnv(WC) :- FromEnv(Self: Trait<P1..Pn)
}
```
This clause says that if we are assuming that the trait holds, then we can also
assume that its where-clauses hold. It's perhaps useful to see an example:
```rust,ignore
trait Eq: PartialEq { ... }
```
@ -145,7 +149,7 @@ all the where clauses that are transitively implied by `T: Trait`.
An example:
```rust,ignore
trait Foo: A + Bar { }
trait Bar: B + Foo { }
trait A { }
@ -180,7 +184,7 @@ items.
Given a trait that declares a (possibly generic) associated type:
```rust,ignore
trait Trait<P1..Pn> // P0 == Self
where WC
{
@ -190,39 +194,43 @@ where WC
We will produce a number of program clauses. The first two define
the rules by which `ProjectionEq` can succeed; these two clauses are discussed
in detail in the [section on associated types](./traits-associated-types.html),
but reproduced here for reference:
```txt
// Rule ProjectionEq-Normalize
//
// ProjectionEq can succeed by normalizing:
forall<Self, P1..Pn, Pn+1..Pm, U> {
ProjectionEq(<Self as Trait<P1..Pn>>::AssocType<Pn+1..Pm> = U) :-
Normalize(<Self as Trait<P1..Pn>>::AssocType<Pn+1..Pm> -> U)
}
// Rule ProjectionEq-Skolemize
//
// ProjectionEq can succeed by skolemizing, see "associated type"
// chapter for more:
forall<Self, P1..Pn, Pn+1..Pm> {
ProjectionEq(
<Self as Trait<P1..Pn>>::AssocType<Pn+1..Pm> =
(Trait::AssocType)<Self, P1..Pn, Pn+1..Pm>
) :-
// But only if the trait is implemented, and the conditions from
// the associated type are met as well:
Implemented(Self: Trait<P1..Pn>)
&& WC1
}
```
The next rule covers implied bounds for the projection. In particular,
the `Bounds` declared on the associated type must be proven to hold to
show that the impl is well-formed, and hence we can rely on them
elsewhere.
```txt
// XXX how exactly should we set this up? Have to be careful;
// presumably this has to be a kind of `FromEnv` setup.
```
### Lowering function and constant declarations
@ -234,7 +242,7 @@ values below](#constant-vals) for more details.
Given an impl of a trait:
```rust,ignore
impl<P0..Pn> Trait<A1..An> for A0
where WC
{
@ -245,10 +253,12 @@ where WC
Let `TraitRef` be the trait reference `A0: Trait<A1..An>`. Then we
will create the following rules:
```txt
// Rule Implemented-From-Impl
forall<P0..Pn> {
Implemented(TraitRef) :- WC
}
```
In addition, we will lower all of the *impl items*.
@ -258,7 +268,7 @@ In addition, we will lower all of the *impl items*.
Given an impl that contains:
```rust,ignore
impl<P0..Pn> Trait<A1..An> for A0
where WC
{
@ -268,13 +278,15 @@ where WC
We produce the following rule:
```txt
// Rule Normalize-From-Impl
forall<P0..Pm> {
forall<Pn+1..Pm> {
Normalize(<A0 as Trait<A1..An>>::AssocType<Pn+1..Pm> -> T) :-
WC && WC1
}
}
```
Note that `WC` and `WC1` both encode where-clauses that the impl can
rely on.

View File

@ -30,7 +30,7 @@ impl<T> Clone for Vec<T> where T: Clone { }
We could map these declarations to some Horn clauses, written in a
Prolog-like notation, as follows:
```txt
Clone(usize).
Clone(Vec<?T>) :- Clone(?T).
@ -51,18 +51,18 @@ so by applying the rules recursively:
- `Clone(Vec<Vec<usize>>)` is provable if:
- `Clone(Vec<usize>)` is provable if:
- `Clone(usize)` is provable. (Which it is, so we're all good.)
But now suppose we tried to prove that `Clone(Vec<Bar>)`. This would
fail (after all, I didn't give an impl of `Clone` for `Bar`):
- `Clone(Vec<Bar>)` is provable if:
- `Clone(Bar)` is provable. (But it is not, as there are no applicable rules.)
We can easily extend the example above to cover generic traits with
more than one input type. So imagine the `Eq<T>` trait, which declares
that `Self` is equatable with a value of type `T`:
```rust
```rust,ignore
trait Eq<T> { ... }
impl Eq<usize> for usize { }
impl<T: Eq<U>> Eq<Vec<U>> for Vec<T> { }
@ -70,12 +70,12 @@ impl<T: Eq<U>> Eq<Vec<U>> for Vec<T> { }
That could be mapped as follows:
```
```txt
Eq(usize, usize).
Eq(Vec<?T>, Vec<?U>) :- Eq(?T, ?U).
```
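For instance, proving `Eq(Vec<usize>, Vec<usize>)` proceeds just as the
`Clone` example did:

```txt
- Eq(Vec<usize>, Vec<usize>) is provable if:
  - Eq(usize, usize) is provable. (It is, by the first clause.)
```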
So far so good.
## Type-checking normal functions
@ -90,7 +90,7 @@ that we need to prove, and those come from type-checking.
Consider type-checking the function `foo()` here:
```rust
```rust,ignore
fn foo() { bar::<usize>() }
fn bar<U: Eq<U>>() { }
```
@ -105,7 +105,7 @@ If we wanted, we could write a Prolog predicate that defines the
conditions under which `bar()` can be called. We'll say that, when those
conditions are met, `bar()` is "well-formed":
```
```txt
barWellFormed(?U) :- Eq(?U, ?U).
```
@ -113,7 +113,7 @@ Then we can say that `foo()` type-checks if the reference to
`bar::<usize>` (that is, `bar()` applied to the type `usize`) is
well-formed:
```
```txt
fooTypeChecks :- barWellFormed(usize).
```
@ -122,7 +122,7 @@ If we try to prove the goal `fooTypeChecks`, it will succeed:
- `fooTypeChecks` is provable if:
- `barWellFormed(usize)`, which is provable if:
- `Eq(usize, usize)`, which is provable because of an impl.
Ok, so far so good. Let's move on to type-checking a more complex function.
## Type-checking generic functions: beyond Horn clauses
@ -134,7 +134,7 @@ a generic function, it turns out we need a stronger notion of goal than Prolog
can provide. To see what I'm talking about, let's revamp our previous
example to make `foo` generic:
```rust
```rust,ignore
fn foo<T: Eq<T>>() { bar::<T>() }
fn bar<U: Eq<U>>() { }
```
@ -144,7 +144,7 @@ To type-check the body of `foo`, we need to be able to hold the type
type-safe *for all types `T`*, not just for some specific type. We might express
this like so:
```
```txt
fooTypeChecks :-
// for all types T...
forall<T> {

View File

@ -10,7 +10,7 @@ The `tcx` ("typing context") is the central data structure in the
compiler. It is the context that you use to perform all manner of
queries. The struct `TyCtxt` defines a reference to this shared context:
```rust
```rust,ignore
tcx: TyCtxt<'a, 'gcx, 'tcx>
// -- ---- ----
// | | |
@ -47,7 +47,7 @@ for the `'gcx` and `'tcx` parameters of `TyCtxt`. Just to be a touch
confusing, we tend to use the name `'tcx` in such contexts. Here is an
example:
```rust
```rust,ignore
fn not_in_inference<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) {
// ---- ----
// Using the same lifetime here asserts
@ -59,7 +59,7 @@ fn not_in_inference<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) {
In contrast, if we want to write code that can be used during type inference, then
you need to declare a distinct `'gcx` and `'tcx` lifetime parameter:
```rust
```rust,ignore
fn maybe_in_inference<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, def_id: DefId) {
// ---- ----
// Using different lifetimes here means that
@ -74,7 +74,7 @@ Rust types are represented using the `Ty<'tcx>` defined in the `ty`
module (not to be confused with the `Ty` struct from [the HIR]). This
is in fact a simple type alias for a reference with `'tcx` lifetime:
```rust
```rust,ignore
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
@ -89,7 +89,7 @@ the rustc arenas (never e.g. on the stack).
One common operation on types is to **match** and see what kinds of
types they are. This is done by doing `match ty.sty`, sort of like this:
```rust
```rust,ignore
fn test_type<'tcx>(ty: Ty<'tcx>) {
match ty.sty {
ty::TyArray(elem_ty, len) => { ... }
@ -111,7 +111,7 @@ To allocate a new type, you can use the various `mk_` methods defined
on the `tcx`. These have names that correspond mostly to the various kinds
of type variants. For example:
```rust
```rust,ignore
let array_ty = tcx.mk_array(elem_ty, len * 2);
```
@ -158,7 +158,7 @@ module. Here are a few examples:
Although there is no hard and fast rule, the `ty` module tends to be used like
so:
```rust
```rust,ignore
use ty::{self, Ty, TyCtxt};
```

View File

@ -17,9 +17,9 @@ similar conversions for where-clauses and other bits of the function signature.
To try and get a sense for the difference, consider this function:
```rust
```rust,ignore
struct Foo { }
fn foo(x: Foo, y: self::Foo) { .. }
fn foo(x: Foo, y: self::Foo) { ... }
// ^^^ ^^^^^^^^^
```

View File

@ -21,7 +21,7 @@ signature, such as the `'a` in `for<'a> fn(&'a u32)`. A region is
You create and "enter" an inference context by doing something like
the following:
```rust
```rust,ignore
tcx.infer_ctxt().enter(|infcx| {
// Use the inference context `infcx` here.
})
@ -88,7 +88,7 @@ The most basic operations you can perform in the type inferencer is
recommended way to add an equality constraint is to use the `at`
method, roughly like so:
```rust
```rust,ignore
infcx.at(...).eq(t, u);
```
@ -159,7 +159,9 @@ is to first "generalize" `&'a i32` into a type with a region variable:
`&'?b i32`, and then unify `?T` with that (`?T = &'?b i32`). We then
relate this new variable with the original bound:
```txt
&'?b i32 <: &'a i32
```
This will result in a region constraint (see below) of `'?b: 'a`.
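As a sketch of where such a constraint could arise (the names here are
illustrative, not taken from the compiler):

```rust,ignore
fn take_ref<'a>(r: &'a i32) { }

fn caller() {
    // `mystery()` stands for any expression whose type is not yet
    // known, so `x` gets a fresh type variable `?T`.
    let x = mystery();
    // Passing `x` requires `?T <: &'a i32`, which triggers the
    // generalization step described above.
    take_ref(x);
}
```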
@ -176,12 +178,16 @@ eagerly unifying things, we simply collect constraints as we go, but
make (almost) no attempt to solve regions. These constraints have the
form of an "outlives" constraint:
```txt
'a: 'b
```
Actually the code tends to view them as a subregion relation, but it's the same
idea:
```txt
'b <= 'a
```
(There are various other kinds of constraints, such as "verifys"; see
the `region_constraints` module for details.)
@ -189,7 +195,9 @@ the `region_constraints` module for details.)
There is one case where we do some amount of eager unification. If you have an
equality constraint between two regions
```txt
'a = 'b
```
we will record that fact in a unification table. You can then use
`opportunistic_resolve_var` to convert `'b` to `'a` (or vice

View File

@ -54,7 +54,7 @@ constraints will be satisfied.
As a simple example, consider:
```rust
```rust,ignore
enum Option<A> { Some(A), None }
enum OptionalFn<B> { Some(|B|), None }
enum OptionalMap<C> { Some(|C| -> C), None }
@ -62,19 +62,23 @@ enum OptionalMap<C> { Some(|C| -> C), None }
Here, we will generate the constraints:
```txt
1. V(A) <= +
2. V(B) <= -
3. V(C) <= +
4. V(C) <= -
```
These indicate that (1) the variance of A must be at most covariant;
(2) the variance of B must be at most contravariant; and (3, 4) the
variance of C must be at most covariant *and* contravariant. All of these
results are based on a variance lattice defined as follows:
```txt
* Top (bivariant)
- +
o Bottom (invariant)
```
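For instance, `C` must be at most covariant *and* at most contravariant
(constraints 3 and 4 above), so we take the greatest lower bound of `+` and
`-` in this lattice:

```txt
V(C) <= +   and   V(C) <= -
GLB(+, -) = o     // hence C must be invariant
```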
Based on this lattice, the solution `V(A)=+`, `V(B)=-`, `V(C)=o` is the
optimal solution. Note that there is always a naive solution which
@ -85,8 +89,10 @@ is that the variance of a use site may itself be a function of the
variance of other type parameters. In full generality, our constraints
take the form:
```txt
V(X) <= Term
Term := + | - | * | o | V(X) | Term x Term
```
Here the notation `V(X)` indicates the variance of a type/region
parameter `X` with respect to its defining class. `Term x Term`
@ -101,7 +107,7 @@ represents the "variance transform" as defined in the paper:
If I have a struct or enum with where clauses:
```rust
```rust,ignore
struct Foo<T: Bar> { ... }
```
@ -170,9 +176,11 @@ another.
To see what I mean, consider a trait like so:
```rust
trait ConvertTo<A> {
fn convertTo(&self) -> A;
}
```
Intuitively, if we had one object `O=&ConvertTo<Object>` and another
`S=&ConvertTo<String>`, then `S <: O` because `String <: Object`
@ -200,20 +208,24 @@ But traits aren't only used with objects. They're also used when
deciding whether a given impl satisfies a given trait bound. To set the
scene here, imagine I had a function:
fn convertAll<A,T:ConvertTo<A>>(v: &[T]) {
...
}
```rust,ignore
fn convertAll<A,T:ConvertTo<A>>(v: &[T]) { ... }
```
Now imagine that I have an implementation of `ConvertTo` for `Object`:
```rust,ignore
impl ConvertTo<i32> for Object { ... }
```
And I want to call `convertAll` on an array of strings. Suppose
further that for whatever reason I specifically supply the value of
`String` for the type parameter `T`:
```rust,ignore
let mut vector = vec!["string", ...];
convertAll::<i32, String>(vector);
```
Is this legal? To put it another way, can we apply the `impl` for
`Object` to the type `String`? The answer is yes, but to see why
@ -222,11 +234,9 @@ we have to expand out what will happen:
- `convertAll` will create a pointer to one of the entries in the
vector, which will have type `&String`
- It will then call the impl of `convertTo()` that is intended
for use with objects. This has the type:
for use with objects. This has the type `fn(self: &Object) -> i32`.
fn(self: &Object) -> i32
It is ok to provide a value for `self` of type `&String` because
It is OK to provide a value for `self` of type `&String` because
`&String <: &Object`.
OK, so intuitively we want this to be legal, so let's bring this back
@ -238,11 +248,15 @@ Maybe it's helpful to think of a dictionary-passing implementation of
type classes. In that case, `convertAll()` takes an implicit parameter
representing the impl. In short, we *have* an impl of type:
```txt
V_O = ConvertTo<i32> for Object
```
and the function prototype expects an impl of type:
```txt
V_S = ConvertTo<i32> for String
```
As with any argument, this is legal if the type of the value given
(`V_O`) is a subtype of the type expected (`V_S`). So is `V_O <: V_S`?
@ -250,9 +264,11 @@ The answer will depend on the variance of the various parameters. In
this case, because the `Self` parameter is contravariant and `A` is
covariant, it means that:
```txt
V_O <: V_S iff
i32 <: i32
String <: Object
```
These conditions are satisfied and so we are happy.
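If it helps to see the dictionary-passing view in code, here is a minimal
Rust sketch; the struct, the explicit `dict` parameter, and the function
names are inventions for illustration, not how the compiler actually
represents impls:

```rust
// A "dictionary" standing in for some impl of `ConvertTo<A>`,
// modeled with a plain struct holding a fn pointer.
struct ConvertToDict<S, A> {
    convert_to: fn(&S) -> A,
}

// A desugared `convertAll`: the impl arrives as an explicit argument.
fn convert_all<S, A>(dict: &ConvertToDict<S, A>, v: &[S]) -> Vec<A> {
    v.iter().map(|item| (dict.convert_to)(item)).collect()
}

fn len_of(s: &String) -> usize {
    s.len()
}

fn main() {
    // `dict` plays the role of "ConvertTo<usize> for String", as a value:
    let dict = ConvertToDict { convert_to: len_of };
    let lengths = convert_all(&dict, &[String::from("hi")]);
    assert_eq!(lengths, vec![2]);
}
```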
@ -263,7 +279,9 @@ expressions -- must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:
```txt
<T as Trait> <: <U as Trait>
```
means that if I know that `T as Trait`, I also know that `U as
Trait`. Moreover, if you think of it as dictionary passing style,
@ -279,7 +297,7 @@ Another related reason is that if we didn't make traits with
associated types invariant, then projection is no longer a
function with a single result. Consider:
```
```rust,ignore
trait Identity { type Out; fn foo(&self); }
impl<T> Identity for T { type Out = T; ... }
```
@ -287,9 +305,11 @@ impl<T> Identity for T { type Out = T; ... }
Now if I have `<&'static () as Identity>::Out`, this can be
validly derived as `&'a ()` for any `'a`:
```txt
<&'a () as Identity> <: <&'static () as Identity>
  if &'static () <: &'a ()   -- Identity is contravariant in Self
if 'static : 'a -- Subtyping rules for relations
```
This change, on the other hand, means that `<&'static () as Identity>::Out` is
always `&'static ()` (which might then be upcast to `&'a ()`,