Replace txt with text

varkor 2018-04-06 19:35:21 +01:00 committed by Who? Me?!
parent 43341c8894
commit 8889dcd15d
17 changed files with 101 additions and 101 deletions

View File

@@ -177,7 +177,7 @@ foo.rs` (assuming you have a Rust program called `foo.rs`. You can also pass any
command line arguments that you would normally pass to rustc). When you run it
you'll see output similar to
-```txt
+```text
In crate: foo,
Found 12 uses of `println!`;

View File

@@ -19,7 +19,7 @@ compilation improves, that may change.)
The dependency structure of these crates is roughly a diamond:
-```txt
+```text
rustc_driver
   / | \
  /  |  \

View File

@@ -48,7 +48,7 @@ the graph. You can filter in three ways:
To filter, use the `RUST_DEP_GRAPH_FILTER` environment variable, which should
look like one of the following:
-```txt
+```text
source_filter     // nodes originating from source_filter
-> target_filter  // nodes that can reach target_filter
source_filter -> target_filter // nodes in between source_filter and target_filter
@@ -58,14 +58,14 @@ source_filter -> target_filter // nodes in between source_filter and target_filter
A node is considered to match a filter if all of those strings appear in its
label. So, for example:
-```txt
+```text
RUST_DEP_GRAPH_FILTER='-> TypeckTables'
```
would select the predecessors of all `TypeckTables` nodes. Usually though you
want the `TypeckTables` node for some particular fn, so you might write:
-```txt
+```text
RUST_DEP_GRAPH_FILTER='-> TypeckTables & bar'
```
@@ -75,7 +75,7 @@ with `bar` in their name.
Perhaps you are finding that when you change `foo` you need to re-type-check
`bar`, but you don't think you should have to. In that case, you might do:
-```txt
+```text
RUST_DEP_GRAPH_FILTER='Hir & foo -> TypeckTables & bar'
```
@@ -105,7 +105,7 @@ check of `bar` and you don't think there should be. You dump the
dep-graph as described in the previous section and open `dep-graph.txt`
to see something like:
-```txt
+```text
Hir(foo) -> Collect(bar)
Collect(bar) -> TypeckTables(bar)
```
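The matching rule described in this hunk — a node matches when every `&`-separated string appears in its label — can be sketched with a small helper (the function name and label format here are ours, not the compiler's actual implementation):

```rust
// Hypothetical sketch of the filter-matching rule: a node matches a filter
// if every `&`-separated fragment of the filter appears in the node's label.
fn node_matches(label: &str, filter: &str) -> bool {
    filter.split('&').all(|frag| label.contains(frag.trim()))
}

fn main() {
    // `TypeckTables & bar` matches the TypeckTables node for fn `bar`...
    assert!(node_matches("TypeckTables(bar)", "TypeckTables & bar"));
    // ...but not the one for fn `baz`.
    assert!(!node_matches("TypeckTables(baz)", "TypeckTables & bar"));
}
```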

View File

@@ -99,7 +99,7 @@ So, let's continue our example. Imagine that we were calling a method
that defines it with `&self` for the type `Rc<U>` as well as a method
on the type `Box` that defines `Foo` but with `&mut self`. Then we
might have two candidates:
-```txt
+```text
&Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<[T; 3]>`
&mut Box<[T; 3]> from the inherent impl on `Box<U>` where `U=[T; 3]`
```

View File

@@ -52,7 +52,7 @@ fn main() {
The files have names like `rustc.main.000-000.CleanEndRegions.after.mir`. These
names have a number of parts:
-```txt
+```text
rustc.main.000-000.CleanEndRegions.after.mir
      ---- --- --- --------------- ----- either before or after
      |    |   |   name of the pass
@@ -159,7 +159,7 @@ ensuring that the reads have already happened (remember that
[queries are memoized](./query.html), so executing a query twice
simply loads from a cache the second time):
-```txt
+```text
mir_const(D) --read-by--> mir_const_qualif(D)
    |                          ^
stolen-by                      |
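The filename structure annotated in the first hunk splits mechanically on `.`; a minimal sketch (our own helper, not compiler code):

```rust
// Minimal sketch: split a MIR dump filename into the parts annotated above.
fn dump_name_parts(name: &str) -> Vec<&str> {
    name.split('.').collect()
}

fn main() {
    let parts = dump_name_parts("rustc.main.000-000.CleanEndRegions.after.mir");
    assert_eq!(parts[1], "main");            // item the MIR belongs to
    assert_eq!(parts[2], "000-000");         // pass-suite / pass indices
    assert_eq!(parts[3], "CleanEndRegions"); // name of the pass
    assert_eq!(parts[4], "after");           // either `before` or `after`
}
```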

View File

@@ -122,7 +122,7 @@ stack, for example). But *how* do we reject it and *why*?
When we type-check `main`, and in particular the call `bar(foo)`, we
are going to wind up with a subtyping relationship like this one:
-```txt
+```text
fn(&'static u32) <: for<'a> fn(&'a u32)
----------------    -------------------
the type of `foo`   the type `bar` expects
@@ -137,7 +137,7 @@ regions" -- they represent, basically, "some unknown region".
Once we've done that replacement, we have the following relation:
-```txt
+```text
fn(&'static u32) <: fn(&'!1 u32)
```
@@ -151,7 +151,7 @@ subtypes, we check if their arguments have the desired relationship
(fn arguments are [contravariant](./appendix-background.html#variance), so
we swap the left and right here):
-```txt
+```text
&'!1 u32 <: &'static u32
```
@@ -201,7 +201,7 @@ Here, the name `'b` is not part of the root universe. Instead, when we
"enter" into this `for<'b>` (e.g., by skolemizing it), we will create
a child universe of the root, let's call it U1:
-```txt
+```text
U0 (root universe)
└─ U1 (child universe)
@@ -224,7 +224,7 @@ When we enter *this* type, we will again create a new universe, which
we'll call `U2`. Its parent will be the root universe, and U1 will be
its sibling:
-```txt
+```text
U0 (root universe)
├─ U1 (child universe)
@@ -263,7 +263,7 @@ children, that inference variable X would have to be in U0. And since
X is in U0, it cannot name anything from U1 (or U2). This is perhaps easiest
to see by using a kind of generic "logic" example:
-```txt
+```text
exists<X> {
  forall<Y> { ... /* Y is in U1 ... */ }
  forall<Z> { ... /* Z is in U2 ... */ }
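The visibility rule from that hunk — a variable in U0 cannot name placeholders from U1 or U2 — amounts to walking the universe tree upward; a toy model (our own encoding, not rustc's types):

```rust
// Toy model: `parent[u]` is the parent universe of `u` (None for the root).
// A variable in universe `from` can name a placeholder from `target` only
// if `target` is `from` itself or one of its ancestors.
fn can_name(parent: &[Option<usize>], from: usize, target: usize) -> bool {
    let mut cur = Some(from);
    while let Some(u) = cur {
        if u == target {
            return true;
        }
        cur = parent[u];
    }
    false
}

fn main() {
    // U0 is the root; U1 and U2 are sibling children of U0.
    let parent = [None, Some(0), Some(0)];
    assert!(can_name(&parent, 1, 0));  // U1 can name things from U0
    assert!(!can_name(&parent, 0, 1)); // U0 cannot name things from U1
    assert!(!can_name(&parent, 1, 2)); // siblings cannot see each other
}
```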
@@ -296,7 +296,7 @@ does not say region elements **will** appear.
In the region inference engine, outlives constraints have the form:
-```txt
+```text
V1: V2 @ P
```
@@ -346,7 +346,7 @@ for universal regions from the fn signature.)
Put another way, the "universal regions" check can be considered to be
checking constraints like:
-```txt
+```text
{skol(1)}: V1
```
@@ -358,13 +358,13 @@ made to represent the `!1` region.
OK, so far so good. Now let's walk through what would happen with our
first example:
-```txt
+```text
fn(&'static u32) <: fn(&'!1 u32) @ P // this point P is not imp't here
```
The region inference engine will create a region element domain like this:
-```txt
+```text
{ CFG; end('static); skol(1) }
  ---  ------------  ------- from the universe `!1`
   |   'static is always in scope
@@ -375,20 +375,20 @@ It will always create two universal variables, one representing
`'static` and one representing `'!1`. Let's call them Vs and V1. They
will have initial values like so:
-```txt
+```text
Vs = { CFG; end('static) } // it is in U0, so can't name anything else
V1 = { skol(1) }
```
From the subtyping constraint above, we would have an outlives constraint like
-```txt
+```text
'!1: 'static @ P
```
To process this, we would grow the value of V1 to include all of Vs:
-```txt
+```text
Vs = { CFG; end('static) }
V1 = { CFG; end('static), skol(1) }
```
@@ -405,7 +405,7 @@ In this case, `V1` *did* grow too large -- it is not known to outlive
What about this subtyping relationship?
-```txt
+```text
for<'a> fn(&'a u32, &'a u32)
<:
for<'b, 'c> fn(&'b u32, &'c u32)
@@ -413,7 +413,7 @@ for<'b, 'c> fn(&'b u32, &'c u32)
Here we would skolemize the supertype, as before, yielding:
-```txt
+```text
for<'a> fn(&'a u32, &'a u32)
<:
fn(&'!1 u32, &'!2 u32)
@@ -423,7 +423,7 @@ then we instantiate the variable on the left-hand side with an
existential in universe U2, yielding the following (`?n` is a notation
for an existential variable):
-```txt
+```text
fn(&'?3 u32, &'?3 u32)
<:
fn(&'!1 u32, &'!2 u32)
@@ -431,14 +431,14 @@ fn(&'!1 u32, &'!2 u32)
Then we break this down further:
-```txt
+```text
&'!1 u32 <: &'?3 u32
&'!2 u32 <: &'?3 u32
```
and even further, yield up our region constraints:
-```txt
+```text
'!1: '?3
'!2: '?3
```
@@ -465,7 +465,7 @@ common lifetime of our arguments. -nmatsakis)
Let's look at one last example. We'll extend the previous one to have
a return type:
-```txt
+```text
for<'a> fn(&'a u32, &'a u32) -> &'a u32
<:
for<'b, 'c> fn(&'b u32, &'c u32) -> &'b u32
@@ -478,7 +478,7 @@ first one. That is unsound. Let's see how it plays out.
First, we skolemize the supertype:
-```txt
+```text
for<'a> fn(&'a u32, &'a u32) -> &'a u32
<:
fn(&'!1 u32, &'!2 u32) -> &'!1 u32
@@ -486,7 +486,7 @@ fn(&'!1 u32, &'!2 u32) -> &'!1 u32
Then we instantiate the subtype with existentials (in U2):
-```txt
+```text
fn(&'?3 u32, &'?3 u32) -> &'?3 u32
<:
fn(&'!1 u32, &'!2 u32) -> &'!1 u32
@@ -494,7 +494,7 @@ fn(&'!1 u32, &'!2 u32) -> &'!1 u32
And now we create the subtyping relationships:
-```txt
+```text
&'!1 u32 <: &'?3 u32 // arg 1
&'!2 u32 <: &'?3 u32 // arg 2
&'?3 u32 <: &'!1 u32 // return type
@@ -503,7 +503,7 @@ And now we create the subtyping relationships:
And finally the outlives relationships. Here, let V1, V2, and V3 be the
variables we assign to `!1`, `!2`, and `?3` respectively:
-```txt
+```text
V1: V3
V2: V3
V3: V1
@@ -511,7 +511,7 @@ V3: V1
Those variables will have these initial values:
-```txt
+```text
V1 in U1 = {skol(1)}
V2 in U2 = {skol(2)}
V3 in U2 = {}
@@ -520,14 +520,14 @@ V3 in U2 = {}
Now because of the `V3: V1` constraint, we have to add `skol(1)` into `V3` (and
indeed it is visible from `V3`), so we get:
-```txt
+```text
V3 in U2 = {skol(1)}
```
then we have this constraint `V2: V3`, so we wind up having to enlarge
`V2` to include `skol(1)` (which it can also see):
-```txt
+```text
V2 in U2 = {skol(1), skol(2)}
```
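The growth steps in that hunk — processing `V3: V1` and then `V2: V3` by enlarging the outliving variable's value — are just set unions; a small sketch using string labels for region elements (our own encoding, not the region inference engine's):

```rust
use std::collections::BTreeSet;

// Sketch: to process `A: B`, grow A's value to include all of B's elements.
fn outlives(values: &mut Vec<BTreeSet<&'static str>>, a: usize, b: usize) {
    let b_elems = values[b].clone();
    values[a].extend(b_elems);
}

fn main() {
    // Initial values: V1 = {skol(1)}, V2 = {skol(2)}, V3 = {}
    let mut v = vec![
        BTreeSet::from(["skol(1)"]),
        BTreeSet::from(["skol(2)"]),
        BTreeSet::new(),
    ];
    outlives(&mut v, 2, 0); // V3: V1  =>  V3 = {skol(1)}
    outlives(&mut v, 1, 2); // V2: V3  =>  V2 = {skol(1), skol(2)}
    assert!(v[2].contains("skol(1)"));
    assert_eq!(v[1].len(), 2);
}
```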

View File

@@ -159,7 +159,7 @@ _3 = &mut _1;
Assignments in general have the form:
-```txt
+```text
<Place> = <Rvalue>
```
@@ -169,7 +169,7 @@ value: in this case, the rvalue is a mutable borrow expression, which
looks like `&mut <Place>`. So we can kind of define a grammar for
rvalues like so:
-```txt
+```text
<Rvalue>  = & (mut)? <Place>
          | <Operand> + <Operand>
          | <Operand> - <Operand>
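That little grammar could be transcribed into a Rust enum like this (a simplified toy, not the compiler's actual `Rvalue` type — operands are plain strings here):

```rust
// Toy transcription of the rvalue grammar above; real MIR types are richer.
#[derive(Debug)]
enum Rvalue {
    Ref { mutable: bool, place: String }, // & (mut)? <Place>
    Add(String, String),                  // <Operand> + <Operand>
    Sub(String, String),                  // <Operand> - <Operand>
}

fn describe(rv: &Rvalue) -> String {
    match rv {
        Rvalue::Ref { mutable: true, place } => format!("&mut {}", place),
        Rvalue::Ref { mutable: false, place } => format!("& {}", place),
        Rvalue::Add(a, b) => format!("{} + {}", a, b),
        Rvalue::Sub(a, b) => format!("{} - {}", a, b),
    }
}

fn main() {
    let rv = Rvalue::Ref { mutable: true, place: "_1".to_string() };
    assert_eq!(describe(&rv), "&mut _1"); // matches the `_3 = &mut _1` example
}
```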

View File

@@ -298,7 +298,7 @@ default regex flavor provided by `regex` crate.
The corresponding reference file will use the normalized output to test both
32-bit and 64-bit platforms:
-```txt
+```text
...
   |
   = note: source type: fn() ($PTR bits)

View File

@@ -51,7 +51,7 @@ In our logic, normalization is defined by a predicate
impls. For example, the `impl` of `IntoIterator` for `Option<T>` that
we saw above would be lowered to a program clause like so:
-```txt
+```text
forall<T> {
    Normalize(<Option<T> as IntoIterator>::Item -> T)
}
@@ -101,7 +101,7 @@ consider an associated type projection equal to another type?":
We now introduce the `ProjectionEq` predicate to bring those two cases
together. The `ProjectionEq` predicate looks like so:
-```txt
+```text
ProjectionEq(<T as IntoIterator>::Item = U)
```
@@ -109,7 +109,7 @@ and we will see that it can be proven *either* via normalization or
skolemization. As part of lowering an associated type declaration from
some trait, we create two program clauses for `ProjectionEq`:
-```txt
+```text
forall<T, U> {
    ProjectionEq(<T as IntoIterator>::Item = U) :-
        Normalize(<T as IntoIterator>::Item -> U)
@@ -130,7 +130,7 @@ with unification. As described in the
[type inference](./type-inference.html) section, unification is
basically a procedure with a signature like this:
-```txt
+```text
Unify(A, B) = Result<(Subgoals, RegionConstraints), NoSolution>
```

View File

@@ -19,13 +19,13 @@ In a traditional Prolog system, when you start a query, the solver
will run off and start supplying you with every possible answer it can
find. So given something like this:
-```txt
+```text
?- Vec<i32>: AsRef<?U>
```
The solver might answer:
-```txt
+```text
Vec<i32>: AsRef<[i32]>
continue? (y/n)
```
@@ -39,7 +39,7 @@ response with our original query -- Rust's solver gives back a
substitution instead). If we were to hit `y`, the solver might then
give us another possible answer:
-```txt
+```text
Vec<i32>: AsRef<Vec<i32>>
continue? (y/n)
```
@@ -48,14 +48,14 @@ This answer derives from the fact that there is a reflexive impl
(`impl<T> AsRef<T> for T`) for `AsRef`. If we were to hit `y` again,
then we might get back a negative response:
-```txt
+```text
no
```
Naturally, in some cases, there may be no possible answers, and hence
the solver will just give me back `no` right away:
-```txt
+```text
?- Box<i32>: Copy
no
```
@@ -64,7 +64,7 @@ In some cases, there might be an infinite number of responses. So for
example if I gave this query, and I kept hitting `y`, then the solver
would never stop giving me back answers:
-```txt
+```text
?- Vec<?U>: Clone
Vec<i32>: Clone
continue? (y/n)
@@ -82,13 +82,13 @@ layer of `Box` until we ask it to stop, or it runs out of memory.
Another interesting thing is that queries might still have variables
in them. For example:
-```txt
+```text
?- Rc<?T>: Clone
```
might produce the answer:
-```txt
+```text
Rc<?T>: Clone
continue? (y/n)
```
@@ -226,13 +226,13 @@ Let's suppose that the type checker decides to revisit the
Borrow<?U>`. `?U` is no longer an unbound inference variable; it now
has a value, `Vec<?V>`. So, if we "refresh" the query with that value, we get:
-```txt
+```text
Vec<?T>: Borrow<Vec<?V>>
```
This time, there is only one impl that applies, the reflexive impl:
-```txt
+```text
impl<T> Borrow<T> for T where T: ?Sized
```

View File

@@ -44,13 +44,13 @@ and treats them equally, so when canonicalizing, we will *also*
replace any [free lifetime](./appendix-background.html#free-vs-bound) with a
canonical variable. Therefore, we get the following result:
-```txt
+```text
?0: Foo<'?1, ?2>
```
Sometimes we write this differently, like so:
-```txt
+```text
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
@@ -61,7 +61,7 @@ so `?0` and `?2` are types; the `L` indicates a lifetime variable, so
`CanonicalVarValues` array OV with the "original values" for each
canonicalized variable:
-```txt
+```text
[?A, 'static, ?B]
```
@@ -76,13 +76,13 @@ we create a substitution S from the canonical form containing a fresh
inference variable (of suitable kind) for each canonical variable.
So, for our example query:
-```txt
+```text
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
the substitution S might be:
-```txt
+```text
S = [?A, '?B, ?C]
```
@@ -90,7 +90,7 @@ We can then replace the bound canonical variables (`?0`, etc) with
these inference variables, yielding the following fully instantiated
query:
-```txt
+```text
?A: Foo<'?B, ?C>
```
@@ -135,20 +135,20 @@ result substitution `var_values`, and some region constraints. To
create this, we wind up re-using the substitution S that we created
when first instantiating our query. To refresh your memory, we had a query
-```txt
+```text
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
for which we made a substitution S:
-```txt
+```text
S = [?A, '?B, ?C]
```
We then did some work which unified some of those variables with other things.
If we "refresh" S with the latest results, we get:
-```txt
+```text
S = [Vec<?E>, '?D, ?E]
```
@@ -157,7 +157,7 @@ our original query. Note though that they include some new variables
(like `?E`). We can make those go away by canonicalizing again! We don't
just canonicalize S, though, we canonicalize the whole query response QR:
-```txt
+```text
QR = {
    certainty: Proven,             // or whatever
    var_values: [Vec<?E>, '?D, ?E] // this is S
@@ -170,7 +170,7 @@ QR = {
The result would be as follows:
-```txt
+```text
Canonical(QR) = for<T, L> {
    certainty: Proven,
    var_values: [Vec<?0>, '?1, ?2]
@@ -194,19 +194,19 @@ In the previous section we produced a canonical query result. We now have
to apply that result in our original context. If you recall, way back in the
beginning, we were trying to prove this query:
-```txt
+```text
?A: Foo<'static, ?B>
```
We canonicalized that into this:
-```txt
+```text
for<T,L,T> { ?0: Foo<'?1, ?2> }
```
and now we got back a canonical response:
-```txt
+```text
for<T, L> {
    certainty: Proven,
    var_values: [Vec<?0>, '?1, ?2]
@@ -221,7 +221,7 @@ the result with a fresh inference variable, (b) unify the values in
the result with the original values, and then (c) record the region
constraints for later. Doing step (a) would yield a result of
-```txt
+```text
{
    certainty: Proven,
    var_values: [Vec<?C>, '?D, ?C]
@@ -233,7 +233,7 @@ constraints for later. Doing step (a) would yield a result of
Step (b) would then unify:
-```txt
+```text
?A with Vec<?C>
'static with '?D
?B with ?C
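The canonicalization step these hunks describe — replacing each distinct variable with `?0`, `?1`, … in order of first appearance while remembering the original values — can be sketched with variables encoded as plain strings (our own toy encoding, not rustc's):

```rust
// Toy canonicalizer: map each distinct variable (e.g. "?A", "'static") to a
// canonical index, recording the original values as the `OV` array above.
fn canonicalize(vars: &[&str]) -> (Vec<String>, Vec<String>) {
    let mut originals: Vec<String> = Vec::new();
    let mut canonical = Vec::new();
    for v in vars {
        let idx = match originals.iter().position(|o| o == v) {
            Some(i) => i, // variable already seen: reuse its index
            None => {
                originals.push(v.to_string());
                originals.len() - 1
            }
        };
        canonical.push(format!("?{}", idx));
    }
    (canonical, originals)
}

fn main() {
    // ?A: Foo<'static, ?B>  canonicalizes to  ?0: Foo<'?1, ?2>
    let (canon, ov) = canonicalize(&["?A", "'static", "?B"]);
    assert_eq!(canon, ["?0", "?1", "?2"]);
    assert_eq!(ov, ["?A", "'static", "?B"]); // the original-values array OV
}
```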

View File

@@ -12,7 +12,7 @@ a few new superpowers.
In Rust's solver, **goals** and **clauses** have the following forms
(note that the two definitions reference one another):
-```txt
+```text
Goal = DomainGoal // defined in the section below
     | Goal && Goal
     | Goal || Goal
@@ -49,7 +49,7 @@ To define the set of *domain goals* in our system, we need to first
introduce a few simple formulations. A **trait reference** consists of
the name of a trait along with a suitable set of inputs P0..Pn:
-```txt
+```text
TraitRef = P0: TraitName<P1..Pn>
```
@@ -63,13 +63,13 @@ T>`), that are not part of a trait reference.
A **projection** consists of an associated item reference along with
its inputs P0..Pm:
-```txt
+```text
Projection = <P0 as TraitName<P1..Pn>>::AssocItem<Pn+1..Pm>
```
Given that, we can define a `DomainGoal` as follows:
-```txt
+```text
DomainGoal = Implemented(TraitRef)
           | ProjectionEq(Projection = Type)
           | Normalize(Projection -> Type)
@@ -112,7 +112,7 @@ DomainGoal = Implemented(TraitRef)
Most goals in our system are "inductive". In an inductive goal,
circular reasoning is disallowed. Consider this example clause:
-```txt
+```text
Implemented(Foo: Bar) :-
    Implemented(Foo: Bar).
```
@@ -140,7 +140,7 @@ struct Foo {
The default rules for auto traits say that `Foo` is `Send` if the
types of its fields are `Send`. Therefore, we would have a rule like
-```txt
+```text
Implemented(Foo: Send) :-
    Implemented(Option<Box<Foo>>: Send).
```
``` ```

View File

@@ -49,7 +49,7 @@ standard [ui test] mechanisms to check them. In this case, there is a
need only be a prefix of the error), but [the stderr file] contains
the full details:
-```txt
+```text
error: Implemented(T: Foo) :- ProjectionEq(<T as std::iter::Iterator>::Item == i32), TypeOutlives(T \
: 'static), Implemented(T: std::iter::Iterator), Implemented(T: std::marker::Sized).
  --> $DIR/lower_impl.rs:15:1

View File

@@ -87,7 +87,7 @@ relationships between different kinds of domain goals. The first such
rule from the trait header creates the mapping between the `FromEnv`
and `Implemented` predicates:
-```txt
+```text
// Rule Implemented-From-Env
forall<Self, P1..Pn> {
  Implemented(Self: Trait<P1..Pn>) :- FromEnv(Self: Trait<P1..Pn>)
@@ -103,7 +103,7 @@ The next few clauses have to do with implied bounds (see also
[RFC 2089]: https://rust-lang.github.io/rfcs/2089-implied-bounds.html
-```txt
+```text
// Rule Implied-Bound-From-Trait
//
// For each where clause WC:
@@ -130,7 +130,7 @@ clauses** but also things that follow from them.
The next rule is related; it defines what it means for a trait reference
to be **well-formed**:
-```txt
+```text
// Rule WellFormed-TraitRef
//
// For each where clause WC:
@@ -197,7 +197,7 @@ the rules by which `ProjectionEq` can succeed; these two clauses are discussed
in detail in the [section on associated types](./traits-associated-types.html),
but reproduced here for reference:
-```txt
+```text
// Rule ProjectionEq-Normalize
//
// ProjectionEq can succeed by normalizing:
@@ -227,7 +227,7 @@ the `Bounds` declared on the associated type must be proven to hold to
show that the impl is well-formed, and hence we can rely on them
elsewhere.
-```txt
+```text
// XXX how exactly should we set this up? Have to be careful;
// presumably this has to be a kind of `FromEnv` setup.
```
@@ -253,7 +253,7 @@ where WC
Let `TraitRef` be the trait reference `A0: Trait<A1..An>`. Then we
will create the following rules:
-```txt
+```text
// Rule Implemented-From-Impl
forall<P0..Pn> {
  Implemented(TraitRef) :- WC
@@ -278,7 +278,7 @@ where WC
We produce the following rule:
-```txt
+```text
// Rule Normalize-From-Impl
forall<P0..Pm> {
  forall<Pn+1..Pm> {

View File

@ -30,7 +30,7 @@ impl<T> Clone for Vec<T> where T: Clone { }
We could map these declarations to some Horn clauses, written in a We could map these declarations to some Horn clauses, written in a
Prolog-like notation, as follows: Prolog-like notation, as follows:
```txt ```text
Clone(usize). Clone(usize).
Clone(Vec<?T>) :- Clone(?T). Clone(Vec<?T>) :- Clone(?T).
@ -70,7 +70,7 @@ impl<T: Eq<U>> Eq<Vec<U>> for Vec<T> { }
That could be mapped as follows: That could be mapped as follows:
```txt ```text
Eq(usize, usize). Eq(usize, usize).
Eq(Vec<?T>, Vec<?U>) :- Eq(?T, ?U). Eq(Vec<?T>, Vec<?U>) :- Eq(?T, ?U).
``` ```
@@ -105,7 +105,7 @@ If we wanted, we could write a Prolog predicate that defines the
conditions under which `bar()` can be called. We'll say that those
conditions are called being "well-formed":

-```txt
+```text
barWellFormed(?U) :- Eq(?U, ?U).
```
@@ -113,7 +113,7 @@ Then we can say that `foo()` type-checks if the reference to
`bar::<usize>` (that is, `bar()` applied to the type `usize`) is
well-formed:

-```txt
+```text
fooTypeChecks :- barWellFormed(usize).
```
@@ -144,7 +144,7 @@ To type-check the body of `foo`, we need to be able to hold the type
type-safe *for all types `T`*, not just for some specific type. We might express
this like so:

-```txt
+```text
fooTypeChecks :-
  // for all types T...
  forall<T> {
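A minimal Rust sketch of the `forall<T>` goal above (function names invented for illustration): `foo` must type-check for *every* `T`, so its call to `bar` is justified purely by the `T: Eq` bound in the environment, never by any particular type:

```rust
// Hypothetical sketch: well-formedness of `bar::<T>` inside `foo` follows
// only from the `T: Eq` bound in scope, mirroring the forall<T> goal.
fn bar<U: Eq>(a: U, b: U) -> bool {
    a == b
}

fn foo<T: Eq>(x: T, y: T) -> bool {
    // `bar::<T>` is well-formed because `T: Eq` holds in the environment.
    bar(x, y)
}

fn main() {
    assert!(foo(1u32, 1u32));
    assert!(!foo("a", "b"));
}
```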

View File

@@ -159,7 +159,7 @@ is to first "generalize" `&'a i32` into a type with a region variable:
`&'?b i32`, and then unify `?T` with that (`?T = &'?b i32`). We then
relate this new variable with the original bound:

-```txt
+```text
&'?b i32 <: &'a i32
```
@@ -178,14 +178,14 @@ eagerly unifying things, we simply collect constraints as we go, but
make (almost) no attempt to solve regions. These constraints have the
form of an "outlives" constraint:

-```txt
+```text
'a: 'b
```

Actually the code tends to view them as a subregion relation, but it's the same
idea:

-```txt
+```text
'b <= 'a
```
@@ -195,7 +195,7 @@ the `region_constraints` module for details.)
There is one case where we do some amount of eager unification. If you have an
equality constraint between two regions

-```txt
+```text
'a = 'b
```
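A small sketch of where such outlives constraints arise in practice (function names invented for illustration): returning the `'b` reference at type `&'a str` is accepted only because the declared bound `'b: 'a`, i.e. `'a <= 'b`, satisfies the constraint the compiler collects:

```rust
// Hypothetical sketch: the `else` branch generates an outlives
// constraint `'b: 'a`, discharged here by the explicit bound.
fn pick<'a, 'b: 'a>(longer: &'a str, shorter: &'b str) -> &'a str {
    if longer.len() > shorter.len() {
        longer
    } else {
        shorter // coerces from &'b str to &'a str since 'b: 'a
    }
}

fn main() {
    assert_eq!(pick("hello", "hi there friend"), "hi there friend");
}
```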

View File

@@ -62,7 +62,7 @@ enum OptionalMap<C> { Some(|C| -> C), None }
Here, we will generate the constraints:

-```txt
+```text
1. V(A) <= +
2. V(B) <= -
3. V(C) <= +
@@ -74,11 +74,11 @@ These indicate that (1) the variance of A must be at most covariant;
variance of C must be at most covariant *and* contravariant. All of these
results are based on a variance lattice defined as follows:

-```txt
+```text
   *      Top (bivariant)
-     +
   o      Bottom (invariant)
-```txt
+```text

Based on this lattice, the solution `V(A)=+`, `V(B)=-`, `V(C)=o` is the
optimal solution. Note that there is always a naive solution which
@@ -89,7 +89,7 @@ is that the variance of a use site may itself be a function of the
variance of other type parameters. In full generality, our constraints
take the form:

-```txt
+```text
V(X) <= Term
Term := + | - | * | o | V(X) | Term x Term
```
@@ -248,13 +248,13 @@ Maybe it's helpful to think of a dictionary-passing implementation of
type classes. In that case, `convertAll()` takes an implicit parameter
representing the impl. In short, we *have* an impl of type:

-```txt
+```text
V_O = ConvertTo<i32> for Object
```

and the function prototype expects an impl of type:

-```txt
+```text
V_S = ConvertTo<i32> for String
```
@@ -264,7 +264,7 @@ The answer will depend on the variance of the various parameters. In
this case, because the `Self` parameter is contravariant and `A` is
covariant, it means that:

-```txt
+```text
V_O <: V_S iff
    i32 <: i32
    String <: Object
@@ -279,7 +279,7 @@ expressions -- must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:

-```txt
+```text
<T as Trait> <: <U as Trait>
```
@@ -305,7 +305,7 @@ impl<T> Identity for T { type Out = T; ... }
Now if I have `<&'static () as Identity>::Out`, this can be
validly derived as `&'a ()` for any `'a`:

-```txt
+```text
<&'a () as Identity> <: <&'static () as Identity>
  if &'static () <: &'a ()   -- Identity is contravariant in Self
  if 'static : 'a            -- Subtyping rules for relations
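The subtyping step in that derivation can be observed in plain Rust (a sketch; the function and names are invented for illustration): a `&'static i32` is usable wherever a `&'a i32` is expected, because `&'long T <: &'short T` whenever `'long : 'short`.

```rust
// Hypothetical sketch: shared references are covariant in their
// lifetime, so the 'static reference coerces to the shorter 'a.
fn takes_short<'a>(x: &'a i32) -> i32 {
    *x
}

fn main() {
    // `&'static i32 <: &'a i32`, mirroring `'static : 'a` above.
    let r: &'static i32 = &7;
    assert_eq!(takes_short(r), 7);
}
```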