fix all the not-en-dashes

This commit is contained in:
mark 2018-07-08 19:49:34 -05:00 committed by Who? Me?!
parent b52a3f6659
commit 8a49eb7686
24 changed files with 100 additions and 100 deletions

View File

@@ -18,8 +18,8 @@ Once it gets more complete, the plan is probably to move it into the
If you'd like to help finish the guide, we'd love to have you! The
main tracking issue for the guide
[can be found here](https://github.com/rust-lang-nursery/rustc-guide/issues/6). From
there, you can find a list of all the planned chapters and subsections
-- if you think something is missing, please open an issue about it!
there, you can find a list of all the planned chapters and subsections.
If you think something is missing, please open an issue about it!
Otherwise, find a chapter that sounds interesting to you and then go
to its associated issue. There should be a list of things to do.

View File

@@ -15,7 +15,7 @@ exposes the underlying control flow in a very clear way.
A control-flow graph is structured as a set of **basic blocks**
connected by edges. The key idea of a basic block is that it is a set
of statements that execute "together" -- that is, whenever you branch
of statements that execute "together" that is, whenever you branch
to a basic block, you start at the first statement and then execute
all the remainder. Only at the end of the block is there the
possibility of branching to more than one place (in MIR, we call that
@@ -119,7 +119,7 @@ variables, since that's the thing we're most familiar with.
So there you have it: a variable "appears free" in some
expression/statement/whatever if it refers to something defined
outside of that expressions/statement/whatever. Equivalently, we can
then refer to the "free variables" of an expression -- which is just
then refer to the "free variables" of an expression which is just
the set of variables that "appear free".
So what does this have to do with regions? Well, we can apply the
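
To make the basic-block idea concrete, here is a small hypothetical
function (not from the guide itself) and how it breaks into blocks:

```rust
// One basic block per straight-line run of statements; branching
// happens only at each block's terminator.
fn classify(x: i32) -> &'static str {
    // bb0: evaluate `x > 0`, then terminate with a switch on the result
    if x > 0 {
        "positive" // bb1: one successor of bb0
    } else {
        "non-positive" // bb2: the other successor
    }
    // a final block joins the two branches and returns
}
```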

View File

@@ -16,7 +16,7 @@ expect, and more. If you are unfamiliar with the compiler testing framework,
see [this chapter](./tests/intro.html) for additional background.
The tests themselves are typically (but not always) organized into
"suites"--for example, `run-pass`, a folder representing tests that should
"suites" for example, `run-pass`, a folder representing tests that should
succeed, `run-fail`, a folder holding tests that should compile successfully,
but return a failure (non-zero status), `compile-fail`, a folder holding tests
that should fail to compile, and many more. The various suites are defined in

View File

@@ -44,8 +44,8 @@ current year if you like, but you don't have to.
Lines should be at most 100 characters. It's even better if you can
keep things to 80.
**Ignoring the line length limit.** Sometimes -- in particular for
tests -- it can be necessary to exempt yourself from this limit. In
**Ignoring the line length limit.** Sometimes – in particular for
tests – it can be necessary to exempt yourself from this limit. In
that case, you can add a comment towards the top of the file (after
the copyright notice) like so:
@@ -141,7 +141,7 @@ command like `git rebase -i rust-lang/master` (presuming you use the
name `rust-lang` for your remote).
**Individual commits do not have to build (but it's nice).** We do not
require that every intermediate commit successfully builds -- we only
require that every intermediate commit successfully builds – we only
expect to be able to bisect at a PR level. However, if you *can* make
individual commits build, that is always helpful.

View File

@@ -90,7 +90,7 @@ path that should not exist, but you will not be quite sure how it came
to be. **When the compiler is built with debug assertions,** it can
help you track that down. Simply set the `RUST_FORBID_DEP_GRAPH_EDGE`
environment variable to a filter. Every edge created in the dep-graph
will be tested against that filter -- if it matches, a `bug!` is
will be tested against that filter – if it matches, a `bug!` is
reported, so you can easily see the backtrace (`RUST_BACKTRACE=1`).
The syntax for these filters is the same as described in the previous

View File

@@ -1,6 +1,6 @@
# MIR borrow check
The borrow check is Rust's "secret sauce" -- it is tasked with
The borrow check is Rust's "secret sauce" it is tasked with
enforcing a number of properties:
- That all variables are initialized before they are used.
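
As a minimal sketch of that first property, this is the kind of program
the check rejects:

```rust
fn main() {
    let x: i32;
    println!("{}", x); // ERROR: `x` used before being initialized
}
```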

View File

@@ -3,7 +3,7 @@
MIR is Rust's _Mid-level Intermediate Representation_. It is
constructed from [HIR](./hir.html). MIR was introduced in
[RFC 1211]. It is a radically simplified form of Rust that is used for
certain flow-sensitive safety checks -- notably the borrow checker! --
certain flow-sensitive safety checks – notably the borrow checker! –
and also for optimization and code generation.
If you'd like a very high-level introduction to MIR, as well as some
@@ -122,7 +122,7 @@ StorageLive(_1);
```
This statement indicates that the variable `_1` is "live", meaning
that it may be used later -- this will persist until we encounter a
that it may be used later – this will persist until we encounter a
`StorageDead(_1)` statement, which indicates that the variable `_1` is
done being used. These "storage statements" are used by LLVM to
allocate stack space.
@@ -134,7 +134,7 @@ _1 = const <std::vec::Vec<T>>::new() -> bb2;
```
Terminators are different from statements because they can have more
than one successor -- that is, control may flow to different
than one successor – that is, control may flow to different
places. Function calls like the call to `Vec::new` are always
terminators because of the possibility of unwinding, although in the
case of `Vec::new` we are able to see that indeed unwinding is not
@@ -163,7 +163,7 @@ Assignments in general have the form:
<Place> = <Rvalue>
```
A place is an expression like `_3`, `_3.f` or `*_3` -- it denotes a
A place is an expression like `_3`, `_3.f` or `*_3` – it denotes a
location in memory. An **Rvalue** is an expression that creates a
value: in this case, the rvalue is a mutable borrow expression, which
looks like `&mut <Place>`. So we can kind of define a grammar for
@@ -180,7 +180,7 @@ rvalues like so:
| move Place
```
As you can see from this grammar, rvalues cannot be nested -- they can
As you can see from this grammar, rvalues cannot be nested – they can
only reference places and constants. Moreover, when you use a place,
we indicate whether we are **copying it** (which requires that the
place have a type `T` where `T: Copy`) or **moving it** (which works
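
Putting these pieces together, here is a hedged sketch (illustrative,
not actual rustc output) of the shapes described above:

```rust
// The Rust source:
fn make_vec() {
    let mut v = Vec::new(); // `v` becomes some local, say `_1`
    v.push(1);
}
// Roughly, its MIR begins with a block like:
//
//   bb0: {
//       StorageLive(_1);                               // statement
//       _1 = const <std::vec::Vec<i32>>::new() -> bb2; // terminator (call)
//   }
//
// `_1` is a place, the call's result is the rvalue assigned into it,
// and a later `StorageDead(_1)` ends the variable's storage.
```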

View File

@@ -100,8 +100,8 @@ that appeared within the `main` function.)
### Implementing and registering a pass
A `MirPass` is some bit of code that processes the MIR, typically --
but not always -- transforming it along the way somehow. For example,
A `MirPass` is some bit of code that processes the MIR, typically –
but not always – transforming it along the way somehow. For example,
it might perform an optimization. The `MirPass` trait itself is found
in [the `rustc_mir::transform` module][mirtransform], and it
basically consists of one method, `run_pass`, that simply gets an
@@ -110,7 +110,7 @@ came from). The MIR is therefore modified in place (which helps to
keep things efficient).
A good example of a basic MIR pass is [`NoLandingPads`], which walks
the MIR and removes all edges that are due to unwinding -- this is
the MIR and removes all edges that are due to unwinding – this is
used when configured with `panic=abort`, which never unwinds. As you
can see from its source, a MIR pass is defined by first defining a
dummy type, a struct with no fields, something like:
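
The hunk cuts off just before the code itself; a hedged sketch of the
shape being described follows (the trait's exact signature has changed
over time, so treat the lifetimes and parameter types as approximate):

```rust
// The dummy, fieldless struct the pass hangs off of:
pub struct MyPass;

impl MirPass for MyPass {
    // `run_pass` gets the MIR mutably and edits it in place.
    fn run_pass<'a, 'tcx>(
        &self,
        tcx: TyCtxt<'a, 'tcx, 'tcx>,
        source: MirSource,
        mir: &mut Mir<'tcx>,
    ) {
        // e.g. walk the basic blocks and rewrite statements/terminators
    }
}
```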

View File

@@ -12,13 +12,13 @@ The MIR-based region analysis consists of two major functions:
- `replace_regions_in_mir`, invoked first, has two jobs:
- First, it finds the set of regions that appear within the
signature of the function (e.g., `'a` in `fn foo<'a>(&'a u32) {
... }`). These are called the "universal" or "free" regions -- in
... }`). These are called the "universal" or "free" regions – in
particular, they are the regions that [appear free][fvb] in the
function body.
- Second, it replaces all the regions from the function body with
fresh inference variables. This is because (presently) those
regions are the results of lexical region inference and hence are
not of much interest. The intention is that -- eventually -- they
not of much interest. The intention is that – eventually – they
will be "erased regions" (i.e., no information at all), since we
won't be doing lexical region inference at all.
- `compute_regions`, invoked second: this is given as argument the
@@ -40,11 +40,11 @@ The MIR-based region analysis consists of two major functions:
## Universal regions
*to be written* -- explain the `UniversalRegions` type
*to be written* – explain the `UniversalRegions` type
## Region variables and constraints
*to be written* -- describe the `RegionInferenceContext` and
*to be written* – describe the `RegionInferenceContext` and
the role of `liveness_constraints` vs other `constraints`, plus
## Closures
@@ -79,13 +79,13 @@ The kinds of region elements are as follows:
- Similarly, there is an element denoted `end('static)` corresponding
to the remainder of program execution after this function returns.
- There is an element `!1` for each skolemized region `!1`. This
corresponds (intuitively) to some unknown set of other elements --
corresponds (intuitively) to some unknown set of other elements –
for details on skolemization, see the section
[skolemization and universes](#skol).
## Causal tracking
*to be written* -- describe how we can extend the values of a variable
*to be written* – describe how we can extend the values of a variable
with causal tracking etc
<a name="skol"></a>
@@ -133,7 +133,7 @@ bound in the supertype and **skolemizing** them: this means that we
replace them with
[universally quantified](appendix/background.html#quantified)
representatives, written like `!1`. We call these regions "skolemized
regions" -- they represent, basically, "some unknown region".
regions" they represent, basically, "some unknown region".
Once we've done that replacement, we have the following relation:
@@ -156,9 +156,9 @@ we swap the left and right here):
```
According to the basic subtyping rules for a reference, this will be
true if `'!1: 'static`. That is -- if "some unknown region `!1`" lives
outlives `'static`. Now, this *might* be true -- after all, `'!1`
could be `'static` -- but we don't *know* that it's true. So this
true if `'!1: 'static`. That is – if "some unknown region `!1`" lives
outlives `'static`. Now, this *might* be true – after all, `'!1`
could be `'static` – but we don't *know* that it's true. So this
should yield up an error (eventually).
### What is a universe
@@ -238,8 +238,8 @@ not U1.
**Giving existential variables a universe.** Now that we have this
notion of universes, we can use it to extend our type-checker and
things to prevent illegal names from leaking out. The idea is that we
give each inference (existential) variable -- whether it be a type or
a lifetime -- a universe. That variable's value can then only
give each inference (existential) variable – whether it be a type or
a lifetime – a universe. That variable's value can then only
reference names visible from that universe. So for example if a
lifetime variable is created in U0, then it cannot be assigned a value
of `!1` or `!2`, because those names are not visible from the universe
@@ -247,7 +247,7 @@ U0.
**Representing universes with just a counter.** You might be surprised
to see that the compiler doesn't keep track of a full tree of
universes. Instead, it just keeps a counter -- and, to determine if
universes. Instead, it just keeps a counter – and, to determine if
one universe can see another one, it just checks if the index is
greater. For example, U2 can see U0 because 2 >= 0. But U0 cannot see
U2, because 0 >= 2 is false.
@@ -323,12 +323,12 @@ Now there are two ways that could happen. First, if `U(V1)` can see
the universe `x` (i.e., `x <= U(V1)`), then we can just add `skol(x)`
to `value(V1)` and be done. But if not, then we have to approximate:
we may not know what set of elements `skol(x)` represents, but we
should be able to compute some sort of **upper bound** B for it --
should be able to compute some sort of **upper bound** B for it –
some region B that outlives `skol(x)`. For now, we'll just use
`'static` for that (since it outlives everything) -- in the future, we
`'static` for that (since it outlives everything) – in the future, we
can sometimes be smarter here (and in fact we have code for doing this
already in other contexts). Moreover, since `'static` is in the root
universe U0, we know that all variables can see it -- so basically if
universe U0, we know that all variables can see it – so basically if
we find that `value(V2)` contains `skol(x)` for some universe `x`
that `V1` can't see, then we force `V1` to `'static`.
@@ -398,8 +398,8 @@ outlives relationships are satisfied. Then we would go to the "check
universal regions" portion of the code, which would test that no
universal region grew too large.
In this case, `V1` *did* grow too large -- it is not known to outlive
`end('static)`, nor any of the CFG -- so we would report an error.
In this case, `V1` *did* grow too large – it is not known to outlive
`end('static)`, nor any of the CFG – so we would report an error.
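
A hypothetical program that runs into exactly this failure:

```rust
fn wants_static(_: &'static u32) {}

fn main() {
    // Checking this coercion skolemizes `'a` to a placeholder `!1` and
    // then must prove `'!1: 'static`, which it cannot, so the compiler
    // reports an error here.
    let _f: for<'a> fn(&'a u32) = wants_static; // ERROR
}
```
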
## Another example

View File

@@ -2,7 +2,7 @@
The MIR visitor is a convenient tool for traversing the MIR and either
looking for things or making changes to it. The visitor traits are
defined in [the `rustc::mir::visit` module][m-v] -- there are two of
defined in [the `rustc::mir::visit` module][m-v] – there are two of
them, generated via a single macro: `Visitor` (which operates on a
`&Mir` and gives back shared references) and `MutVisitor` (which
operates on a `&mut Mir` and gives back mutable references).

View File

@@ -5,7 +5,7 @@ Rust compiler is currently transitioning from a traditional "pass-based"
setup to a "demand-driven" system. **The Compiler Query System is the
key to our new demand-driven organization.** The idea is pretty
simple. You have various queries that compute things about the input
-- for example, there is a query called `type_of(def_id)` that, given
– for example, there is a query called `type_of(def_id)` that, given
the def-id of some item, will compute the type of that item and return
it to you.
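
A hedged sketch of what that looks like at a call site inside the
compiler (queries are methods on the type context; the `TyCtxt`
lifetimes here are approximate):

```rust
fn describe<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) {
    // The first call computes and caches the result; later calls reuse it.
    let ty = tcx.type_of(def_id);
    println!("the type of {:?} is {:?}", def_id, ty);
}
```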

View File

@@ -49,8 +49,8 @@ considered an ideal setup.
[`src/test/run-pass`]: https://github.com/rust-lang/rust/tree/master/src/test/run-pass/
For regression tests -- basically, some random snippet of code that
came in from the internet -- we often just name the test after the
For regression tests – basically, some random snippet of code that
came in from the internet – we often just name the test after the
issue. For example, `src/test/run-pass/issue-12345.rs`. If possible,
though, it is better if you can put the test into a directory that
helps identify what piece of code is being tested here (e.g.,
@@ -267,9 +267,9 @@ can also make UI tests where compilation is expected to succeed, and
you can even run the resulting program. Just add one of the following
[header commands](#header_commands):
- `// compile-pass` -- compilation should succeed but do
- `// compile-pass` – compilation should succeed but do
not run the resulting binary
- `// run-pass` -- compilation should succeed and we should run the
- `// run-pass` – compilation should succeed and we should run the
resulting binary
<a name="bless"></a>

View File

@@ -15,7 +15,7 @@ on [how to run tests](./tests/running.html#ui) as well as
The compiletest tests are located in the tree in the [`src/test`]
directory. Immediately within you will see a series of subdirectories
(e.g. `ui`, `run-make`, and so forth). Each of those directories is
called a **test suite** -- they house a group of tests that are run in
called a **test suite** – they house a group of tests that are run in
a distinct mode.
[`src/test`]: https://github.com/rust-lang/rust/tree/master/src/test
@@ -24,31 +24,31 @@ Here is a brief summary of the test suites as of this writing and what
they mean. In some cases, the test suites are linked to parts of the manual
that give more details.
- [`ui`](./tests/adding.html#ui) -- tests that check the exact
- [`ui`](./tests/adding.html#ui) – tests that check the exact
stdout/stderr from compilation and/or running the test
- `run-pass` -- tests that are expected to compile and execute
- `run-pass` – tests that are expected to compile and execute
successfully (no panics)
- `run-pass-valgrind` -- tests that ought to run with valgrind
- `run-fail` -- tests that are expected to compile but then panic
- `run-pass-valgrind` – tests that ought to run with valgrind
- `run-fail` – tests that are expected to compile but then panic
during execution
- `compile-fail` -- tests that are expected to fail compilation.
- `parse-fail` -- tests that are expected to fail to parse
- `pretty` -- tests targeting the Rust "pretty printer", which
- `compile-fail` – tests that are expected to fail compilation.
- `parse-fail` – tests that are expected to fail to parse
- `pretty` – tests targeting the Rust "pretty printer", which
generates valid Rust code from the AST
- `debuginfo` -- tests that run in gdb or lldb and query the debug info
- `codegen` -- tests that compile and then test the generated LLVM
- `debuginfo` – tests that run in gdb or lldb and query the debug info
- `codegen` – tests that compile and then test the generated LLVM
code to make sure that the optimizations we want are taking effect.
- `mir-opt` -- tests that check parts of the generated MIR to make
- `mir-opt` – tests that check parts of the generated MIR to make
sure we are building things correctly or doing the optimizations we
expect.
- `incremental` -- tests for incremental compilation, checking that
- `incremental` – tests for incremental compilation, checking that
when certain modifications are performed, we are able to reuse the
results from previous compilations.
- `run-make` -- tests that basically just execute a `Makefile`; the
- `run-make` – tests that basically just execute a `Makefile`; the
ultimate in flexibility but quite annoying to write.
- `rustdoc` -- tests for rustdoc, making sure that the generated files
- `rustdoc` – tests for rustdoc, making sure that the generated files
contain the expected documentation.
- `*-fulldeps` -- same as above, but indicates that the test depends
- `*-fulldeps` – same as above, but indicates that the test depends
on things other than `libstd` (and hence those things must be built)
## Other Tests
@@ -56,44 +56,44 @@ that give more details.
The Rust build system handles running tests for various other things,
including:
- **Tidy** -- This is a custom tool used for validating source code
- **Tidy** – This is a custom tool used for validating source code
style and formatting conventions, such as rejecting long lines.
There is more information in the
[section on coding conventions](./conventions.html#formatting).
Example: `./x.py test src/tools/tidy`
- **Unittests** -- The Rust standard library and many of the Rust packages
- **Unittests** – The Rust standard library and many of the Rust packages
include typical Rust `#[test]` unittests. Under the hood, `x.py` will run
`cargo test` on each package to run all the tests.
Example: `./x.py test src/libstd`
- **Doctests** -- Example code embedded within Rust documentation is executed
- **Doctests** – Example code embedded within Rust documentation is executed
via `rustdoc --test`. Examples:
`./x.py test src/doc` -- Runs `rustdoc --test` for all documentation in
`./x.py test src/doc` – Runs `rustdoc --test` for all documentation in
`src/doc`.
`./x.py test --doc src/libstd` -- Runs `rustdoc --test` on the standard
`./x.py test --doc src/libstd` – Runs `rustdoc --test` on the standard
library.
- **Linkchecker** -- A small tool for verifying `href` links within
- **Linkchecker** – A small tool for verifying `href` links within
documentation.
Example: `./x.py test src/tools/linkchecker`
- **Distcheck** -- This verifies that the source distribution tarball created
- **Distcheck** – This verifies that the source distribution tarball created
by the build system will unpack, build, and run all tests.
Example: `./x.py test distcheck`
- **Tool tests** -- Packages that are included with Rust have all of their
- **Tool tests** – Packages that are included with Rust have all of their
tests run as well (typically by running `cargo test` within their
directory). This includes things such as cargo, clippy, rustfmt, rls, miri,
bootstrap (testing the Rust build system itself), etc.
- **Cargotest** -- This is a small tool which runs `cargo test` on a few
- **Cargotest** – This is a small tool which runs `cargo test` on a few
significant projects (such as `servo`, `ripgrep`, `tokei`, etc.) just to
ensure there aren't any significant regressions.

View File

@@ -1,7 +1,7 @@
# Running tests
You can run the tests using `x.py`. The most basic command -- which
you will almost never want to use! -- is as follows:
You can run the tests using `x.py`. The most basic command – which
you will almost never want to use! – is as follows:
```bash
> ./x.py test

View File

@@ -15,7 +15,7 @@ When a trait defines an associated type (e.g.,
[the `Item` type in the `IntoIterator` trait][intoiter-item]), that
type can be referenced by the user using an **associated type
projection** like `<Option<u32> as IntoIterator>::Item`. (Often,
though, people will use the shorthand syntax `T::Item` -- presently,
though, people will use the shorthand syntax `T::Item` – presently,
that syntax is expanded during
["type collection"](./type-checking.html) into the explicit form,
though that is something we may want to change in the future.)
@@ -24,8 +24,8 @@ though that is something we may want to change in the future.)
<a name="normalize"></a>
In some cases, associated type projections can be **normalized** --
that is, simplified -- based on the types given in an impl. So, to
In some cases, associated type projections can be **normalized** –
that is, simplified – based on the types given in an impl. So, to
continue with our example, the impl of `IntoIterator` for `Option<T>`
declares (among other things) that `Item = T`:
@@ -39,9 +39,9 @@ impl<T> IntoIterator for Option<T> {
This means we can normalize the projection `<Option<u32> as
IntoIterator>::Item` to just `u32`.
In this case, the projection was a "monomorphic" one -- that is, it
In this case, the projection was a "monomorphic" one that is, it
did not have any type parameters. Monomorphic projections are special
because they can **always** be fully normalized -- but often we can
because they can **always** be fully normalized – but often we can
normalize other associated type projections as well. For example,
`<Option<?T> as IntoIterator>::Item` (where `?T` is an inference
variable) can be normalized to just `?T`.
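
A small runnable sketch of that normalization, spelled with the fully
explicit projection:

```rust
fn main() {
    // `<Option<u32> as IntoIterator>::Item` normalizes to `u32`, so this
    // annotation is just a verbose way of writing `u32`.
    let item: <Option<u32> as IntoIterator>::Item = 42;
    assert_eq!(item, 42_u32);
}
```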

View File

@@ -1,8 +1,8 @@
# Canonical queries
The "start" of the trait system is the **canonical query** (these are
both queries in the more general sense of the word -- something you
would like to know the answer to -- and in the
both queries in the more general sense of the word – something you
would like to know the answer to – and in the
[rustc-specific sense](./query.html)). The idea is that the type
checker or other parts of the system, may in the course of doing their
thing want to know whether some trait is implemented for some type
@@ -35,7 +35,7 @@ solver is finding **all possible** instantiations of your query that
are true. In this case, if we instantiate `?U = [i32]`, then the query
is true (note that a traditional Prolog interface does not, directly,
tell us a value for `?U`, but we can infer one by unifying the
response with our original query -- Rust's solver gives back a
response with our original query – Rust's solver gives back a
substitution instead). If we were to hit `y`, the solver might then
give us another possible answer:
@@ -135,7 +135,7 @@ we did find. It consists of four parts:
[section on handling regions in traits](./traits/regions.html) for
more details.
- **Value:** The query result also comes with a value of type `T`. For
some specialized queries -- like normalizing associated types --
some specialized queries – like normalizing associated types –
this is used to carry back an extra result, but it's often just
`()`.
@@ -187,8 +187,8 @@ for example:
Therefore, the result we get back would be as follows (I'm going to
ignore region constraints and the "value"):
- Certainty: `Ambiguous` -- we're not sure yet if this holds
- Var values: `[?T = ?T, ?U = ?U]` -- we learned nothing about the values of
- Certainty: `Ambiguous` – we're not sure yet if this holds
- Var values: `[?T = ?T, ?U = ?U]` – we learned nothing about the values of
the variables
In short, the query result says that it is too soon to say much about
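
A hedged sketch (with `Debug` standing in for the traits used above) of
code whose type-check initially produces exactly such an ambiguous
result:

```rust
use std::fmt::Debug;

fn print_it<T: Debug>(x: &T) {
    println!("{:?}", x);
}

fn main() {
    let v = Vec::new();  // the element type is an unresolved `?T` here
    print_it(&v);        // obligation `Vec<?T>: Debug` is ambiguous for now
    let _: Vec<i32> = v; // this later constraint resolves `?T = i32`
}
```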

View File

@@ -24,7 +24,7 @@ form of `X` would be `(?0, ?1)`, where `?0` and `?1` represent these
**canonical placeholders**. Note that the type `Y = (?U, ?T)` also
canonicalizes to `(?0, ?1)`. But the type `Z = (?T, ?T)` would
canonicalize to `(?0, ?0)` (as would `(?U, ?U)`). In other words, the
exact identity of the inference variables is not important -- unless
exact identity of the inference variables is not important – unless
they are repeated.
We use this to improve caching as well as to detect cycles and other
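
The numbering scheme is simple enough to model directly; here is a
minimal stand-alone sketch of it (illustrative, not compiler code):

```rust
use std::collections::HashMap;

// Replace each distinct variable with the index of its first occurrence:
// ["?T", "?U"] and ["?U", "?T"] both canonicalize to [0, 1], while
// ["?T", "?T"] canonicalizes to [0, 0], keeping the repetition visible.
fn canonicalize(vars: &[&str]) -> Vec<usize> {
    let mut seen = HashMap::new();
    vars.iter()
        .map(|v| {
            let next = seen.len();
            *seen.entry(*v).or_insert(next)
        })
        .collect()
}

fn main() {
    assert_eq!(canonicalize(&["?T", "?U"]), vec![0, 1]);
    assert_eq!(canonicalize(&["?U", "?T"]), vec![0, 1]);
    assert_eq!(canonicalize(&["?T", "?T"]), vec![0, 0]);
}
```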

View File

@@ -197,7 +197,7 @@ Implemented(Foo: Send) :-
As you can probably imagine, proving that `Option<Box<Foo>>: Send` is
going to wind up circularly requiring us to prove that `Foo: Send`
again. So this would be an example where we wind up in a cycle -- but
again. So this would be an example where we wind up in a cycle – but
that's ok, we *do* consider `Foo: Send` to hold, even though it
references itself.
@@ -219,4 +219,4 @@ as described in the section on [implied bounds].
Some topics yet to be written:
- Elaborate on the proof procedure
- SLG solving -- introduce negative reasoning
- SLG solving – introduce negative reasoning
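
The cycle is easy to reproduce; a runnable sketch of the `Foo` in
question:

```rust
// Checking `Foo: Send` requires `Option<Box<Foo>>: Send`, which circles
// back to `Foo: Send`; the cycle is accepted coinductively, so this
// compiles.
struct Foo {
    next: Option<Box<Foo>>,
}

fn assert_send<T: Send>() {}

fn main() {
    assert_send::<Foo>();
    let _list = Foo { next: Some(Box::new(Foo { next: None })) };
}
```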

View File

@@ -28,7 +28,7 @@ Trait solving is based around a few key ideas:
whether types are equal.
- [Region constraints](./traits/regions.html), which are accumulated
during trait solving but mostly ignored. This means that trait
solving effectively ignores the precise regions involved, always --
solving effectively ignores the precise regions involved, always –
but we still remember the constraints on them so that those
constraints can be checked by the type checker.

View File

@@ -8,8 +8,8 @@ created in the [`rustc_traits::lowering`][lowering] module.
## The `program_clauses_for` query
The main entry point is the `program_clauses_for` [query], which --
given a def-id -- produces a set of Chalk program clauses. These
The main entry point is the `program_clauses_for` [query], which –
given a def-id – produces a set of Chalk program clauses. These
queries are tested using a
[dedicated unit-testing mechanism, described below](#unit-tests). The
query is invoked on a `DefId` that identifies something like a trait,

View File

@@ -52,11 +52,11 @@ from the Rust syntax into goals.
In addition, in the rules below, we sometimes do some transformations
on the lowered where clauses, as defined here:
- `FromEnv(WC)` -- this indicates that:
- `FromEnv(WC)` – this indicates that:
- `Implemented(TraitRef)` becomes `FromEnv(TraitRef)`
- `ProjectionEq(Projection = Ty)` becomes `FromEnv(Projection = Ty)`
- other where-clauses are left intact
- `WellFormed(WC)` -- this indicates that:
- `WellFormed(WC)` – this indicates that:
- `Implemented(TraitRef)` becomes `WellFormed(TraitRef)`
- `ProjectionEq(Projection = Ty)` becomes `WellFormed(Projection = Ty)`
@@ -121,8 +121,8 @@ trait Eq: PartialEq { ... }
In this case, the `PartialEq` supertrait is equivalent to a `where
Self: PartialEq` where clause, in our simplified model. The program
clause above therefore states that if we can prove `FromEnv(T: Eq)` --
e.g., if we are in some function with `T: Eq` in its where clauses --
clause above therefore states that if we can prove `FromEnv(T: Eq)` –
e.g., if we are in some function with `T: Eq` in its where clauses –
then we also know that `FromEnv(T: PartialEq)`. Thus the set of things
that follow from the environment are not only the **direct where
clauses** but also things that follow from them.
@@ -169,7 +169,7 @@ have to prove each one of those:
- `WellFormed(T: Foo)` -- cycle, true coinductively
This `WellFormed` predicate is only used when proving that impls are
well-formed -- basically, for each impl of some trait ref `TraitRef`,
well-formed – basically, for each impl of some trait ref `TraitRef`,
we must show that `WellFormed(TraitRef)`. This in turn justifies the
implied bounds rules that allow us to extend the set of `FromEnv`
items.
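
A runnable sketch of that effect, using the real `Eq`/`PartialEq` pair:

```rust
// Inside this function, `T: Eq` is in the environment; the supertrait
// rule also puts `T: PartialEq` in scope, so `==` works even though we
// never wrote `T: PartialEq` explicitly.
fn all_equal<T: Eq>(a: &T, b: &T, c: &T) -> bool {
    a == b && b == c
}

fn main() {
    assert!(all_equal(&1, &1, &1));
}
```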

View File

@@ -15,8 +15,8 @@ One of the first observations is that the Rust trait system is
basically a kind of logic. As such, we can map our struct, trait, and
impl declarations into logical inference rules. For the most part,
these are basically Horn clauses, though we'll see that to capture the
full richness of Rust -- and in particular to support generic
programming -- we have to go a bit further than standard Horn clauses.
full richness of Rust – and in particular to support generic
programming – we have to go a bit further than standard Horn clauses.
To see how this mapping works, let's start with an example. Imagine
we declare a trait and a few impls, like so:
@@ -38,8 +38,8 @@ Clone(Vec<?T>) :- Clone(?T).
// Or, put another way, B implies A.
```
In Prolog terms, we might say that `Clone(Foo)` -- where `Foo` is some
Rust type -- is a *predicate* that represents the idea that the type
In Prolog terms, we might say that `Clone(Foo)` – where `Foo` is some
Rust type – is a *predicate* that represents the idea that the type
`Foo` implements `Clone`. These rules are **program clauses**; they
state the conditions under which that predicate can be proven (i.e.,
considered true). So the first rule just says "Clone is implemented
@@ -162,7 +162,7 @@ notation but a bit Rustified. Anyway, the problem is that standard
Horn clauses don't allow universal quantification (`forall`) or
implication (`if`) in goals (though many Prolog engines do support
them, as an extension). For this reason, we need to accept something
called "first-order hereditary harrop" (FOHH) clauses -- this long
called "first-order hereditary harrop" (FOHH) clauses this long
name basically means "standard Horn clauses with `forall` and `if` in
the body". But it's nice to know the proper name, because there is a
lot of work describing how to efficiently handle FOHH clauses; see for
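
To keep the correspondence concrete, here is a runnable sketch of the
declarations those two clauses could have been lowered from (a local
`MyClone` is used so the example stands alone):

```rust
trait MyClone {}

// Clause: Clone(usize).
impl MyClone for usize {}

// Clause: Clone(Vec<?T>) :- Clone(?T).
impl<T: MyClone> MyClone for Vec<T> {}

// Proving `Vec<Vec<usize>>: MyClone` applies the second clause twice and
// then the first, just like the Prolog-style derivation.
fn assert_clone<T: MyClone>() {}

fn main() {
    assert_clone::<Vec<Vec<usize>>>();
}
```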

View File

@@ -12,7 +12,7 @@ draws heavily on the [type inference] and [trait solving].)
Type "collection" is the process of converting the types found in the HIR
(`hir::Ty`), which represent the syntactic things that the user wrote, into the
**internal representation** used by the compiler (`Ty<'tcx>`) -- we also do
**internal representation** used by the compiler (`Ty<'tcx>`) – we also do
similar conversions for where-clauses and other bits of the function signature.
To try and get a sense for the difference, consider this function:
@@ -30,7 +30,7 @@ they encode the path somewhat differently. But once they are "collected" into
Collection is defined as a bundle of [queries] for computing information about
the various functions, traits, and other items in the crate being compiled.
Note that each of these queries is concerned with *interprocedural* things --
Note that each of these queries is concerned with *interprocedural* things –
for example, for a function definition, collection will figure out the type and
signature of the function, but it will not visit the *body* of the function in
any way, nor examine type annotations on local variables (that's the job of
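
The diff cuts off the guide's example function, so here is a
hypothetical one in the same spirit: two syntactically different
annotations that collect to the same internal type:

```rust
// `Ty` and `u32` are distinct `hir::Ty` nodes (their paths differ), but
// collection maps both to the same interned `Ty<'tcx>` for `u32`.
type Ty = u32;

fn foo(x: Ty) -> u32 {
    x
}

fn main() {
    assert_eq!(foo(7), 7);
}
```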

View File

@@ -167,7 +167,7 @@ one could interpret variance and trait matching.
Just as with structs and enums, we can decide the subtyping
relationship between two object types `&Trait<A>` and `&Trait<B>`
based on the relationship of `A` and `B`. Note that for object
types we ignore the `Self` type parameter -- it is unknown, and
types we ignore the `Self` type parameter – it is unknown, and
the nature of dynamic dispatch ensures that we will always call a
function that expects the appropriate `Self` type. However, we
must be careful with the other type parameters, or else we could
@@ -274,8 +274,8 @@ These conditions are satisfied and so we are happy.
### Variance and associated types
Traits with associated types -- or at minimum projection
expressions -- must be invariant with respect to all of their
Traits with associated types – or at minimum projection
expressions – must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:
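
A hedged sketch (an assumed example, not the guide's) of the caution
above about a trait object's other type parameters:

```rust
trait Emit<A> {}

// `dyn Emit<_>` is invariant in its parameter, so even shrinking the
// lifetime inside the object type is rejected:
fn shrink<'a>(x: Box<dyn Emit<&'static str>>) -> Box<dyn Emit<&'a str>> {
    x // ERROR: mismatched types
}

fn main() {}
```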