Merge pull request #200 from phansch/fix_typos

Fix typos
This commit is contained in:
Niko Matsakis 2018-09-12 14:41:03 -04:00 committed by GitHub
commit 8cc7d2b923
11 changed files with 21 additions and 21 deletions


@ -14,7 +14,7 @@ codegen unit | when we produce LLVM IR, we group the Rust code into
completeness | completeness is a technical term in type theory. Completeness means that every type-safe program also type-checks. Having both soundness and completeness is very hard, and usually soundness is more important. (see "soundness").
control-flow graph | a representation of the control-flow of a program; see [the background chapter for more](./appendix/background.html#cfg)
CTFE | Compile-Time Function Evaluation. This is the ability of the compiler to evaluate `const fn`s at compile time. This is part of the compiler's constant evaluation system. ([see more](./const-eval.html))
cx | we tend to use "cx" as an abbreviation for context. See also `tcx`, `infcx`, etc.
DAG | a directed acyclic graph is used during compilation to keep track of dependencies between queries. ([see more](incremental-compilation.html))
data-flow analysis | a static analysis that figures out what properties are true at each point in the control-flow of a program; see [the background chapter for more](./appendix/background.html#dataflow)
DefId | an index identifying a definition (see `librustc/hir/def_id.rs`). Uniquely identifies a `DefPath`.
@ -34,7 +34,7 @@ inference variable | when doing type or region inference, an "inference va
infcx | the inference context (see `librustc/infer`)
IR | Intermediate Representation. A general term in compilers. During compilation, the code is transformed from raw source (ASCII text) to various IRs. In Rust, these are primarily HIR, MIR, and LLVM IR. Each IR is well-suited for some set of computations. For example, MIR is well-suited for the borrow checker, and LLVM IR is well-suited for codegen because LLVM accepts it.
local crate | the crate currently being compiled.
LTO | Link-Time Optimizations. A set of optimizations offered by LLVM that occur just before the final binary is linked. These include optimizations like removing functions that are never used in the final program, for example. _ThinLTO_ is a variant of LTO that aims to be a bit more scalable and efficient, but possibly sacrifices some optimizations. You may also read issues in the Rust repo about "FatLTO", which is the loving nickname given to non-Thin LTO. LLVM documentation: [here][lto] and [here][thinlto]
[LLVM] | (actually not an acronym :P) an open-source compiler backend. It accepts LLVM IR and outputs native binaries. Various languages (e.g. Rust) can then implement a compiler front-end that outputs LLVM IR and use LLVM to compile to all the platforms LLVM supports.
MIR | the Mid-level IR that is created after type-checking for use by borrowck and codegen ([see more](./mir/index.html))
miri | an interpreter for MIR used for constant evaluation ([see more](./miri.html))


@ -23,7 +23,7 @@ The MIR-based region analysis consists of two major functions:
won't be doing lexical region inference at all.
- `compute_regions`, invoked second: this is given as argument the
results of move analysis. It has the job of computing values for all
the inference variables that `replace_regions_in_mir` introduced.
- To do that, it first runs the [MIR type checker](#mirtypeck). This
is basically a normal type-checker but specialized to MIR, which
is much simpler than full Rust of course. Running the MIR type
@ -531,7 +531,7 @@ then we have this constraint `V2: V3`, so we wind up having to enlarge
```
V2 in U2 = {skol(1), skol(2)}
```
Now constraint propagation is done, but when we check the outlives
relationships, we find that `V2` includes this new element `skol(1)`,
so we report an error.


@ -10,7 +10,7 @@ generates an executable binary. rustc uses LLVM for code generation.
## What is LLVM?
All of the preceding chapters of this guide have one thing in common: we never
generated any executable machine code at all! With this chapter, all of that
changes.
@ -29,14 +29,14 @@ many compiler projects, including the `clang` C compiler and our beloved
LLVM's "format `X`" is called LLVM IR. It is basically assembly code with
additional low-level types and annotations added. These annotations are helpful
for doing optimizations on the LLVM IR and outputted machine code. The end
result of all this is (at long last) something executable (e.g. an ELF object
or wasm).
There are a few benefits to using LLVM:
- We don't have to write a whole compiler backend. This reduces implementation
and maintenance burden.
- We benefit from the large suite of advanced optimizations that the LLVM
project has been collecting.
- We automatically can compile Rust to any of the platforms for which LLVM has


@ -115,7 +115,7 @@ which make it simple to parse common patterns like simple presence or not
attribute is defined (`has_cfg_prefix()`) and many more. The low-level parsers
are found near the end of the `impl Config` block; be sure to look through them
and their associated parsers immediately above to see how they are used to
avoid writing additional parsing code unnecessarily.
As a concrete example, here is the implementation for the
`parse_failure_status()` parser, in


@ -36,7 +36,7 @@ sanity checks in `src/librustc/hir/map/hir_id_validator.rs`:
for you so you also get the `HirId`.
If you are creating new `DefId`s, since each `DefId` needs to have a
corresponding `NodeId`, it is advisable to add these `NodeId`s to the
`AST` so you don't have to generate new ones during lowering. This has
the advantage of creating a way to find the `DefId` of something via its
`NodeId`. If lowering needs this `DefId` in multiple places, you can't


@ -90,15 +90,15 @@ tokens containing the inside of the example invocation `print foo`, while `ms`
might be the sequence of token (trees) `print $mvar:ident`.
The output of the parser is a `NamedParseResult`, which indicates which of
three cases has occurred:
- Success: `tts` matches the given matcher `ms`, and we have produced a binding
from metavariables to the corresponding token trees.
- Failure: `tts` does not match `ms`. This results in an error message such as
"No rule expected token _blah_".
- Error: some fatal error has occurred _in the parser_. For example, this
  happens if more than one pattern matches, since that indicates
  the macro is ambiguous.
The full interface is defined [here][code_parse_int].
@ -112,7 +112,7 @@ the macro parser. This is extremely non-intuitive and self-referential. The code
to parse macro _definitions_ is in
[`src/libsyntax/ext/tt/macro_rules.rs`][code_mr]. It defines the pattern for
matching for a macro definition as `$( $lhs:tt => $rhs:tt );+`. In other words,
a `macro_rules` definition should have in its body at least one occurrence of a
token tree followed by `=>` followed by another token tree. When the compiler
comes to a `macro_rules` definition, it uses this pattern to match the two token
trees per rule in the definition of the macro _using the macro parser itself_.


@ -30,7 +30,7 @@ does is call the `main()` that's in this crate's `lib.rs`, though.)
## Cheat sheet
* Use `./x.py build --stage 1 src/libstd src/tools/rustdoc` to make a usable
rustdoc you can run on other projects.
* Add `src/libtest` to be able to use `rustdoc --test`.
* If you've used `rustup toolchain link local /path/to/build/$TARGET/stage1`


@ -76,9 +76,9 @@ enable it in the `config.toml`, too:
```
incremental = true
```
Note that incremental compilation will use more disk space than usual.
If disk space is a concern for you, you might want to check the size
of the `build` directory from time to time.
## Running tests manually


@ -56,7 +56,7 @@ for<T,L,T> { ?0: Foo<'?1, ?2> }
This `for<>` gives some information about each of the canonical
variables within. In this case, each `T` indicates a type variable,
so `?0` and `?2` are types; the `L` indicates a lifetime variable, so
`?1` is a lifetime. The `canonicalize` method *also* gives back a
`CanonicalVarValues` array OV with the "original values" for each
canonicalized variable:


@ -10,7 +10,7 @@ reference the [domain goals][dg] defined in an earlier section.
## Notation
The nonterminal `Pi` is used to mean some generic *parameter*, either a
named lifetime like `'a` or a type parameter like `A`.
The nonterminal `Ai` is used to mean some generic *argument*, which
might be a lifetime like `'a` or a type like `Vec<A>`.


@ -117,7 +117,7 @@ know whether an impl/where-clause applies or not; this occurs when
the obligation contains unbound inference variables.
The subroutines that decide whether a particular impl/where-clause/etc
applies to a particular obligation are collectively referred to as the
process of _matching_. At the moment, this amounts to
unifying the `Self` types, but in the future we may also recursively
consider some of the nested obligations, in the case of an impl.