From 9ad3a69332a24e4c98a1c978ba539f921d52d8bf Mon Sep 17 00:00:00 2001
From: Alexander Regueiro
Date: Thu, 1 Feb 2018 03:50:28 +0000
Subject: [PATCH] replaced all instances of `--` (double hyphen) with `–`
 (en-dash)

---
 src/high-level-overview.md  |  8 ++++----
 src/hir.md                  |  6 +++---
 src/how-to-build-and-run.md | 10 +++++-----
 src/macro-expansion.md      |  2 +-
 src/mir.md                  | 10 +++++-----
 src/query.md                |  4 ++--
 src/trait-resolution.md     | 18 +++++++++---------
 src/ty.md                   | 12 ++++++------
 src/type-inference.md       | 12 ++++++------
 9 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/src/high-level-overview.md b/src/high-level-overview.md
index 519d822e..50ed072b 100644
--- a/src/high-level-overview.md
+++ b/src/high-level-overview.md
@@ -13,7 +13,7 @@ many more.
 The source for each crate can be found in a directory like
 `src/libXXX`, where `XXX` is the crate name. (NB. The names and
 divisions of these crates are not set in
-stone and may change over time -- for the time being, we tend towards
+stone and may change over time – for the time being, we tend towards
 a finer-grained division to help with compilation time, though as
 incremental improves that may change.)
 
@@ -53,7 +53,7 @@ also contains some amount of the compiler itself, although that
 is relatively limited.
 
 Finally, all the crates in the bulge in the middle define the bulk of
-the compiler -- they all depend on `rustc`, so that they can make use
+the compiler – they all depend on `rustc`, so that they can make use
 of the various types defined there, and they export public routines
 that `rustc_driver` will invoke as needed (more and more, what these
 crates export are "query definitions", but those are covered later
@@ -117,9 +117,9 @@ take:
 - An important step in processing the HIR is to perform type checking.
This process assigns types to every HIR expression, for example, and also is responsible for resolving some - "type-dependent" paths, such as field accesses (`x.f` -- we + "type-dependent" paths, such as field accesses (`x.f` – we can't know what field `f` is being accessed until we know the - type of `x`) and associated type references (`T::Item` -- we + type of `x`) and associated type references (`T::Item` – we can't know what type `Item` is until we know what `T` is). - Type checking creates "side-tables" (`TypeckTables`) that include the types of expressions, the way to resolve methods, and so forth. diff --git a/src/hir.md b/src/hir.md index 5d5e273c..52fda69a 100644 --- a/src/hir.md +++ b/src/hir.md @@ -1,6 +1,6 @@ # The HIR -The HIR -- "High-level IR" -- is the primary IR used in most of +The HIR – "High-level IR" – is the primary IR used in most of rustc. It is a desugared version of the "abstract syntax tree" (AST) that is generated after parsing, macro expansion, and name resolution have completed. Many parts of HIR resemble Rust surface syntax quite @@ -91,7 +91,7 @@ with a HIR node. For example, if you have a `DefId`, and you would like to convert it to a `NodeId`, you can use `tcx.hir.as_local_node_id(def_id)`. This -returns an `Option` -- this will be `None` if the def-id +returns an `Option` – this will be `None` if the def-id refers to something outside of the current crate (since then it has no HIR node), but otherwise returns `Some(n)` where `n` is the node-id of the definition. @@ -100,7 +100,7 @@ Similarly, you can use `tcx.hir.find(n)` to lookup the node for a `NodeId`. This returns a `Option>`, where `Node` is an enum defined in the map; by matching on this you can find out what sort of node the node-id referred to and also get a pointer to the data -itself. Often, you know what sort of node `n` is -- e.g., if you know +itself. 
Often, you know what sort of node `n` is – e.g., if you know that `n` must be some HIR expression, you can do `tcx.hir.expect_expr(n)`, which will extract and return the `&hir::Expr`, panicking if `n` is not in fact an expression. diff --git a/src/how-to-build-and-run.md b/src/how-to-build-and-run.md index 7992dc98..2657da84 100644 --- a/src/how-to-build-and-run.md +++ b/src/how-to-build-and-run.md @@ -129,9 +129,9 @@ LLVM version: 4.0 Here are a few other useful x.py commands. We'll cover some of them in detail in other sections: - Building things: - - `./x.py clean` -- clean up the build directory (`rm -rf build` works too, but then you have to rebuild LLVM) - - `./x.py build --stage 1` -- builds everything using the stage 1 compiler, not just up to libstd - - `./x.py build` -- builds the stage2 compiler + - `./x.py clean` – clean up the build directory (`rm -rf build` works too, but then you have to rebuild LLVM) + - `./x.py build --stage 1` – builds everything using the stage 1 compiler, not just up to libstd + - `./x.py build` – builds the stage2 compiler - Running tests (see the section [running tests](./running-tests.html) for more details): - - `./x.py test --stage 1 src/libstd` -- runs the `#[test]` tests from libstd - - `./x.py test --stage 1 src/test/run-pass` -- runs the `run-pass` test suite + - `./x.py test --stage 1 src/libstd` – runs the `#[test]` tests from libstd + - `./x.py test --stage 1 src/test/run-pass` – runs the `run-pass` test suite diff --git a/src/macro-expansion.md b/src/macro-expansion.md index 8f94d6a0..55a550b5 100644 --- a/src/macro-expansion.md +++ b/src/macro-expansion.md @@ -33,7 +33,7 @@ a tree of _tokens_. A _token_ is a single "unit" of the grammar, such as an identifier (e.g., `foo`) or punctuation (e.g., `=>`). There are also other special tokens, such as `EOF`, which indicates that there are no more tokens. 
Token trees resulting from paired parentheses-like characters (`(`...`)`, -`[`...`]`, and `{`...`}`) -- they include the open and close and all the tokens +`[`...`]`, and `{`...`}`) – they include the open and close and all the tokens in between (we do require that parentheses-like characters be balanced). Having macro expansion operate on token streams rather than the raw bytes of a source file abstracts away a lot of complexity. The macro expander (and much of the diff --git a/src/mir.md b/src/mir.md index dbac8eb5..34f6bbb8 100644 --- a/src/mir.md +++ b/src/mir.md @@ -49,17 +49,17 @@ query. Each suite consists of multiple optimizations and transformations. These suites represent useful intermediate points where we want to access the MIR for type checking or other purposes: -- `mir_build(D)` -- not a query, but this constructs the initial MIR -- `mir_const(D)` -- applies some simple transformations to make MIR ready for constant evaluation; -- `mir_validated(D)` -- applies some more transformations, making MIR ready for borrow checking; -- `optimized_mir(D)` -- the final state, after all optimizations have been performed. +- `mir_build(D)` – not a query, but this constructs the initial MIR +- `mir_const(D)` – applies some simple transformations to make MIR ready for constant evaluation; +- `mir_validated(D)` – applies some more transformations, making MIR ready for borrow checking; +- `optimized_mir(D)` – the final state, after all optimizations have been performed. ### Stealing The intermediate queries `mir_const()` and `mir_validated()` yield up a `&'tcx Steal>`, allocated using `tcx.alloc_steal_mir()`. This indicates that the result may be -**stolen** by the next suite of optimizations -- this is an +**stolen** by the next suite of optimizations – this is an optimization to avoid cloning the MIR. Attempting to use a stolen result will cause a panic in the compiler. 
Therefore, it is important that you do not read directly from these intermediate queries except as diff --git a/src/query.md b/src/query.md index 65d65130..fa17a385 100644 --- a/src/query.md +++ b/src/query.md @@ -11,7 +11,7 @@ it to you. [hl]: high-level-overview.html -Query execution is **memoized** -- so the first time you invoke a +Query execution is **memoized** – so the first time you invoke a query, it will go do the computation, but the next time, the result is returned from a hashtable. Moreover, query execution fits nicely into **incremental computation**; the idea is roughly that, when you do a @@ -98,7 +98,7 @@ message"`. This is basically just a precaution in case you are wrong. So you may be wondering what happens when you invoke a query method. The answer is that, for each query, the compiler maintains a -cache -- if your query has already been executed, then, the answer is +cache – if your query has already been executed, then, the answer is simple: we clone the return value out of the cache and return it (therefore, you should try to ensure that the return types of queries are cheaply cloneable; insert a `Rc` if necessary). diff --git a/src/trait-resolution.md b/src/trait-resolution.md index 58efbd05..63f72b01 100644 --- a/src/trait-resolution.md +++ b/src/trait-resolution.md @@ -73,16 +73,16 @@ resolved and, if so, how it is to be resolved (via impl, where clause, etc). The main interface is the `select()` function, which takes an obligation and returns a `SelectionResult`. There are three possible outcomes: -- `Ok(Some(selection))` -- yes, the obligation can be resolved, and +- `Ok(Some(selection))` – yes, the obligation can be resolved, and `selection` indicates how. If the impl was resolved via an impl, then `selection` may also indicate nested obligations that are required by the impl. -- `Ok(None)` -- we are not yet sure whether the obligation can be +- `Ok(None)` – we are not yet sure whether the obligation can be resolved or not. 
This happens most commonly when the obligation contains unbound type variables. -- `Err(err)` -- the obligation definitely cannot be resolved due to a +- `Err(err)` – the obligation definitely cannot be resolved due to a type error, or because there are no impls that could possibly apply, etc. @@ -95,7 +95,7 @@ Searches for impls/where-clauses/etc that might possibly be used to satisfy the obligation. Each of those is called a candidate. To avoid ambiguity, we want to find exactly one candidate that is definitively applicable. In some cases, we may not -know whether an impl/where-clause applies or not -- this occurs when +know whether an impl/where-clause applies or not – this occurs when the obligation contains unbound inference variables. The basic idea for candidate assembly is to do a first pass in which @@ -172,11 +172,11 @@ impl Get for Box { ``` What happens when we invoke `get_it(&box 1_u16)`, for example? In this -case, the `Self` type is `Box` -- that unifies with both impls, +case, the `Self` type is `Box` – that unifies with both impls, because the first applies to all types, and the second to all boxes. In the olden days we'd have called this ambiguous. But what we do now is do a second *winnowing* pass that considers where clauses -and attempts to remove candidates -- in this case, the first impl only +and attempts to remove candidates – in this case, the first impl only applies if `Box : Copy`, which doesn't hold. After winnowing, then, we are left with just one candidate, so we can proceed. There is a test of this in `src/test/run-pass/traits-conditional-dispatch.rs`. @@ -326,7 +326,7 @@ to a `TraitRef`. We would then create the `TraitRef` from the impl, using fresh variables for it's bound regions (and thus getting `Foo<&'$a isize>`, where `'$a` is the inference variable for `'a`). Next we relate the two trait refs, yielding a graph with the constraint -that `'0 == '$a`. Finally, we check for skolemization "leaks" -- a +that `'0 == '$a`. 
Finally, we check for skolemization "leaks" – a leak is basically any attempt to relate a skolemized region to another skolemized region, or to any region that pre-existed the impl match. The leak check is done by searching from the skolemized region to find @@ -457,7 +457,7 @@ and the graph is consulted when propagating defaults down the specialization hierarchy. You might expect that the specialization graph would be used during -selection -- i.e., when actually performing specialization. This is +selection – i.e., when actually performing specialization. This is not done for two reasons: - It's merely an optimization: given a set of candidates that apply, @@ -476,7 +476,7 @@ not done for two reasons: Trait impl selection can succeed even when multiple impls can apply, as long as they are part of the same specialization family. In that -case, it returns a *single* impl on success -- this is the most +case, it returns a *single* impl on success – this is the most specialized impl *known* to apply. However, if there are any inference variables in play, the returned impl may not be the actual impl we will use at trans time. Thus, we take special care to avoid projecting diff --git a/src/ty.md b/src/ty.md index e29ecb5e..0732e88b 100644 --- a/src/ty.md +++ b/src/ty.md @@ -80,9 +80,9 @@ pub type Ty<'tcx> = &'tcx TyS<'tcx>; [the HIR]: ./hir.html -You can basically ignore the `TyS` struct -- you will basically never +You can basically ignore the `TyS` struct – you will basically never access it explicitly. We always pass it by reference using the -`Ty<'tcx>` alias -- the only exception I think is to define inherent +`Ty<'tcx>` alias – the only exception I think is to define inherent methods on types. Instances of `TyS` are only ever allocated in one of the rustc arenas (never e.g. on the stack). @@ -115,7 +115,7 @@ of type variants. 
For example: let array_ty = tcx.mk_array(elem_ty, len * 2); ``` -These methods all return a `Ty<'tcx>` -- note that the lifetime you +These methods all return a `Ty<'tcx>` – note that the lifetime you get back is the lifetime of the innermost arena that this `tcx` has access to. In fact, types are always canonicalized and interned (so we never allocate exactly the same type twice) and are always allocated @@ -125,7 +125,7 @@ allocated in the global arena). However, the lifetime `'tcx` is always a safe approximation, so that is what you get back. > NB. Because types are interned, it is possible to compare them for -> equality efficiently using `==` -- however, this is almost never what +> equality efficiently using `==` – however, this is almost never what > you want to do unless you happen to be hashing and looking for > duplicates. This is because often in Rust there are multiple ways to > represent the same type, particularly once inference is involved. If @@ -141,10 +141,10 @@ In addition to types, there are a number of other arena-allocated data structures that you can allocate, and which are found in this module. Here are a few examples: -- `Substs`, allocated with `mk_substs` -- this will intern a slice of types, often used to +- `Substs`, allocated with `mk_substs` – this will intern a slice of types, often used to specify the values to be substituted for generics (e.g., `HashMap` would be represented as a slice `&'tcx [tcx.types.i32, tcx.types.u32]`). 
-- `TraitRef`, typically passed by value -- a **trait reference** +- `TraitRef`, typically passed by value – a **trait reference** consists of a reference to a trait along with its various type parameters (including `Self`), like `i32: Display` (here, the def-id would reference the `Display` trait, and the substs would contain diff --git a/src/type-inference.md b/src/type-inference.md index feb69419..070b7bb4 100644 --- a/src/type-inference.md +++ b/src/type-inference.md @@ -35,7 +35,7 @@ function and disposed after it returns. [ty-readme]: ty.html Within the closure, the infcx will have the type `InferCtxt<'cx, 'gcx, -'tcx>` for some fresh `'cx` and `'tcx` -- the latter corresponds to +'tcx>` for some fresh `'cx` and `'tcx` – the latter corresponds to the lifetime of this temporary arena, and the `'cx` is the lifetime of the `InferCtxt` itself. (Again, see [that ty README][ty-readme] for more details on this setup.) @@ -47,7 +47,7 @@ created. See `InferCtxtBuilder` for more information. ## Inference variables The main purpose of the inference context is to house a bunch of -**inference variables** -- these represent types or regions whose precise +**inference variables** – these represent types or regions whose precise value is not yet known, but will be uncovered as we perform type-checking. If you're familiar with the basic ideas of unification from H-M type @@ -95,15 +95,15 @@ doing this unification, and in what environment, and the `eq` method performs the actual equality constraint. When you equate things, you force them to be precisely equal. Equating -returns a `InferResult` -- if it returns `Err(err)`, then equating +returns a `InferResult` – if it returns `Err(err)`, then equating failed, and the enclosing `TypeError` will tell you what went wrong. The success case is perhaps more interesting. 
The "primary" return -type of `eq` is `()` -- that is, when it succeeds, it doesn't return a +type of `eq` is `()` – that is, when it succeeds, it doesn't return a value of any particular interest. Rather, it is executed for its side-effects of constraining type variables and so forth. However, the actual return type is not `()`, but rather `InferOk<()>`. The -`InferOk` type is used to carry extra trait obligations -- your job is +`InferOk` type is used to carry extra trait obligations – your job is to ensure that these are fulfilled (typically by enrolling them in a fulfillment context). See the [trait README] for more background here. @@ -117,7 +117,7 @@ basic concepts apply as above. Sometimes you would like to know if it is *possible* to equate two types without error. You can test that with `infcx.can_eq` (or `infcx.can_sub` for subtyping). If this returns `Ok`, then equality -is possible -- but in all cases, any side-effects are reversed. +is possible – but in all cases, any side-effects are reversed. Be aware though that the success or failure of these methods is always **modulo regions**. That is, two types `&'a u32` and `&'b u32` will