Hard-wrapped lines that are too long.

This commit is contained in:
Alexander Regueiro 2018-02-24 01:43:53 +00:00 committed by Who? Me?!
parent 34ec755d27
commit b3d8fba198
15 changed files with 206 additions and 161 deletions

View File

@@ -14,7 +14,7 @@ for file in "$@" ; do
(( inside_block = !$inside_block ))
continue
fi
if ! (( $inside_block )) && ! [[ "$line" =~ " | "|"-|-"|"://"|\[\^[^\ ]+\]: ]] && (( "${#line}" > $MAX_LINE_LENGTH )) ; then
(( bad_lines++ ))
echo -e "\t$line_no : $line"
fi
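The per-line logic of this checker (exempt lines containing table cells, table separators, URLs, or footnote definitions; flag anything else over the limit) can be sketched in Rust as follows. This is purely illustrative; the real tool is the bash script above, and the footnote check here is simplified to a line prefix rather than a full regex match:

```rust
// Illustrative re-implementation of the checker's per-line logic.
const MAX_LINE_LENGTH: usize = 80;

fn is_exempt(line: &str) -> bool {
    // Table rows, separator rows, URLs, and footnote definitions
    // are allowed to exceed the limit.
    line.contains(" | ")
        || line.contains("-|-")
        || line.contains("://")
        || is_footnote_def(line)
}

fn is_footnote_def(line: &str) -> bool {
    // Simplified version of the `\[\^[^\ ]+\]:` pattern above:
    // only checks the start of the line.
    if let Some(rest) = line.strip_prefix("[^") {
        if let Some(end) = rest.find(']') {
            return !rest[..end].contains(' ') && rest[end + 1..].starts_with(':');
        }
    }
    false
}

fn is_too_long(line: &str) -> bool {
    !is_exempt(line) && line.chars().count() > MAX_LINE_LENGTH
}

fn main() {
    println!("{}", is_too_long(&"x".repeat(100))); // true
    println!("{}", is_too_long("a short line")); // false
}
```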

View File

@@ -6,7 +6,8 @@
- [The compiler testing framework](./tests/intro.md)
- [Running tests](./tests/running.md)
- [Adding new tests](./tests/adding.md)
- [Using `compiletest` + commands to control test
  execution](./compiletest.md)
- [Walkthrough: a typical contribution](./walkthrough.md)
- [High-level overview of the compiler source](./high-level-overview.md)
- [The Rustc Driver](./rustc-driver.md)

View File

@@ -1,8 +1,10 @@
# `compiletest`

## Introduction

`compiletest` is the main test harness of the Rust test suite. It allows
test authors to organize large numbers of tests (the Rust compiler has many
thousands), efficient test execution (parallel execution is supported), and
allows the test author to configure behavior and expected results of both
individual and groups of tests.

`compiletest` tests may check test code for success, for failure or in some cases, even failure to compile. Tests are
typically organized as a Rust source file with annotations in comments before and/or within the test code, which serve to
@@ -12,17 +14,17 @@ testing framework, see [`this chapter`](./tests/intro.html) for additional backg
The tests themselves are typically (but not always) organized into "suites"--for example, `run-pass`, a folder
representing tests that should succeed, `run-fail`, a folder holding tests that should compile successfully, but return
a failure (non-zero status), `compile-fail`, a folder holding tests that should fail to compile, and many more. The various
suites are defined in [src/tools/compiletest/src/common.rs](https://github.com/rust-lang/rust/tree/master/src/tools/compiletest/src/common.rs) in the `pub struct Config` declaration. A very good
introduction to the different suites of compiler tests along with details about them can be found in [`Adding new tests`](./tests/adding.html).

## Adding a new test file

Briefly, simply create your new test in the appropriate location under [src/test](https://github.com/rust-lang/rust/tree/master/src/test). No registration of test files is necessary as
`compiletest` will scan the [src/test](https://github.com/rust-lang/rust/tree/master/src/test) subfolder recursively, and will execute any Rust source files it finds as tests.
See [`Adding new tests`](./tests/adding.html) for a complete guide on how to add new tests.
## Header Commands

Source file annotations which appear in comments near the top of the source file *before* any test code are known as header
commands. These commands can instruct `compiletest` to ignore this test, set expectations on whether it is expected to
succeed at compiling, or what the test's return code is expected to be. Header commands (and their inline counterparts,
Error Info commands) are described more fully [here](./tests/adding.html#header-commands-configuring-rustc).
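As an illustration, a test file using header commands might look like the following. This is a hypothetical example; `ignore-windows` and `must-compile-successfully` are commands of the kinds described in [`Adding new tests`](./tests/adding.html):

```rust
// ignore-windows (hypothetical example: skip this test on Windows targets)
// must-compile-successfully

// The actual test code follows the header commands.
fn main() {
    let answer = 6 * 7;
    assert_eq!(answer, 42);
}
```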
@@ -96,7 +98,7 @@ As a concrete example, here is the implementation for the `parse_failure_status(
pub normalize_stderr: Vec<(String, String)>,
+ pub failure_status: i32,
}

impl TestProps {
@@ -260,6 +261,7 @@ impl TestProps {
run_pass: false,
@@ -105,7 +107,7 @@ As a concrete example, here is the implementation for the `parse_failure_status(
+ failure_status: 101,
}
}
@@ -383,6 +385,10 @@ impl TestProps {
if let Some(rule) = config.parse_custom_normalization(ln, "normalize-stderr") {
self.normalize_stderr.push(rule);
@@ -115,12 +117,12 @@ As a concrete example, here is the implementation for the `parse_failure_status(
+ self.failure_status = code;
+ }
});

for key in &["RUST_TEST_NOCAPTURE", "RUST_TEST_THREADS"] {
@@ -488,6 +494,13 @@ impl Config {
self.parse_name_directive(line, "pretty-compare-only")
}

+ fn parse_failure_status(&self, line: &str) -> Option<i32> {
+ match self.parse_name_value_directive(line, "failure-status") {
+ Some(code) => code.trim().parse::<i32>().ok(),
@@ -141,7 +143,7 @@ located in [src/tools/compiletest/src/runtest.rs](https://github.com/rust-lang/r

```diff
@@ -295,11 +295,14 @@ impl<'test> TestCx<'test> {
}

fn check_correct_failure_status(&self, proc_res: &ProcRes) {
- // The value the rust runtime returns on failure
- const RUST_ERR: i32 = 101;
@@ -160,7 +162,7 @@ located in [src/tools/compiletest/src/runtest.rs](https://github.com/rust-lang/r
}
@@ -320,7 +323,6 @@ impl<'test> TestCx<'test> {
);

let proc_res = self.exec_compiled_test();
-
if !proc_res.status.success() {
@@ -172,7 +174,7 @@ located in [src/tools/compiletest/src/runtest.rs](https://github.com/rust-lang/r
);
- panic!();
}
}
```

Note the use of `self.props.failure_status` to access the header command property. In tests which do not specify the failure
status header command, `self.props.failure_status` will evaluate to the default value of 101 at the time of this writing.
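Stripped of the surrounding `Config` type, the parsing shown above amounts to the following stand-alone sketch. The helper here is a simplified stand-in for compiletest's real `parse_name_value_directive`, not its actual implementation:

```rust
// Simplified stand-alone sketch of the directive parsing shown above.

fn parse_name_value_directive(line: &str, directive: &str) -> Option<String> {
    // Looks for `<directive>: <value>` anywhere in the comment line.
    let keyword = format!("{}:", directive);
    line.find(&keyword)
        .map(|pos| line[pos + keyword.len()..].trim().to_string())
}

fn parse_failure_status(line: &str) -> Option<i32> {
    match parse_name_value_directive(line, "failure-status") {
        Some(code) => code.trim().parse::<i32>().ok(),
        None => None,
    }
}

fn main() {
    println!("{:?}", parse_failure_status("// failure-status: 1")); // Some(1)
}
```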

View File

@@ -94,20 +94,21 @@ order to compile a Rust crate, these are the general steps that we
take:

1. **Parsing input**
- this processes the `.rs` files and produces the AST
("abstract syntax tree")
- the AST is defined in `syntax/ast.rs`. It is intended to match the lexical
syntax of the Rust language quite closely.
2. **Name resolution, macro expansion, and configuration**
- once parsing is complete, we process the AST recursively, resolving
paths and expanding macros. This same process also processes `#[cfg]`
nodes, and hence may strip things out of the AST as well.
3. **Lowering to HIR**
- Once name resolution completes, we convert the AST into the HIR,
or "high-level IR". The HIR is defined in `src/librustc/hir/`;
that module also includes the lowering code.
- The HIR is a lightly desugared variant of the AST. It is more processed
than the AST and more suitable for the analyses that follow.
It is **not** required to match the syntax of the Rust language.
- As a simple example, in the **AST**, we preserve the parentheses
that the user wrote, so `((1 + 2) + 3)` and `1 + 2 + 3` parse
into distinct trees, even though they are equivalent. In the
@@ -125,13 +126,13 @@ take:
the types of expressions, the way to resolve methods, and so forth.
- After type-checking, we can do other analyses, such as privacy checking.
4. **Lowering to MIR and post-processing**
- Once type-checking is done, we can lower the HIR into MIR ("middle IR"),
which is a **very** desugared version of Rust, well suited to borrowck
but also to certain high-level optimizations.
5. **Translation to LLVM and LLVM optimizations**
- From MIR, we can produce LLVM IR.
- LLVM then runs its various optimizations, which produces a number of
`.o` files (one for each "codegen unit").
6. **Linking**
- Finally, those `.o` files are linked together.
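The point about the AST preserving parentheses while later IRs drop them can be illustrated with a toy lowering pass. These are hypothetical types, loosely analogous to AST-to-HIR desugaring, not the compiler's actual data structures:

```rust
// Toy AST in which parentheses are explicit nodes, plus a lowering
// pass that discards them, loosely analogous to AST -> HIR desugaring.
#[derive(Debug, PartialEq)]
enum Expr {
    Lit(i64),
    Paren(Box<Expr>),
    Add(Box<Expr>, Box<Expr>),
}

#[derive(Debug, PartialEq)]
enum Lowered {
    Lit(i64),
    Add(Box<Lowered>, Box<Lowered>),
}

fn lower(e: &Expr) -> Lowered {
    match e {
        Expr::Lit(n) => Lowered::Lit(*n),
        Expr::Paren(inner) => lower(inner), // parentheses disappear here
        Expr::Add(a, b) => Lowered::Add(Box::new(lower(a)), Box::new(lower(b))),
    }
}

fn main() {
    // `((1 + 2) + 3)` and `1 + 2 + 3` parse into distinct trees...
    let with_parens = Expr::Paren(Box::new(Expr::Add(
        Box::new(Expr::Paren(Box::new(Expr::Add(
            Box::new(Expr::Lit(1)),
            Box::new(Expr::Lit(2)),
        )))),
        Box::new(Expr::Lit(3)),
    )));
    let without = Expr::Add(
        Box::new(Expr::Add(Box::new(Expr::Lit(1)), Box::new(Expr::Lit(2)))),
        Box::new(Expr::Lit(3)),
    );
    assert_ne!(with_parens, without);
    // ...but lower to the same desugared tree.
    assert_eq!(lower(&with_parens), lower(&without));
}
```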

View File

@@ -63,23 +63,25 @@ carry around references into the HIR, but rather to carry around
*identifier numbers* (or just "ids"). Right now, you will find four
sorts of identifiers in active use:

- `DefId`, which primarily names "definitions" or top-level items.
- You can think of a `DefId` as being shorthand for a very explicit
and complete path, like `std::collections::HashMap`. However,
these paths are able to name things that are not nameable in
normal Rust (e.g. impls), and they also include extra information
about the crate (such as its version number, as two versions of
the same crate can co-exist).
- A `DefId` really consists of two parts, a `CrateNum` (which
identifies the crate) and a `DefIndex` (which indexes into a list
of items that is maintained per crate).
- `HirId`, which combines the index of a particular item with an
offset within that item.
- The key point of a `HirId` is that it is *relative* to some item
(which is named via a `DefId`).
- `BodyId`, which is an absolute identifier that refers to a specific
body (definition of a function or constant) in the crate. It is currently
effectively a "newtype'd" `NodeId`.
- `NodeId`, which is an absolute id that identifies a single node in the HIR
tree.
- While these are still in common use, **they are being slowly phased out**.
- Since they are absolute within the crate, adding a new node anywhere in the
tree causes the `NodeId`s of all subsequent code in the crate to change.
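The structure of these identifiers can be sketched roughly as follows. The field and type shapes here are illustrative only; the real definitions live in the compiler and differ in detail:

```rust
// Rough shape of the identifier types described above;
// names and fields are illustrative, not the compiler's definitions.
#[derive(Clone, Copy, Debug, PartialEq)]
struct CrateNum(u32);

#[derive(Clone, Copy, Debug, PartialEq)]
struct DefIndex(u32);

// A `DefId` pairs a crate with an index into that crate's item list.
#[derive(Clone, Copy, Debug, PartialEq)]
struct DefId {
    krate: CrateNum,
    index: DefIndex,
}

// A `HirId` is *relative*: an owning item plus an offset within it.
#[derive(Clone, Copy, Debug, PartialEq)]
struct HirId {
    owner: DefId,
    local_id: u32,
}

fn main() {
    let item = DefId { krate: CrateNum(0), index: DefIndex(42) };
    let node = HirId { owner: item, local_id: 3 };
    println!("{:?}", node);
}
```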
@@ -119,5 +121,5 @@ of a function/closure or the definition of a constant. Bodies are
associated with an **owner**, which is typically some kind of item
(e.g. an `fn()` or `const`), but could also be a closure expression
(e.g. `|x, y| x + y`). You can use the HIR map to find the body
associated with a given def-id (`maybe_body_owned_by()`) or to find
the owner of a body (`body_owner_def_id()`).

View File

@@ -2,8 +2,9 @@
The compiler is built using a tool called `x.py`. You will need to
have Python installed to run it. But before we get to that, if you're going to
be hacking on rustc, you'll want to tweak the configuration of the compiler.
The default configuration is oriented towards running the compiler as a user,
not a developer.

### Create a config.toml

@@ -84,15 +85,16 @@ What this command will do is the following:
- Using this stage 1 compiler, it will build the standard library
(this is what `src/libstd` means).

This is just a subset of the full rustc build. The **full** rustc build
(what you get if you just say `./x.py build`) has quite a few more steps:

- Build stage1 rustc with stage0 compiler.
- Build libstd with stage1 compiler (up to here is the same).
- Build rustc from `src` again, this time with the stage1 compiler
(this part is new).
- The resulting compiler here is called the "stage2" compiler.
- Build libstd with stage2 compiler.
- Build librustdoc and a bunch of other things.
### Creating a rustup toolchain

@@ -126,12 +128,16 @@ LLVM version: 4.0

### Other x.py commands

Here are a few other useful x.py commands. We'll cover some of them in detail
in other sections:

- Building things:
- `./x.py clean` clean up the build directory (`rm -rf build` works too,
but then you have to rebuild LLVM)
- `./x.py build --stage 1` builds everything using the stage 1 compiler,
not just up to libstd
- `./x.py build` builds the stage2 compiler
- Running tests (see the [section on running tests](./tests/running.html) for
more details):
- `./x.py test --stage 1 src/libstd` runs the `#[test]` tests from libstd
- `./x.py test --stage 1 src/test/run-pass` runs the `run-pass` test suite

View File

@@ -2,8 +2,8 @@
The incremental compilation scheme is, in essence, a surprisingly
simple extension to the overall query system. We'll start by describing
a slightly simplified variant of the real thing, the "basic algorithm",
and then describe some possible improvements.

## The basic algorithm

@@ -40,7 +40,7 @@ There are two key insights here:
produced the same result as the previous time. **If it did,** we
can still mark the query as green, and hence avoid re-executing
dependent queries.

### The try-mark-green algorithm

At the core of incremental compilation is an algorithm called
@@ -66,13 +66,15 @@ Try-mark-green works as follows:
- For each query R in `reads(Q)`, we recursively demand the color
of R using try-mark-green.
- Note: it is important that we visit each node in `reads(Q)` in the same
order as they occurred in the original compilation. See [the section on the
query DAG below](#dag).
- If **any** of the nodes in `reads(Q)` wind up colored **red**, then Q is
dirty.
- We re-execute Q and compare the hash of its result to the hash of the
result from the previous compilation.
- If the hash has not changed, we can mark Q as **green** and return.
- Otherwise, **all** of the nodes in `reads(Q)` must be **green**. In that
case, we can color Q as **green** and return.
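As a toy model of the steps above, the recursion can be sketched like this. The tables here are hypothetical stand-ins for the saved dependency graph and result hashes, not the compiler's real dep-graph machinery:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
enum Color { Red, Green }

// Toy stand-ins for the saved dep-graph: for each query, the queries it
// read last time (in order), plus its old and new result hashes.
struct Graph {
    reads: HashMap<&'static str, Vec<&'static str>>,
    old_hash: HashMap<&'static str, u64>,
    new_hash: HashMap<&'static str, u64>,
    colors: HashMap<&'static str, Color>,
}

impl Graph {
    fn try_mark_green(&mut self, q: &'static str) -> Color {
        if let Some(&c) = self.colors.get(q) {
            return c;
        }
        // Visit reads(Q) in the original order.
        for r in self.reads.get(q).cloned().unwrap_or_default() {
            if self.try_mark_green(r) == Color::Red {
                // A red input makes Q dirty: "re-execute" Q and compare
                // result hashes (here the new hash is just looked up).
                let color = if self.old_hash[q] == self.new_hash[q] {
                    Color::Green
                } else {
                    Color::Red
                };
                self.colors.insert(q, color);
                return color;
            }
        }
        // All inputs green: Q can be marked green without re-execution.
        self.colors.insert(q, Color::Green);
        Color::Green
    }
}

fn main() {
    let mut g = Graph {
        reads: HashMap::from([("main_query", vec!["subquery1", "subquery2"])]),
        old_hash: HashMap::from([("main_query", 1)]),
        new_hash: HashMap::from([("main_query", 1)]),
        colors: HashMap::from([
            ("subquery1", Color::Green),
            ("subquery2", Color::Red),
        ]),
    };
    // subquery2 is red, but main_query's result hash is unchanged,
    // so main_query can still be marked green.
    assert_eq!(g.try_mark_green("main_query"), Color::Green);
}
```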
<a name="dag">

@@ -80,14 +82,14 @@ Try-mark-green works as follows:
The query DAG code is stored in
[`src/librustc/dep_graph`][dep_graph]. Construction of the DAG is done
by instrumenting the query execution.

One key point is that the query DAG also tracks ordering; that is, for
each query Q, we not only track the queries that Q reads, we track the
**order** in which they were read. This allows try-mark-green to walk
those queries back in the same order. This is important because once a
subquery comes back as red, we can no longer be sure that Q will continue
along the same path as before. That is, imagine a query like this:

```rust,ignore
fn main_query(tcx) {
@@ -105,9 +107,10 @@ query `main_query` executes will be `subquery2`, and `subquery3` will
not be executed at all.

But now imagine that in the **next** compilation, the input has
changed such that `subquery1` returns **false**. In this case, `subquery2`
would never execute. If try-mark-green were to visit `reads(main_query)` out
of order, however, it might visit `subquery2` before `subquery1`, and hence
execute it.

This can lead to ICEs and other problems in the compiler.

[dep_graph]: https://github.com/rust-lang/rust/tree/master/src/librustc/dep_graph
@@ -124,8 +127,8 @@ we **also** save the results.
This is why the incremental algorithm separates computing the
**color** of a node, which often does not require its value, from
computing the **result** of a node. Computing the result is done via a simple
algorithm like so:

- Check if a saved result for Q is available. If so, compute the color of Q.
If Q is green, deserialize and return the saved result.
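In the same toy spirit, the result-loading step can be sketched as follows; the types are hypothetical simplifications, with an integer standing in for a deserialized result:

```rust
// Toy sketch of "compute result" on top of color computation: only a
// green color lets us reuse the saved (deserialized) result.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Color { Red, Green }

fn compute_result(saved: Option<i64>, color: Color, execute: impl Fn() -> i64) -> i64 {
    match (saved, color) {
        // Saved result available and query is green: reuse it.
        (Some(value), Color::Green) => value,
        // Otherwise we must execute the query from scratch.
        _ => execute(),
    }
}

fn main() {
    assert_eq!(compute_result(Some(10), Color::Green, || 99), 10);
    assert_eq!(compute_result(Some(10), Color::Red, || 99), 99);
    assert_eq!(compute_result(None, Color::Green, || 99), 99);
}
```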

View File

@@ -74,10 +74,12 @@ match queries::type_of::try_get(tcx, DUMMY_SP, self.did) {
}
```

So, if you get back an `Err` from `try_get`, then a cycle *did* occur. This
means that you must ensure that a compiler error message is reported. You can
do that in two ways:

The simplest is to invoke `err.emit()`. This will emit the cycle error to the
user.

However, often cycles happen because of an illegal program, and you
know at that point that an error either already has been reported or
@@ -192,8 +194,8 @@ fn fubar<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx>, key: DefId) -> Fubar<'tcx> { .. }
N.B. Most of the `rustc_*` crates only provide **local
providers**. Almost all **extern providers** wind up going through the
[`rustc_metadata` crate][rustc_metadata], which loads the information from the
crate metadata. But in some cases there are crates that provide queries for
*both* local and external crates, in which case they define both a
`provide` and a `provide_extern` function that `rustc_driver` can
invoke.

View File

@@ -12,8 +12,8 @@ structure:
- They always begin with the [copyright notice](./conventions.html#copyright);
- then they should have some kind of
[comment explaining what the test is about](#explanatory_comment);
- next, they can have one or more [header commands](#header_commands), which
are special comments that the test interpreter knows how to interpret.
- finally, they have the Rust source. This may have various [error
annotations](#error_annotations) which indicate expected compilation errors or
warnings.

@@ -28,10 +28,12 @@ rough heuristics:
- Some tests have specialized needs:
- need to run gdb or lldb? use the `debuginfo` test suite
- need to inspect LLVM IR or MIR IR? use the `codegen` or `mir-opt` test
suites
- need to run rustdoc? Prefer a `rustdoc` test
- need to inspect the resulting binary in some way? Then use `run-make`
- For most other things, [a `ui` (or `ui-fulldeps`) test](#ui) is to be
preferred:
- `ui` tests subsume both run-pass, compile-fail, and parse-fail tests
- in the case of warnings or errors, `ui` tests capture the full output,
which makes it easier to review but also helps prevent "hidden" regressions
@@ -59,8 +61,8 @@ When writing a new feature, **create a subdirectory to store your
tests**. For example, if you are implementing RFC 1234 ("Widgets"),
then it might make sense to put the tests in directories like:

- `src/test/ui/rfc1234-widgets/`
- `src/test/run-pass/rfc1234-widgets/`
- etc

In other cases, there may already be a suitable directory. (The proper
@@ -118,16 +120,22 @@ fn main() {
These are used to ignore the test in some situations, which means the test won't
be compiled or run.

* `ignore-X` where `X` is a target detail or stage will ignore the
test accordingly (see below)
* `ignore-pretty` will not compile the pretty-printed test (this is
done to test the pretty-printer, but might not always work)
* `ignore-test` always ignores the test
* `ignore-lldb` and `ignore-gdb` will skip a debuginfo test on that
debugger.

Some examples of `X` in `ignore-X`:

* Architecture: `aarch64`, `arm`, `asmjs`, `mips`, `wasm32`, `x86_64`,
`x86`, ...
* OS: `android`, `emscripten`, `freebsd`, `ios`, `linux`, `macos`,
`windows`, ...
* Environment (fourth word of the target triple): `gnu`, `msvc`,
`musl`.
* Pointer width: `32bit`, `64bit`.
* Stage: `stage0`, `stage1`, `stage2`.
@ -140,17 +148,20 @@ source.
* `min-{gdb,lldb}-version`
* `min-llvm-version`
* `must-compile-successfully` for UI tests, indicates that the test is
  supposed to compile, as opposed to the default where the test is
  supposed to error out.
* `compile-flags` passes extra command-line args to the compiler,
  e.g. `compile-flags -g` which forces debuginfo to be enabled.
* `should-fail` indicates that the test should fail; used for "meta
  testing", where we test the compiletest program itself to check that
  it will generate errors in appropriate scenarios. This header is
  ignored for pretty-printer tests.
* `gate-test-X` where `X` is a feature marks the test as "gate test"
  for feature X. Such tests are supposed to ensure that the compiler
  errors when usage of a gated feature is attempted without the proper
  `#![feature(X)]` tag. Each unstable lang feature is required to
  have a gate test.
[`header.rs`]: https://github.com/rust-lang/rust/tree/master/src/tools/compiletest/src/header.rs
@@ -245,8 +256,10 @@ can also make UI tests where compilation is expected to succeed, and
you can even run the resulting program. Just add one of the following
[header commands](#header_commands):

- `// must-compile-successfully` -- compilation should succeed but do
  not run the resulting binary
- `// run-pass` -- compilation should succeed and we should run the
  resulting binary
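For example, a hypothetical UI test using `// run-pass` might look like this (the expected stdout would live in an accompanying reference file):

```rust
// run-pass

// With `// run-pass`, the harness compiles this file, runs the binary,
// and compares the captured output against the reference files.
fn main() {
    let v: Vec<i32> = (1..4).collect();
    println!("{:?}", v); // stdout: [1, 2, 3]
}
```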
### Editing and updating the reference files
@@ -293,7 +306,8 @@ The corresponding reference file will use the normalized output to test both
...
```
Please see [`ui/transmute/main.rs`][mrs] and [`main.stderr`][] for a
concrete usage example.

[mrs]: https://github.com/rust-lang/rust/blob/master/src/test/ui/transmute/main.rs
[`main.stderr`]: https://github.com/rust-lang/rust/blob/master/src/test/ui/transmute/main.stderr
@@ -1,12 +1,12 @@
# The compiler testing framework
The Rust project runs a wide variety of different tests, orchestrated
by the build system (`x.py test`). The main test harness for testing
the compiler itself is a tool called compiletest (sources in
[`src/tools/compiletest`]). This section gives a brief overview of how
the testing framework is set up, and then gets into some of the details
on [how to run tests](./tests/running.html#ui) as well as
[how to add new tests](./tests/adding.html).
[`src/tools/compiletest`]: https://github.com/rust-lang/rust/tree/master/src/tools/compiletest
@@ -24,11 +24,13 @@ Here is a brief summary of the test suites as of this writing and what
they mean. In some cases, the test suites are linked to parts of the manual
that give more details.
- [`ui`](./tests/adding.html#ui) -- tests that check the exact
  stdout/stderr from compilation and/or running the test
- `run-pass` -- tests that are expected to compile and execute
  successfully (no panics)
- `run-pass-valgrind` -- tests that ought to run with valgrind
- `run-fail` -- tests that are expected to compile but then panic
  during execution
- `compile-fail` -- tests that are expected to fail compilation.
- `parse-fail` -- tests that are expected to fail to parse
- `pretty` -- tests targeting the Rust "pretty printer", which
@@ -44,19 +46,20 @@ that give more details.
  results from previous compilations.
- `run-make` -- tests that basically just execute a `Makefile`; the
  ultimate in flexibility but quite annoying to write.
- `rustdoc` -- tests for rustdoc, making sure that the generated files
  contain the expected documentation.
- `*-fulldeps` -- same as above, but indicates that the test depends
  on things other than `libstd` (and hence those things must be built)
## Other Tests

The Rust build system handles running tests for various other things,
including:
- **Tidy** -- This is a custom tool used for validating source code
  style and formatting conventions, such as rejecting long lines.
  There is more information in the
  [section on coding conventions](./conventions.html#formatting).

  Example: `./x.py test src/tools/tidy`
@@ -24,10 +24,11 @@ generally working correctly would be the following:
./x.py test --stage 1 src/test/{ui,compile-fail,run-pass}
```
This will run the `ui`, `compile-fail`, and `run-pass` test suites,
and only with the stage 1 build. Of course, the choice of test suites
is somewhat arbitrary, and may not suit the task you are doing. For
example, if you are hacking on debuginfo, you may be better off with
the debuginfo test suite:
```bash
./x.py test --stage 1 src/test/debuginfo
@@ -32,7 +32,7 @@ impl<'a> Foo<&'a isize> for AnyInt { }
And the question is, does `AnyInt : for<'a> Foo<&'a isize>`? We want the
answer to be yes. The algorithm for figuring it out is closely related
to the subtyping for higher-ranked types (which is described [here][hrsubtype]
and also in a [paper by SPJ]. If you wish to understand higher-ranked
subtyping, we recommend you read the paper). There are a few parts:
@@ -83,7 +83,8 @@ skolemized to `'0` and the impl trait reference is instantiated to
like `'static == '0`. This means that the taint set for `'0` is `{'0,
'static}`, which fails the leak check.

**TODO**: This is because `'static` is not a region variable but is in the
taint set, right?
## Higher-ranked trait obligations
@@ -122,4 +123,5 @@ from. (This is done in `higher_ranked::plug_leaks`). We know that the
leak check passed, so this taint set consists solely of the skolemized
region itself plus various intermediate region variables. We then walk
the trait-reference and convert every region in that taint set back to
a late-bound region, so in this case we'd wind up with
`Baz: for<'a> Bar<&'a isize>`.
@@ -58,20 +58,20 @@ will then be generated in the output binary.
Trait resolution consists of three major parts:

- **Selection**: Deciding how to resolve a specific obligation. For
  example, selection might decide that a specific obligation can be
  resolved by employing an impl which matches the `Self` type, or by using a
  parameter bound (e.g. `T: Trait`). In the case of an impl, selecting one
  obligation can create *nested obligations* because of where clauses
  on the impl itself. It may also require evaluating those nested
  obligations to resolve ambiguities.
- **Fulfillment**: The fulfillment code is what tracks that obligations
  are completely fulfilled. Basically, it is a worklist of obligations
  to be selected: once selection is successful, the obligation is
  removed from the worklist and any nested obligations are enqueued.
- **Coherence**: The coherence checks are intended to ensure that there
  are never overlapping impls, where two impls could be used with
  equal precedence.
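The effect of these phases is visible from ordinary code; here is an illustrative (non-compiler-internal) sketch in which selecting an impl creates a nested obligation from its where clause:

```rust
use std::fmt::Debug;

trait Summarize {
    fn summarize(&self) -> String;
}

// Selecting this impl for an obligation like `Vec<i32>: Summarize`
// creates the nested obligation `i32: Debug`, coming from the where
// clause; fulfillment then tracks that nested obligation too.
impl<T> Summarize for Vec<T>
where
    T: Debug,
{
    fn summarize(&self) -> String {
        format!("{} items: {:?}", self.len(), self)
    }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.summarize(), "3 items: [1, 2, 3]");
}
```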
@@ -174,8 +174,12 @@ select this impl, which will cause the type of `$Y` to be unified to
`usize`. (Note that while assembling candidates, we do the initial
unifications in a transaction, so that they don't affect one another.)

**TODO**: The example says we can "select" the impl, but this section is
talking specifically about candidate assembly. Does this mean we can sometimes
skip confirmation? Or is this poor wording?

**TODO**: Is the unification of `$Y` part of trait resolution or type
inference? Or is this not the same type of "inference variable" as in type
inference?
#### Winnowing: Resolving ambiguities
@@ -282,10 +286,10 @@ to a particular impl.
One interesting twist has to do with nested obligations. In general, in trans,
we only need to do a "shallow" selection for an obligation. That is, we wish to
identify which impl applies, but we do not (yet) need to decide how to select
any nested obligations. Nonetheless, we *do* currently do a complete resolution,
and that is because it can sometimes inform the results of type inference.
That is, we do not have the full substitutions in terms of the type variables
of the impl available to us, so we must run trait selection to figure
everything out.

**TODO**: is this still talking about trans?
@@ -56,8 +56,8 @@ fn not_in_inference<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) {
}
```
In contrast, if you want code that is usable during type inference, then
you need to declare distinct `'gcx` and `'tcx` lifetime parameters:
```rust
fn maybe_in_inference<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, def_id: DefId) {
@@ -141,19 +141,22 @@ In addition to types, there are a number of other arena-allocated data
structures that you can allocate, and which are found in this
module. Here are a few examples:

- `Substs`, allocated with `mk_substs`: this will intern a slice of types,
  often used to specify the values to be substituted for generics
  (e.g. `HashMap<i32, u32>` would be represented as a slice
  `&'tcx [tcx.types.i32, tcx.types.u32]`).
- `TraitRef`, typically passed by value: a **trait reference**
  consists of a reference to a trait along with its various type
  parameters (including `Self`), like `i32: Display` (here, the def-id
  would reference the `Display` trait, and the substs would contain
  `i32`).
- `Predicate` defines something the trait system has to prove (see `traits`
  module).
### Import conventions
Although there is no hard and fast rule, the `ty` module tends to be used like
so:

```rust
use ty::{self, Ty, TyCtxt};
@@ -59,13 +59,14 @@ inference works, or perhaps this blog post on
[Unification in the Chalk project]: http://smallcultfollowing.com/babysteps/blog/2017/03/25/unification-in-chalk-part-1/
All told, the inference context stores four kinds of inference variables as of
this writing:

- Type variables, which come in three varieties:
  - General type variables (the most common). These can be unified with any
    type.
  - Integral type variables, which can only be unified with an integral type,
    and arise from an integer literal expression like `22`.
  - Float type variables, which can only be unified with a float type, and
    arise from a float literal expression like `22.0`.
- Region variables, which represent lifetimes, and arise all over the place.
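The integral and float varieties are easy to observe in ordinary code (a small illustration; the fallbacks shown are the standard `i32`/`f64` defaults):

```rust
fn main() {
    // `22` creates an integral type variable; with no other constraint
    // it falls back to `i32` (4 bytes).
    let x = 22;
    assert_eq!(std::mem::size_of_val(&x), 4);

    // `22.0` creates a float type variable; it falls back to `f64` (8 bytes).
    let y = 22.0;
    assert_eq!(std::mem::size_of_val(&y), 8);

    // An annotation unifies the integral variable with `u8` instead.
    let z: u8 = 22;
    assert_eq!(std::mem::size_of_val(&z), 1);
}
```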
@@ -177,7 +178,7 @@ form of an "outlives" constraint:
    'a: 'b
Actually, the code tends to view them as a subregion relation, but it's the same
idea:
    'b <= 'a
@@ -185,8 +186,8 @@ idea:
(There are various other kinds of constraints, such as "verifys"; see
the `region_constraints` module for details.)
There is one case where we do some amount of eager unification. If you have an
equality constraint between two regions
    'a = 'b