Merge from rust-lang/rust

This commit is contained in:
Laurențiu Nicola 2025-04-28 11:06:53 +03:00
commit eeea654b38
99 changed files with 1584 additions and 787 deletions

View File

@@ -18,6 +18,7 @@ jobs:
   MDBOOK_LINKCHECK2_VERSION: 0.9.1
   MDBOOK_MERMAID_VERSION: 0.12.6
   MDBOOK_TOC_VERSION: 0.11.2
+  MDBOOK_OUTPUT__LINKCHECK__FOLLOW_WEB_LINKS: ${{ github.event_name != 'pull_request' }}
   DEPLOY_DIR: book/html
   BASE_SHA: ${{ github.event.pull_request.base.sha }}
   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -43,13 +43,13 @@ rustdocs][rustdocs].
 To build a local static HTML site, install [`mdbook`](https://github.com/rust-lang/mdBook) with:
 ```
-> cargo install mdbook mdbook-linkcheck2 mdbook-toc mdbook-mermaid
+cargo install mdbook mdbook-linkcheck2 mdbook-toc mdbook-mermaid
 ```
 and execute the following command in the root of the repository:
 ```
-> mdbook build --open
+mdbook build --open
 ```
 The build files are found in the `book/html` directory.
@@ -61,8 +61,8 @@ checking is **not** run by default locally, though it is in CI. To enable it
 locally, set the environment variable `ENABLE_LINKCHECK=1` like in the
 following example.
-```console
-$ ENABLE_LINKCHECK=1 mdbook serve
+```
+ENABLE_LINKCHECK=1 mdbook serve
 ```
 ### Table of Contents
@@ -82,16 +82,29 @@ cargo +stable install josh-proxy --git https://github.com/josh-project/josh --ta
 Older versions of `josh-proxy` may not round trip commits losslessly so it is important to install this exact version.
 ### Pull changes from `rust-lang/rust` into this repository
 1) Checkout a new branch that will be used to create a PR into `rust-lang/rustc-dev-guide`
 2) Run the pull command
 ```
-$ cargo run --manifest-path josh-sync/Cargo.toml rustc-pull
+cargo run --manifest-path josh-sync/Cargo.toml rustc-pull
 ```
 3) Push the branch to your fork and create a PR into `rustc-dev-guide`
 ### Push changes from this repository into `rust-lang/rust`
 1) Run the push command to create a branch named `<branch-name>` in a `rustc` fork under the `<gh-username>` account
 ```
-$ cargo run --manifest-path josh-sync/Cargo.toml rustc-push <branch-name> <gh-username>
+cargo run --manifest-path josh-sync/Cargo.toml rustc-push <branch-name> <gh-username>
 ```
 2) Create a PR from `<branch-name>` into `rust-lang/rust`
+#### Minimal git config
+For simplicity (ease of implementation purposes), the josh-sync script simply calls out to system git. This means that the git invocation may be influenced by global (or local) git configuration.
+You may observe "Nothing to pull" even if you *know* rustc-pull has something to pull if your global git config sets `fetch.prunetags = true` (and possibly other configurations may cause unexpected outcomes).
+To minimize the likelihood of this happening, you may wish to keep a separate *minimal* git config that *only* has `[user]` entries from global git config, then repoint system git to use the minimal git config instead. E.g.
+```
+GIT_CONFIG_GLOBAL=/path/to/minimal/gitconfig GIT_CONFIG_SYSTEM='' cargo +stable run --manifest-path josh-sync/Cargo.toml -- rustc-pull
+```

View File

@@ -1,6 +1,6 @@
 [book]
 title = "Rust Compiler Development Guide"
-author = "The Rust Project Developers"
+authors = ["The Rust Project Developers"]
 description = "A guide to developing the Rust compiler (rustc)"
 [build]
@@ -62,5 +62,7 @@ warning-policy = "error"
 "/diagnostics/sessiondiagnostic.html" = "diagnostic-structs.html"
 "/diagnostics/diagnostic-codes.html" = "error-codes.html"
 "/miri.html" = "const-eval/interpret.html"
-"/tests/integration.html" = "ecosystem.html"
+"/tests/fuchsia.html" = "ecosystem-test-jobs/fuchsia.html"
 "/tests/headers.html" = "directives.html"
+"/tests/integration.html" = "ecosystem.html"
+"/tests/rust-for-linux.html" = "ecosystem-test-jobs/rust-for-linux.html"

View File

@@ -1,11 +1,8 @@
-use std::{
-    collections::BTreeMap,
-    convert::TryInto as _,
-    env, fmt, fs,
-    path::{Path, PathBuf},
-    process,
-    str::FromStr,
-};
+use std::collections::BTreeMap;
+use std::convert::TryInto as _;
+use std::path::{Path, PathBuf};
+use std::str::FromStr;
+use std::{env, fmt, fs, process};
 use chrono::{Datelike as _, Month, TimeZone as _, Utc};
 use glob::glob;
@@ -19,19 +16,13 @@ struct Date {
 impl Date {
     fn months_since(self, other: Date) -> Option<u32> {
-        let self_chrono = Utc
-            .with_ymd_and_hms(self.year.try_into().unwrap(), self.month, 1, 0, 0, 0)
-            .unwrap();
-        let other_chrono = Utc
-            .with_ymd_and_hms(other.year.try_into().unwrap(), other.month, 1, 0, 0, 0)
-            .unwrap();
+        let self_chrono =
+            Utc.with_ymd_and_hms(self.year.try_into().unwrap(), self.month, 1, 0, 0, 0).unwrap();
+        let other_chrono =
+            Utc.with_ymd_and_hms(other.year.try_into().unwrap(), other.month, 1, 0, 0, 0).unwrap();
         let duration_since = self_chrono.signed_duration_since(other_chrono);
         let months_since = duration_since.num_days() / 30;
-        if months_since < 0 {
-            None
-        } else {
-            Some(months_since.try_into().unwrap())
-        }
+        if months_since < 0 { None } else { Some(months_since.try_into().unwrap()) }
     }
 }
@@ -66,26 +57,18 @@ fn collect_dates_from_file(date_regex: &Regex, text: &str) -> Vec<(usize, Date)>
     date_regex
         .captures_iter(text)
         .filter_map(|cap| {
-            if let (Some(month), Some(year), None, None) | (None, None, Some(month), Some(year)) = (
-                cap.name("m1"),
-                cap.name("y1"),
-                cap.name("m2"),
-                cap.name("y2"),
-            ) {
+            if let (Some(month), Some(year), None, None) | (None, None, Some(month), Some(year)) =
+                (cap.name("m1"), cap.name("y1"), cap.name("m2"), cap.name("y2"))
+            {
                 let year = year.as_str().parse().expect("year");
-                let month = Month::from_str(month.as_str())
-                    .expect("month")
-                    .number_from_month();
+                let month = Month::from_str(month.as_str()).expect("month").number_from_month();
                 Some((cap.get(0).expect("all").range(), Date { year, month }))
             } else {
                 None
             }
         })
         .map(|(byte_range, date)| {
-            line += text[end_of_last_cap..byte_range.end]
-                .chars()
-                .filter(|c| *c == '\n')
-                .count();
+            line += text[end_of_last_cap..byte_range.end].chars().filter(|c| *c == '\n').count();
             end_of_last_cap = byte_range.end;
             (line, date)
         })
@@ -138,10 +121,7 @@ fn main() {
     let root_dir_path = Path::new(&root_dir);
     let glob_pat = format!("{}/**/*.md", root_dir);
     let today_chrono = Utc::now().date_naive();
-    let current_month = Date {
-        year: today_chrono.year_ce().1,
-        month: today_chrono.month(),
-    };
+    let current_month = Date { year: today_chrono.year_ce().1, month: today_chrono.month() };
     let dates_by_file = collect_dates(glob(&glob_pat).unwrap().map(Result::unwrap));
     let dates_by_file: BTreeMap<_, _> =
@@ -173,10 +153,7 @@ fn main() {
     println!();
     for (path, dates) in dates_by_file {
-        println!(
-            "- {}",
-            path.strip_prefix(&root_dir_path).unwrap_or(&path).display(),
-        );
+        println!("- {}", path.strip_prefix(&root_dir_path).unwrap_or(&path).display(),);
         for (line, date) in dates {
             println!("  - [ ] line {}: {}", line, date);
         }
@@ -191,14 +168,8 @@ mod tests {
     #[test]
     fn test_months_since() {
-        let date1 = Date {
-            year: 2020,
-            month: 3,
-        };
-        let date2 = Date {
-            year: 2021,
-            month: 1,
-        };
+        let date1 = Date { year: 2020, month: 3 };
+        let date2 = Date { year: 2021, month: 1 };
         assert_eq!(date2.months_since(date1), Some(10));
     }
@@ -273,83 +244,17 @@ Test8
         assert_eq!(
             collect_dates_from_file(&make_date_regex(), text),
             vec![
-                (
-                    3,
-                    Date {
-                        year: 2021,
-                        month: 1,
-                    }
-                ),
-                (
-                    6,
-                    Date {
-                        year: 2021,
-                        month: 2,
-                    }
-                ),
-                (
-                    9,
-                    Date {
-                        year: 2021,
-                        month: 3,
-                    }
-                ),
-                (
-                    11,
-                    Date {
-                        year: 2021,
-                        month: 4,
-                    }
-                ),
-                (
-                    17,
-                    Date {
-                        year: 2021,
-                        month: 5,
-                    }
-                ),
-                (
-                    20,
-                    Date {
-                        year: 2021,
-                        month: 1,
-                    }
-                ),
-                (
-                    23,
-                    Date {
-                        year: 2021,
-                        month: 2,
-                    }
-                ),
-                (
-                    26,
-                    Date {
-                        year: 2021,
-                        month: 3,
-                    }
-                ),
-                (
-                    28,
-                    Date {
-                        year: 2021,
-                        month: 4,
-                    }
-                ),
-                (
-                    34,
-                    Date {
-                        year: 2021,
-                        month: 5,
-                    }
-                ),
-                (
-                    38,
-                    Date {
-                        year: 2021,
-                        month: 6,
-                    }
-                ),
+                (3, Date { year: 2021, month: 1 }),
+                (6, Date { year: 2021, month: 2 }),
+                (9, Date { year: 2021, month: 3 }),
+                (11, Date { year: 2021, month: 4 }),
+                (17, Date { year: 2021, month: 5 }),
+                (20, Date { year: 2021, month: 1 }),
+                (23, Date { year: 2021, month: 2 }),
+                (26, Date { year: 2021, month: 3 }),
+                (28, Date { year: 2021, month: 4 }),
+                (34, Date { year: 2021, month: 5 }),
+                (38, Date { year: 2021, month: 6 }),
             ],
         );
     }

View File

@@ -1,4 +1,4 @@
-// Tested with nightly-2025-02-13
+// Tested with nightly-2025-03-28
 #![feature(rustc_private)]
@@ -34,9 +34,9 @@ impl rustc_span::source_map::FileLoader for MyFileLoader {
     fn read_file(&self, path: &Path) -> io::Result<String> {
         if path == Path::new("main.rs") {
             Ok(r#"
+static MESSAGE: &str = "Hello, World!";
 fn main() {
-    let message = "Hello, World!";
-    println!("{message}");
+    println!("{MESSAGE}");
 }
 "#
             .to_string())
@@ -71,14 +71,12 @@ impl rustc_driver::Callbacks for MyCallbacks {
     fn after_analysis(&mut self, _compiler: &Compiler, tcx: TyCtxt<'_>) -> Compilation {
         // Analyze the program and inspect the types of definitions.
-        for id in tcx.hir().items() {
-            let hir = tcx.hir();
-            let item = hir.item(id);
+        for id in tcx.hir_free_items() {
+            let item = &tcx.hir_item(id);
             match item.kind {
-                rustc_hir::ItemKind::Static(_, _, _) | rustc_hir::ItemKind::Fn { .. } => {
-                    let name = item.ident;
+                rustc_hir::ItemKind::Static(ident, ..) | rustc_hir::ItemKind::Fn { ident, .. } => {
                     let ty = tcx.type_of(item.hir_id().owner.def_id);
-                    println!("{name:?}:\t{ty:?}")
+                    println!("{ident:?}:\t{ty:?}")
                 }
                 _ => (),
             }

View File

@@ -1,4 +1,4 @@
-// Tested with nightly-2025-02-13
+// Tested with nightly-2025-03-28
 #![feature(rustc_private)]
@@ -20,7 +20,7 @@ use std::path::Path;
 use std::sync::Arc;
 use rustc_ast_pretty::pprust::item_to_string;
-use rustc_driver::{run_compiler, Compilation};
+use rustc_driver::{Compilation, run_compiler};
 use rustc_interface::interface::{Compiler, Config};
 use rustc_middle::ty::TyCtxt;
@@ -70,11 +70,9 @@ impl rustc_driver::Callbacks for MyCallbacks {
     }
     fn after_analysis(&mut self, _compiler: &Compiler, tcx: TyCtxt<'_>) -> Compilation {
-        // Every compilation contains a single crate.
-        let hir_krate = tcx.hir();
         // Iterate over the top-level items in the crate, looking for the main function.
-        for id in hir_krate.items() {
-            let item = hir_krate.item(id);
+        for id in tcx.hir_free_items() {
+            let item = &tcx.hir_item(id);
             // Use pattern-matching to find a specific node inside the main function.
             if let rustc_hir::ItemKind::Fn { body, .. } = item.kind {
                 let expr = &tcx.hir_body(body).value;

View File

@@ -1,4 +1,4 @@
-// Tested with nightly-2025-02-13
+// Tested with nightly-2025-03-28
 #![feature(rustc_private)]
@@ -64,14 +64,13 @@ fn main() {
     println!("{krate:?}");
     // Analyze the program and inspect the types of definitions.
     rustc_interface::create_and_enter_global_ctxt(&compiler, krate, |tcx| {
-        for id in tcx.hir().items() {
-            let hir = tcx.hir();
-            let item = hir.item(id);
+        for id in tcx.hir_free_items() {
+            let item = tcx.hir_item(id);
             match item.kind {
-                rustc_hir::ItemKind::Static(_, _, _) | rustc_hir::ItemKind::Fn { .. } => {
-                    let name = item.ident;
+                rustc_hir::ItemKind::Static(ident, ..)
+                | rustc_hir::ItemKind::Fn { ident, .. } => {
                     let ty = tcx.type_of(item.hir_id().owner.def_id);
-                    println!("{name:?}:\t{ty:?}")
+                    println!("{ident:?}:\t{ty:?}")
                 }
                 _ => (),
             }

View File

@@ -1,4 +1,4 @@
-// Tested with nightly-2025-02-13
+// Tested with nightly-2025-03-28
 #![feature(rustc_private)]
@@ -86,8 +86,10 @@ fn main() {
     rustc_interface::run_compiler(config, |compiler| {
         let krate = rustc_interface::passes::parse(&compiler.sess);
         rustc_interface::create_and_enter_global_ctxt(&compiler, krate, |tcx| {
-            // Run the analysis phase on the local crate to trigger the type error.
-            let _ = tcx.analysis(());
+            // Iterate all the items defined and perform type checking.
+            tcx.par_hir_body_owners(|item_def_id| {
+                tcx.ensure_ok().typeck(item_def_id);
+            });
         });
         // If the compiler has encountered errors when this closure returns, it will abort (!) the program.
         // We avoid this by resetting the error count before returning

View File

@@ -1 +1 @@
-4ecd70ddd1039a3954056c1071e40278048476fa
+b8005bff3248cfc6e327faf4fa631ac49bb49ba9

rustfmt.toml Normal file
View File

@@ -0,0 +1,7 @@
+# matches that of rust-lang/rust
+style_edition = "2024"
+use_small_heuristics = "Max"
+merge_derives = false
+group_imports = "StdExternalCrate"
+imports_granularity = "Module"
+use_field_init_shorthand = true

View File

@@ -10,9 +10,9 @@
 - [How to build and run the compiler](./building/how-to-build-and-run.md)
 - [Quickstart](./building/quickstart.md)
 - [Prerequisites](./building/prerequisites.md)
-- [Suggested Workflows](./building/suggested.md)
+- [Suggested workflows](./building/suggested.md)
 - [Distribution artifacts](./building/build-install-distribution-artifacts.md)
-- [Building Documentation](./building/compiler-documenting.md)
+- [Building documentation](./building/compiler-documenting.md)
 - [Rustdoc overview](./rustdoc.md)
 - [Adding a new target](./building/new-target.md)
 - [Optimized build](./building/optimized-build.md)
@@ -28,8 +28,11 @@
 - [Minicore](./tests/minicore.md)
 - [Ecosystem testing](./tests/ecosystem.md)
 - [Crater](./tests/crater.md)
-- [Fuchsia](./tests/fuchsia.md)
-- [Rust for Linux](./tests/rust-for-linux.md)
+- [Fuchsia](./tests/ecosystem-test-jobs/fuchsia.md)
+- [Rust for Linux](./tests/ecosystem-test-jobs/rust-for-linux.md)
+- [Codegen backend testing](./tests/codegen-backend-tests/intro.md)
+- [Cranelift codegen backend](./tests/codegen-backend-tests/cg_clif.md)
+- [GCC codegen backend](./tests/codegen-backend-tests/cg_gcc.md)
 - [Performance testing](./tests/perf.md)
 - [Suggest tests tool](./tests/suggest-tests.md)
 - [Misc info](./tests/misc.md)
@@ -39,11 +42,11 @@
 - [with the linux perf tool](./profiling/with_perf.md)
 - [with Windows Performance Analyzer](./profiling/wpa_profiling.md)
 - [with the Rust benchmark suite](./profiling/with_rustc_perf.md)
-- [crates.io Dependencies](./crates-io.md)
+- [crates.io dependencies](./crates-io.md)
 # Contributing to Rust
-- [Contribution Procedures](./contributing.md)
+- [Contribution procedures](./contributing.md)
 - [About the compiler team](./compiler-team.md)
 - [Using Git](./git.md)
 - [Mastering @rustbot](./rustbot.md)
@@ -53,7 +56,7 @@
 - [Stabilizing Features](./stabilization_guide.md)
 - [Feature Gates](./feature-gates.md)
 - [Coding conventions](./conventions.md)
-- [Procedures for Breaking Changes](./bug-fix-procedure.md)
+- [Procedures for breaking changes](./bug-fix-procedure.md)
 - [Using external repositories](./external-repos.md)
 - [Fuzzing](./fuzzing.md)
 - [Notification groups](notification-groups/about.md)
@@ -61,12 +64,13 @@
 - [ARM](notification-groups/arm.md)
 - [Cleanup Crew](notification-groups/cleanup-crew.md)
 - [Emscripten](notification-groups/emscripten.md)
+- [Fuchsia](notification-groups/fuchsia.md)
 - [LLVM](notification-groups/llvm.md)
 - [RISC-V](notification-groups/risc-v.md)
+- [Rust for Linux](notification-groups/rust-for-linux.md)
 - [WASI](notification-groups/wasi.md)
 - [WebAssembly](notification-groups/wasm.md)
 - [Windows](notification-groups/windows.md)
-- [Rust for Linux](notification-groups/rust-for-linux.md)
 - [Licenses](./licenses.md)
 - [Editions](guides/editions.md)
@@ -77,6 +81,7 @@
 - [How Bootstrap does it](./building/bootstrapping/how-bootstrap-does-it.md)
 - [Writing tools in Bootstrap](./building/bootstrapping/writing-tools-in-bootstrap.md)
 - [Debugging bootstrap](./building/bootstrapping/debugging-bootstrap.md)
+- [cfg(bootstrap) in dependencies](./building/bootstrapping/bootstrap-in-dependencies.md)
 # High-level Compiler Architecture
@@ -84,29 +89,35 @@
 - [Overview of the compiler](./overview.md)
 - [The compiler source code](./compiler-src.md)
 - [Queries: demand-driven compilation](./query.md)
-- [The Query Evaluation Model in Detail](./queries/query-evaluation-model-in-detail.md)
+- [The Query Evaluation Model in detail](./queries/query-evaluation-model-in-detail.md)
 - [Incremental compilation](./queries/incremental-compilation.md)
-- [Incremental compilation In Detail](./queries/incremental-compilation-in-detail.md)
-- [Debugging and Testing](./incrcomp-debugging.md)
+- [Incremental compilation in detail](./queries/incremental-compilation-in-detail.md)
+- [Debugging and testing](./incrcomp-debugging.md)
 - [Salsa](./queries/salsa.md)
-- [Memory Management in Rustc](./memory.md)
-- [Serialization in Rustc](./serialization.md)
-- [Parallel Compilation](./parallel-rustc.md)
+- [Memory management in rustc](./memory.md)
+- [Serialization in rustc](./serialization.md)
+- [Parallel compilation](./parallel-rustc.md)
 - [Rustdoc internals](./rustdoc-internals.md)
 - [Search](./rustdoc-internals/search.md)
+- [The `rustdoc` test suite](./rustdoc-internals/rustdoc-test-suite.md)
+- [Autodiff internals](./autodiff/internals.md)
+- [Installation](./autodiff/installation.md)
+- [How to debug](./autodiff/debugging.md)
+- [Autodiff flags](./autodiff/flags.md)
+- [Current limitations](./autodiff/limitations.md)
 # Source Code Representation
 - [Prologue](./part-3-intro.md)
 - [Syntax and the AST](./syntax-intro.md)
-- [Lexing and Parsing](./the-parser.md)
+- [Lexing and parsing](./the-parser.md)
 - [Macro expansion](./macro-expansion.md)
 - [Name resolution](./name-resolution.md)
 - [Attributes](./attributes.md)
-- [`#[test]` Implementation](./test-implementation.md)
-- [Panic Implementation](./panic-implementation.md)
-- [AST Validation](./ast-validation.md)
-- [Feature Gate Checking](./feature-gate-ck.md)
+- [`#[test]` implementation](./test-implementation.md)
+- [Panic implementation](./panic-implementation.md)
+- [AST validation](./ast-validation.md)
+- [Feature gate checking](./feature-gate-ck.md)
 - [Lang Items](./lang-items.md)
 - [The HIR (High-level IR)](./hir.md)
 - [Lowering AST to HIR](./ast-lowering.md)
@@ -124,7 +135,8 @@
 - [rustc_driver and rustc_interface](./rustc-driver/intro.md)
 - [Example: Type checking](./rustc-driver/interacting-with-the-ast.md)
 - [Example: Getting diagnostics](./rustc-driver/getting-diagnostics.md)
-- [Errors and Lints](diagnostics.md)
+- [Remarks on perma-unstable features](./rustc-driver/remarks-on-perma-unstable-features.md)
+- [Errors and lints](diagnostics.md)
 - [Diagnostic and subdiagnostic structs](./diagnostics/diagnostic-structs.md)
 - [Translation](./diagnostics/translation.md)
 - [`LintStore`](./diagnostics/lintstore.md)
@@ -144,10 +156,7 @@
 - [ADTs and Generic Arguments](./ty_module/generic_arguments.md)
 - [Parameter types/consts/regions](./ty_module/param_ty_const_regions.md)
 - [`TypeFolder` and `TypeFoldable`](./ty-fold.md)
-- [Parameter Environments](./param_env/param_env_summary.md)
-- [What is it?](./param_env/param_env_what_is_it.md)
-- [How are `ParamEnv`'s constructed internally](./param_env/param_env_construction_internals.md)
-- [Which `ParamEnv` do I use?](./param_env/param_env_acquisition.md)
+- [Typing/Param Envs](./typing_parameter_envs.md)
 - [Type inference](./type-inference.md)
 - [Trait solving](./traits/resolution.md)
 - [Higher-ranked trait bounds](./traits/hrtb.md)
@@ -173,14 +182,14 @@
 - [Type checking](./type-checking.md)
 - [Method Lookup](./method-lookup.md)
 - [Variance](./variance.md)
-- [Coherence Checking](./coherence.md)
-- [Opaque Types](./opaque-types-type-alias-impl-trait.md)
+- [Coherence checking](./coherence.md)
+- [Opaque types](./opaque-types-type-alias-impl-trait.md)
 - [Inference details](./opaque-types-impl-trait-inference.md)
 - [Return Position Impl Trait In Trait](./return-position-impl-trait-in-trait.md)
 - [Region inference restrictions][opaque-infer]
-- [Effect checking](./effects.md)
+- [Const condition checking](./effects.md)
 - [Pattern and Exhaustiveness Checking](./pat-exhaustive-checking.md)
-- [Unsafety Checking](./unsafety-checking.md)
+- [Unsafety checking](./unsafety-checking.md)
 - [MIR dataflow](./mir/dataflow.md)
 - [Drop elaboration](./mir/drop-elaboration.md)
 - [The borrow checker](./borrow_check.md)

View File

@@ -3,33 +3,41 @@
 This guide is meant to help document how rustc the Rust compiler works,
 as well as to help new contributors get involved in rustc development.
-There are seven parts to this guide:
+There are several parts to this guide:
-1. [Building `rustc`][p1]:
+1. [Building and debugging `rustc`][p1]:
    Contains information that should be useful no matter how you are contributing,
    about building, debugging, profiling, etc.
-2. [Contributing to `rustc`][p2]:
+1. [Contributing to Rust][p2]:
    Contains information that should be useful no matter how you are contributing,
    about procedures for contribution, using git and Github, stabilizing features, etc.
-3. [High-Level Compiler Architecture][p3]:
+1. [Bootstrapping][p3]:
+   Describes how the Rust compiler builds itself using previous versions, including
+   an introduction to the bootstrap process and debugging methods.
+1. [High-level Compiler Architecture][p4]:
    Discusses the high-level architecture of the compiler and stages of the compile process.
-4. [Source Code Representation][p4]:
+1. [Source Code Representation][p5]:
    Describes the process of taking raw source code from the user
    and transforming it into various forms that the compiler can work with easily.
-5. [Analysis][p5]:
-   discusses the analyses that the compiler uses to check various properties of the code
+1. [Supporting Infrastructure][p6]:
+   Covers command-line argument conventions, compiler entry points like rustc_driver and
+   rustc_interface, and the design and implementation of errors and lints.
+1. [Analysis][p7]:
+   Discusses the analyses that the compiler uses to check various properties of the code
    and inform later stages of the compile process (e.g., type checking).
-6. [From MIR to Binaries][p6]: How linked executable machine code is generated.
-7. [Appendices][p7] at the end with useful reference information.
+1. [MIR to Binaries][p8]: How linked executable machine code is generated.
+1. [Appendices][p9] at the end with useful reference information.
    There are a few of these with different information, including a glossary.
 [p1]: ./building/how-to-build-and-run.html
 [p2]: ./contributing.md
-[p3]: ./part-2-intro.md
-[p4]: ./part-3-intro.md
-[p5]: ./part-4-intro.md
-[p6]: ./part-5-intro.md
-[p7]: ./appendix/background.md
+[p3]: ./building/bootstrapping/intro.md
+[p4]: ./part-2-intro.md
+[p5]: ./part-3-intro.md
+[p6]: ./cli.md
+[p7]: ./part-4-intro.md
+[p8]: ./part-5-intro.md
+[p9]: ./appendix/background.md
 ### Constant change

View File

@@ -40,5 +40,5 @@ Item | Kind | Short description | Chapter |
 [Emitting Diagnostics]: ../diagnostics.html
 [Macro expansion]: ../macro-expansion.html
 [Name resolution]: ../name-resolution.html
-[Parameter Environment]: ../param_env/param_env_summary.html
+[Parameter Environment]: ../typing_parameter_envs.html
 [Trait Solving: Goals and Clauses]: ../traits/goals-and-clauses.html#domain-goals

View File

@@ -31,7 +31,6 @@ Term | Meaning
 <span id="generics">generics</span> | The list of generic parameters defined on an item. There are three kinds of generic parameters: Type, lifetime and const parameters.
 <span id="hir">HIR</span> | The _high-level [IR](#ir)_, created by lowering and desugaring the AST. ([see more](../hir.md))
 <span id="hir-id">`HirId`</span> | Identifies a particular node in the HIR by combining a def-id with an "intra-definition offset". See [the HIR chapter for more](../hir.md#identifiers-in-the-hir).
-<span id="hir-map">HIR map</span> | The HIR map, accessible via `tcx.hir()`, allows you to quickly navigate the HIR and convert between various forms of identifiers.
 <span id="ice">ICE</span> | Short for _internal compiler error_, this is when the compiler crashes.
 <span id="ich">ICH</span> | Short for _incremental compilation hash_, these are used as fingerprints for things such as HIR and crate metadata, to check if changes have been made. This is useful in incremental compilation to see if part of a crate has changed and should be recompiled.
 <span id="infcx">`infcx`</span> | The type inference context (`InferCtxt`). (see `rustc_middle::infer`)


@ -1,4 +1,4 @@
# AST Validation # AST validation
_AST validation_ is a separate AST pass that visits each _AST validation_ is a separate AST pass that visits each
item in the tree and performs simple checks. This pass item in the tree and performs simple checks. This pass

113
src/autodiff/debugging.md Normal file

@ -0,0 +1,113 @@
# Reporting backend crashes
If, after a compilation failure, you are greeted by a large amount of LLVM-IR code, then our Enzyme backend likely failed to compile your code. These cases are harder to debug, so your help is highly appreciated. Please also keep in mind that release builds are usually much more likely to work at the moment.
The final goal here is to reproduce your bug in the Enzyme [compiler explorer](https://enzyme.mit.edu/explorer/), in order to create a bug report in the [Enzyme](https://github.com/enzymead/enzyme/issues) repository.
We have an `autodiff` flag which you can pass via `RUSTFLAGS` to help with this. It will print the whole LLVM-IR module, along with some `__enzyme_fwddiff` or `__enzyme_autodiff` calls. A potential workflow on Linux could look like:
## Controlling LLVM-IR generation
Before generating the LLVM-IR, keep in mind two techniques that can help ensure the relevant Rust code is visible for debugging:
- **`std::hint::black_box`**: wrap Rust variables or expressions in `std::hint::black_box()` to prevent Rust and LLVM from optimizing them away. This is useful when you need to inspect or manually manipulate specific values in the LLVM-IR.
- **`extern "Rust"` or `extern "C"`**: if you want to see how a specific function declaration is lowered to LLVM-IR, you can declare it as `extern "Rust"` or `extern "C"`. You can also look for existing `__enzyme_autodiff` or similar declarations within the generated module for examples.
## 1) Generate an LLVM-IR reproducer
```sh
RUSTFLAGS="-Z autodiff=Enable,PrintModBefore" cargo +enzyme build --release &> out.ll
```
This also captures a few warnings and info messages above and below your module. Open `out.ll` and remove every line above `; ModuleID = <somehash>`. Now look at the end of the file and remove everything that's not part of the LLVM-IR, i.e. remove errors and warnings. The last line of your LLVM-IR should now start with `!<somenumber> = `, e.g. `!40831 = !{i32 0, i32 1037508, i32 1037538, i32 1037559}` or `!43760 = !DILocation(line: 297, column: 5, scope: !43746)`.
The actual numbers will depend on your code.
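Removing the leading noise can be partially automated. As a hedged sketch (assuming a POSIX shell and that the module really starts at the `; ModuleID` line), `sed` can strip everything above it:

```shell
# Keep everything from the "; ModuleID" line to the end of the file.
# Trailing warnings/errors after the IR still need to be removed by hand.
sed -n '/^; ModuleID/,$p' out.ll > trimmed.ll
```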
## 2) Check your LLVM-IR reproducer
To confirm that your previous step worked, we will use LLVM's `opt` tool. Find the path to your `opt` binary, which will be similar to `<some_dir>/rust/build/<x86/arm/...-target-triple>/build/bin/opt`. Also find the `LLVMEnzyme-19.<so/dll/dylib>` path, similar to `/rust/build/<target-triple>/enzyme/build/Enzyme/LLVMEnzyme-19`. Please keep in mind that Rust frequently updates its LLVM backend, so the version number might be higher (20, 21, ...). Once you have both, run the following command:
```sh
<path/to/opt> out.ll -load-pass-plugin=/path/to/LLVMEnzyme-19.so -passes="enzyme" -S
```
If the previous step succeeded, you are going to see the same error that you saw when compiling your Rust code with cargo.
If you fail to get the same error, please open an issue in the Rust repository. If you succeed, congrats! The file is still huge, so let's automatically minimize it.
## 3) Minimize your LLVM-IR reproducer
First, find your `llvm-extract` binary; it is in the same folder as your `opt` binary. Then run:
```sh
<path/to/llvm-extract> -S --func=<name> --recursive --rfunc="enzyme_autodiff*" --rfunc="enzyme_fwddiff*" --rfunc=<fnc_called_by_enzyme> out.ll -o mwe.ll
```
This command creates `mwe.ll`, a minimal working example.
Please adjust the name passed with the last `--func` flag. You can either apply the `#[no_mangle]` attribute to the function you differentiate, in which case you can use the Rust name directly. Otherwise you will need to look up the mangled function name. To do that, open `out.ll` and search for `__enzyme_fwddiff` or `__enzyme_autodiff`. The first string in that function call is the name of your function. Example:
```llvm-ir
define double @enzyme_opt_helper_0(ptr %0, i64 %1, double %2) {
%4 = call double (...) @__enzyme_fwddiff(ptr @_ZN2ad3_f217h3b3b1800bd39fde3e, metadata !"enzyme_const", ptr %0, metadata !"enzyme_const", i64 %1, metadata !"enzyme_dup", double %2, double %2)
ret double %4
}
```
Here, `_ZN2ad3_f217h3b3b1800bd39fde3e` is the correct name. Make sure not to copy the leading `@`. Redo step 2) by running the `opt` command again, but this time passing `mwe.ll` as the input file instead of `out.ll`. Check if this minimized example still reproduces the crash.
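Instead of locating the mangled name by hand, a small `grep` sketch (an untested convenience, assuming the call lines look like the example above) can list the candidates:

```shell
# Print each __enzyme_* call up to and including the first function reference.
# Remember to strip the leading "@" before passing the name to llvm-extract.
grep -oE '__enzyme_(autodiff|fwddiff)\(ptr @[A-Za-z0-9_.$]+' out.ll
```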
## 4) (Optional) Minimize your LLVM-IR reproducer further.
After the previous step you should have an `mwe.ll` file with ~5k lines of code. Let's try to get it down to ~50. Find your `llvm-reduce` binary next to `opt` and `llvm-extract`. Copy the first line of your error message; an example could be:
```sh
opt: /home/manuel/prog/rust/src/llvm-project/llvm/lib/IR/Instructions.cpp:686: void llvm::CallInst::init(llvm::FunctionType*, llvm::Value*, llvm::ArrayRef<llvm::Value*>, llvm::ArrayRef<llvm::OperandBundleDefT<llvm::Value*> >, const llvm::Twine&): Assertion `(Args.size() == FTy->getNumParams() || (FTy->isVarArg() && Args.size() > FTy->getNumParams())) && "Calling a function with bad signature!"' failed.
```
If you just get a `segfault` there is no sensible error message and not much to do automatically, so continue to 5).
Otherwise, create a `script.sh` file containing
```sh
#!/bin/bash
<path/to/your/opt> "$1" -load-pass-plugin=/path/to/LLVMEnzyme-19.so -passes="enzyme" \
  |& grep "/some/path.cpp:686: void llvm::CallInst::init"
```
Experiment a bit with which error message you pass to grep. It should be long enough to make sure that the error is unique. However, for longer errors including `(` or `)` you will need to escape them correctly, which can become annoying. Run
```sh
<path/to/llvm-reduce> --test=script.sh mwe.ll
```
If you see `Input isn't interesting! Verify interesting-ness test`, you got the error message in `script.sh` wrong; you need to make sure that grep matches your actual error. If all works out, you will see a lot of iterations, ending with a new `reduced.ll` file. Verify with `opt` that you still get the same error.
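If the assertion text contains parentheses, one way to avoid regex escaping entirely (a sketch, not part of the original workflow) is to match a fixed string with `grep -F`:

```shell
# -F treats the pattern as a literal string, so "(" and ")" need no escaping.
echo 'Assertion `(Args.size() == FTy->getNumParams())` failed.' \
  | grep -F '(Args.size() == FTy->getNumParams())'
```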
### Advanced debugging: manual LLVM-IR investigation
Once you have a minimized reproducer (`mwe.ll` or `reduced.ll`), you can delve deeper:
- **Manual editing:** try manually rewriting the LLVM-IR. For certain issues, like those involving indirect calls, you might investigate Enzyme-specific intrinsics like `__enzyme_virtualreverse`. Understanding how to use these might require consulting Enzyme's documentation or source code.
- **Enzyme test cases:** look for relevant test cases within the [Enzyme repository](https://github.com/enzymead/enzyme/tree/main/enzyme/test) that might demonstrate the correct usage of features or intrinsics related to your problem.
## 5) Report your bug.
Afterwards, you should be able to copy and paste your `mwe.ll` (or `reduced.ll`) example into our [compiler explorer](https://enzyme.mit.edu/explorer/).
- Select `LLVM IR` as the language and `opt 20` as the compiler.
- Replace the field to the right of your compiler with `-passes="enzyme"`, if it is not already set.
- Hopefully, you will once again see your now-familiar error.
- Please use the share button to copy links to them.
- Please create an issue on [https://github.com/enzymead/enzyme/issues](https://github.com/enzymead/enzyme/issues) and share `mwe.ll` and (if you have it) `reduced.ll`, as well as links to the compiler explorer. Please feel free to also add your Rust code or a link to it.
#### Documenting findings
Some Enzyme errors, like `"attempting to call an indirect active function whose runtime value is inactive"`, have historically caused confusion. If you investigate such an issue, even if you don't find a complete solution, please consider documenting your findings. If the insights are general to Enzyme and not specific to its Rust usage, contributing them to the main [Enzyme documentation](https://github.com/enzymead/www) is often the best first step. You can also mention your findings in the relevant Enzyme GitHub issue or propose updates to these docs if appropriate. This helps prevent others from starting from scratch.
With a clear reproducer and documentation, hopefully an Enzyme developer will be able to fix your bug. Once that happens, the Enzyme submodule inside the Rust compiler will be updated, which should allow you to differentiate your Rust code. Thanks for helping us to improve Rust-AD.
# Minimize Rust code
Beyond having a minimal LLVM-IR reproducer, it is also helpful to have a minimal Rust reproducer without dependencies. This allows us to add it as a test case to CI once we fix it, which avoids regressions in the future.
There are a few tools to help you minimize the Rust reproducer. The simplest automated approach is probably [cargo-minimize](https://github.com/nilstrieb/cargo-minimize).
Otherwise we have various alternatives, including [`treereduce`](https://github.com/langston-barrett/treereduce), [`halfempty`](https://github.com/googleprojectzero/halfempty), or [`picireny`](https://github.com/renatahodovan/picireny), potentially also [`creduce`](https://github.com/csmith-project/creduce).

42
src/autodiff/flags.md Normal file

@ -0,0 +1,42 @@
# Supported `RUSTFLAGS`
To support you while debugging or profiling, we have added an experimental `-Z autodiff` rustc flag (which can be passed to cargo via `RUSTFLAGS`), which allows changing the behaviour of Enzyme without recompiling rustc. We currently support the following values for `autodiff`.
### Debug Flags
```text
PrintTA // Print TypeAnalysis information
PrintAA // Print ActivityAnalysis information
Print // Print differentiated functions while they are being generated and optimized
PrintPerf // Print AD related Performance warnings
PrintModBefore // Print the whole LLVM-IR module directly before running AD
PrintModAfter // Print the whole LLVM-IR module after running AD, before optimizations
PrintModFinal // Print the whole LLVM-IR module after running optimizations and AD
LooseTypes // Risk incorrect derivatives instead of aborting when missing Type Info
```
<div class="warning">
`LooseTypes` is often helpful to get rid of Enzyme errors stating `Can not deduce type of <X>` and to be able to run some code. But please keep in mind that this flag can cause incorrect gradients. Even worse, the gradients might be correct for certain input values, but not for others. So please create issues about such bugs and only use this flag temporarily while you wait for your bug to be fixed.
</div>
### Benchmark flags
For performance experiments and benchmarking we also support
```text
NoPostopt // We won't optimize the LLVM-IR Module after AD
RuntimeActivity // Enables the runtime activity feature from Enzyme
Inline // Instructs Enzyme to maximize inlining as far as possible, beyond LLVM's default
```
You can combine multiple `autodiff` values using a comma as separator:
```bash
RUSTFLAGS="-Z autodiff=Enable,LooseTypes,PrintPerf" cargo +enzyme build
```
Using `-Zautodiff=Enable` will allow using autodiff and update your normal rustc compilation pipeline:
1. Run your selected compilation pipeline. If you selected a release build, we will disable vectorization and loop unrolling.
2. Differentiate your functions.
3. Run your selected compilation pipeline again on the whole module. This time we do not disable vectorization or loop unrolling.


@ -0,0 +1,86 @@
# Installation
In the near future, `std::autodiff` should become available in nightly builds for users. As a contributor, however, you will still need to build rustc from source. Please be aware that the MSVC target is not supported at the moment; all other tier 1 targets should work. Please open an issue if you encounter any problems on a supported tier 1 target, or if you successfully build this project on a tier 2/tier 3 target.
## Build instructions
First you need to clone and configure the Rust repository:
```bash
git clone --depth=1 git@github.com:rust-lang/rust.git
cd rust
./configure --enable-llvm-link-shared --enable-llvm-plugins --enable-llvm-enzyme --release-channel=nightly --enable-llvm-assertions --enable-clang --enable-lld --enable-option-checking --enable-ninja --disable-docs
```
Afterwards you can build rustc using:
```bash
./x.py build --stage 1 library
```
Afterwards, `rustup toolchain link` will allow you to use it through cargo:
```
rustup toolchain link enzyme build/host/stage1
rustup toolchain install nightly # enables -Z unstable-options
```
You can then run our test cases:
```bash
./x.py test --stage 1 library tests/ui/autodiff
./x.py test --stage 1 library tests/codegen/autodiff
./x.py test --stage 1 library tests/pretty/autodiff*
```
Autodiff is still experimental, so if you want to use it in your own projects, you will need to add `lto="fat"` to your Cargo.toml
and use `RUSTFLAGS="-Zautodiff=Enable" cargo +enzyme` instead of `cargo` or `cargo +nightly`.
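For reference, a minimal `Cargo.toml` sketch of that LTO setting (assuming the standard release profile) could look like:

```toml
[profile.release]
lto = "fat"
```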
## Compiler Explorer and dist builds
Our compiler explorer instance can be updated to a newer rustc in a similar way. First, prepare a Docker container.
```bash
docker run -it ubuntu:22.04
export CC=clang CXX=clang++
apt update
apt install wget vim python3 git curl libssl-dev pkg-config lld ninja-build cmake clang build-essential
```
Then build rustc in a slightly altered way:
```bash
git clone --depth=1 https://github.com/EnzymeAD/rust.git
cd rust
./configure --enable-llvm-link-shared --enable-llvm-plugins --enable-llvm-enzyme --release-channel=nightly --enable-llvm-assertions --enable-clang --enable-lld --enable-option-checking --enable-ninja --disable-docs
./x dist
```
We then copy the tarball to our host. The dockerid is the newest entry under `docker ps -a`.
```bash
docker cp <dockerid>:/rust/build/dist/rust-nightly-x86_64-unknown-linux-gnu.tar.gz rust-nightly-x86_64-unknown-linux-gnu.tar.gz
```
Afterwards we can create a new (pre-release) tag on the EnzymeAD/rust repository and make a PR against the EnzymeAD/enzyme-explorer repository to update the tag.
Remember to ping `tgymnich` on the PR to run his update script.
## Build instruction for Enzyme itself
Following the Rust build instruction above will build LLVMEnzyme, LLDEnzyme, and ClangEnzyme along with the Rust compiler.
We recommend that approach if you just want to use any of them and have no experience with CMake.
However, if you prefer to just build Enzyme without Rust, then these instructions might help.
```bash
git clone --depth=1 git@github.com:llvm/llvm-project.git
cd llvm-project
mkdir build
cd build
cmake -G Ninja ../llvm -DLLVM_TARGETS_TO_BUILD="host" -DLLVM_ENABLE_ASSERTIONS=ON -DLLVM_ENABLE_PROJECTS="clang;lld" -DLLVM_ENABLE_RUNTIMES="openmp" -DLLVM_ENABLE_PLUGINS=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=.
ninja
ninja install
```
This gives you a working LLVM build; now we can continue with building Enzyme.
Leave the `llvm-project` folder, and execute the following commands:
```bash
git clone git@github.com:EnzymeAD/Enzyme.git
cd Enzyme/enzyme
mkdir build
cd build
cmake .. -G Ninja -DLLVM_DIR=<YourLocalPath>/llvm-project/build/lib/cmake/llvm/ -DLLVM_EXTERNAL_LIT=<YourLocalPath>/llvm-project/llvm/utils/lit/lit.py -DCMAKE_BUILD_TYPE=Release -DCMAKE_EXPORT_COMPILE_COMMANDS=YES -DBUILD_SHARED_LIBS=ON
ninja
```
This will build Enzyme, and you can find it in `Enzyme/enzyme/build/lib/<LLD/Clang/LLVM>Enzyme.so`. (Endings might differ based on your OS).

27
src/autodiff/internals.md Normal file

@ -0,0 +1,27 @@
The `std::autodiff` module in Rust allows differentiable programming:
```rust
#![feature(autodiff)]
use std::autodiff::autodiff;
// f(x) = x * x, f'(x) = 2.0 * x
// bar therefore returns (x * x, 2.0 * x)
#[autodiff(bar, Reverse, Active, Active)]
fn foo(x: f32) -> f32 { x * x }
fn main() {
assert_eq!(bar(3.0, 1.0), (9.0, 6.0));
assert_eq!(bar(4.0, 1.0), (16.0, 8.0));
}
```
The detailed documentation for the `std::autodiff` module is available at [std::autodiff](https://doc.rust-lang.org/std/autodiff/index.html).
Differentiable programming is used in various fields like numerical computing, [solid mechanics][ratel], [computational chemistry][molpipx], [fluid dynamics][waterlily], Neural Network training via backpropagation, [ODE solvers][diffsol], [differentiable rendering][libigl], [quantum computing][catalyst], and climate simulations.
[ratel]: https://gitlab.com/micromorph/ratel
[molpipx]: https://arxiv.org/abs/2411.17011v
[waterlily]: https://github.com/WaterLily-jl/WaterLily.jl
[diffsol]: https://github.com/martinjrobins/diffsol
[libigl]: https://github.com/alecjacobson/libigl-enzyme-example?tab=readme-ov-file#run
[catalyst]: https://github.com/PennyLaneAI/catalyst


@ -0,0 +1,27 @@
# Current limitations
## Safety and Soundness
Enzyme currently assumes that the user passes shadow arguments (`dx`, `dy`, ...) of appropriate size. Under Reverse Mode, we additionally assume that shadow arguments are mutable. In Reverse Mode we adjust the outermost pointer or reference to be mutable. Therefore `&f32` will receive the shadow type `&mut f32`. However, we do not check lengths for types other than slices (e.g. enums, `Vec`). We also do not enforce mutability of inner references, but will warn if we recognize them. We do intend to add additional checks over time.
## ABI adjustments
In some cases, a function parameter might get lowered in a way that we currently don't handle correctly, leading to a compile time type mismatch in the `rustc_codegen_llvm` backend. Here are some [examples](https://github.com/EnzymeAD/rust/issues/105).
## Compile Times
Enzyme will often achieve excellent runtime performance, but might increase your compile time by a large factor. For Rust, we have already made significant improvements and have a list of further improvements planned - please reach out if you have time to help here.
### Type Analysis
Most of the time, Type Analysis (TA) is the reason for large (>5x) compile-time increases when using Enzyme. The bottom left part of this poster explains why we need to run Type Analysis: [Poster Link](https://c.wsmoses.com/posters/Enzyme-llvmdev.pdf).
We intend to increase the number of locations where we pass down type information based on Rust types, which in turn will reduce the number of locations where Enzyme has to run Type Analysis, and thus help compile times.
### Duplicated Optimizations
The key reason Enzyme often offers excellent performance is that Enzyme differentiates already-optimized LLVM-IR. However, we also (have to) run LLVM's optimization pipeline after differentiating, to make sure that the code which Enzyme generates is optimized properly. As a result you should see excellent runtime performance (please file an issue if not), but at the compile-time cost of running optimizations twice.
### Fat-LTO
The usage of `#[autodiff(...)]` currently requires compiling your project with Fat-LTO. We technically only need LTO if the function being differentiated calls functions in other compilation units. Therefore, other solutions are possible, but this is the simplest one to get started.


@ -38,7 +38,7 @@ which means that LLVM assertion failures can show up as compiler crashes (not
ICEs but "real" crashes) and other sorts of weird behavior. If you are ICEs but "real" crashes) and other sorts of weird behavior. If you are
encountering these, it is a good idea to try using a compiler with LLVM encountering these, it is a good idea to try using a compiler with LLVM
assertions enabled - either an "alt" nightly or a compiler you build yourself assertions enabled - either an "alt" nightly or a compiler you build yourself
by setting `[llvm] assertions=true` in your config.toml - and see whether by setting `[llvm] assertions=true` in your bootstrap.toml - and see whether
anything turns up. anything turns up.
The rustc build process builds the LLVM tools into The rustc build process builds the LLVM tools into
@ -160,7 +160,7 @@ from `./build/<host-triple>/llvm/bin/` with the LLVM IR emitted by rustc.
When investigating the implementation of LLVM itself, you should be When investigating the implementation of LLVM itself, you should be
aware of its [internal debug infrastructure][llvm-debug]. aware of its [internal debug infrastructure][llvm-debug].
This is provided in LLVM Debug builds, which you enable for rustc This is provided in LLVM Debug builds, which you enable for rustc
LLVM builds by changing this setting in the config.toml: LLVM builds by changing this setting in the bootstrap.toml:
``` ```
[llvm] [llvm]
# Indicates whether the LLVM assertions are enabled or not # Indicates whether the LLVM assertions are enabled or not


@ -110,7 +110,7 @@ See [`compute_hir_hash`] for where the hash is actually computed.
[SVH]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_data_structures/svh/struct.Svh.html [SVH]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_data_structures/svh/struct.Svh.html
[incremental compilation]: ../queries/incremental-compilation.md [incremental compilation]: ../queries/incremental-compilation.md
[`compute_hir_hash`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_ast_lowering/struct.LoweringContext.html#method.compute_hir_hash [`compute_hir_hash`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_ast_lowering/fn.compute_hir_hash.html
### Stable Crate Id ### Stable Crate Id


@ -116,14 +116,14 @@ so let's go through each in detail.
at the time of the branch, at the time of the branch,
and the remaining part is the current date. and the remaining part is the current date.
2. Apply Rust-specific patches to the llvm-project repository. 1. Apply Rust-specific patches to the llvm-project repository.
All features and bugfixes are upstream, All features and bugfixes are upstream,
but there's often some weird build-related patches but there's often some weird build-related patches
that don't make sense to upstream. that don't make sense to upstream.
These patches are typically the latest patches in the These patches are typically the latest patches in the
rust-lang/llvm-project branch that rustc is currently using. rust-lang/llvm-project branch that rustc is currently using.
3. Build the new LLVM in the `rust` repository. 1. Build the new LLVM in the `rust` repository.
To do this, To do this,
you'll want to update the `src/llvm-project` repository to your branch, you'll want to update the `src/llvm-project` repository to your branch,
and the revision you've created. and the revision you've created.
@ -144,14 +144,14 @@ so let's go through each in detail.
Note that `profile = "compiler"` and other defaults set by `./x setup` Note that `profile = "compiler"` and other defaults set by `./x setup`
download LLVM from CI instead of building it from source. download LLVM from CI instead of building it from source.
You should disable this temporarily to make sure your changes are being used. You should disable this temporarily to make sure your changes are being used.
This is done by having the following setting in `config.toml`: This is done by having the following setting in `bootstrap.toml`:
```toml ```toml
[llvm] [llvm]
download-ci-llvm = false download-ci-llvm = false
``` ```
4. Test for regressions across other platforms. LLVM often has at least one bug 1. Test for regressions across other platforms. LLVM often has at least one bug
for non-tier-1 architectures, so it's good to do some more testing before for non-tier-1 architectures, so it's good to do some more testing before
sending this to bors! If you're low on resources you can send the PR as-is sending this to bors! If you're low on resources you can send the PR as-is
now to bors, though, and it'll get tested anyway. now to bors, though, and it'll get tested anyway.
@ -170,22 +170,17 @@ so let's go through each in detail.
* `./src/ci/docker/run.sh dist-various-2` * `./src/ci/docker/run.sh dist-various-2`
* `./src/ci/docker/run.sh armhf-gnu` * `./src/ci/docker/run.sh armhf-gnu`
5. Prepare a PR to `rust-lang/rust`. Work with maintainers of 1. Prepare a PR to `rust-lang/rust`. Work with maintainers of
`rust-lang/llvm-project` to get your commit in a branch of that repository, `rust-lang/llvm-project` to get your commit in a branch of that repository,
and then you can send a PR to `rust-lang/rust`. You'll change at least and then you can send a PR to `rust-lang/rust`. You'll change at least
`src/llvm-project` and will likely also change [`llvm-wrapper`] as well. `src/llvm-project` and will likely also change [`llvm-wrapper`] as well.
<!-- date-check: Sep 2024 --> <!-- date-check: mar 2025 -->
> For prior art, here are some previous LLVM updates: > For prior art, here are some previous LLVM updates:
> - [LLVM 11](https://github.com/rust-lang/rust/pull/73526)
> - [LLVM 12](https://github.com/rust-lang/rust/pull/81451)
> - [LLVM 13](https://github.com/rust-lang/rust/pull/87570)
> - [LLVM 14](https://github.com/rust-lang/rust/pull/93577)
> - [LLVM 15](https://github.com/rust-lang/rust/pull/99464)
> - [LLVM 16](https://github.com/rust-lang/rust/pull/109474)
> - [LLVM 17](https://github.com/rust-lang/rust/pull/115959) > - [LLVM 17](https://github.com/rust-lang/rust/pull/115959)
> - [LLVM 18](https://github.com/rust-lang/rust/pull/120055) > - [LLVM 18](https://github.com/rust-lang/rust/pull/120055)
> - [LLVM 19](https://github.com/rust-lang/rust/pull/127513) > - [LLVM 19](https://github.com/rust-lang/rust/pull/127513)
> - [LLVM 20](https://github.com/rust-lang/rust/pull/135763)
Note that sometimes it's easiest to land [`llvm-wrapper`] compatibility as a PR Note that sometimes it's easiest to land [`llvm-wrapper`] compatibility as a PR
before actually updating `src/llvm-project`. before actually updating `src/llvm-project`.
@ -194,7 +189,7 @@ so let's go through each in detail.
others interested in trying out the new LLVM can benefit from work you've done others interested in trying out the new LLVM can benefit from work you've done
to update the C++ bindings. to update the C++ bindings.
3. Over the next few months, 1. Over the next few months,
LLVM will continually push commits to its `release/a.b` branch. LLVM will continually push commits to its `release/a.b` branch.
We will often want to have those bug fixes as well. We will often want to have those bug fixes as well.
The merge process for that is to use `git merge` itself to merge LLVM's The merge process for that is to use `git merge` itself to merge LLVM's
@ -202,9 +197,9 @@ so let's go through each in detail.
This is typically This is typically
done multiple times when necessary while LLVM's release branch is baking. done multiple times when necessary while LLVM's release branch is baking.
4. LLVM then announces the release of version `a.b`. 1. LLVM then announces the release of version `a.b`.
5. After LLVM's official release, 1. After LLVM's official release,
we follow the process of creating a new branch on the we follow the process of creating a new branch on the
rust-lang/llvm-project repository again, rust-lang/llvm-project repository again,
this time with a new date. this time with a new date.


@ -1,4 +1,4 @@
# Procedures for Breaking Changes # Procedures for breaking changes
<!-- toc --> <!-- toc -->


@ -0,0 +1,53 @@
# `cfg(bootstrap)` in compiler dependencies
The Rust compiler uses some external crates that can run into cyclic dependencies with the compiler itself: the compiler needs an updated crate to build, but the crate needs an updated compiler. This page describes how `#[cfg(bootstrap)]` can be used to break this cycle.
## Enabling `#[cfg(bootstrap)]`
Usually the use of `#[cfg(bootstrap)]` in an external crate causes a warning:
```
warning: unexpected `cfg` condition name: `bootstrap`
--> src/main.rs:1:7
|
1 | #[cfg(bootstrap)]
| ^^^^^^^^^
|
= help: expected names are: `docsrs`, `feature`, and `test` and 31 more
= help: consider using a Cargo feature instead
= help: or consider adding in `Cargo.toml` the `check-cfg` lint config for the lint:
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(bootstrap)'] }
= help: or consider adding `println!("cargo::rustc-check-cfg=cfg(bootstrap)");` to the top of the `build.rs`
= note: see <https://doc.rust-lang.org/nightly/rustc/check-cfg/cargo-specifics.html> for more information about checking conditional configuration
= note: `#[warn(unexpected_cfgs)]` on by default
```
This warning can be silenced by adding these lines to the project's `Cargo.toml`:
```toml
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(bootstrap)'] }
```
Now `#[cfg(bootstrap)]` can be used in the crate just like it can be in the compiler: when the bootstrap compiler is used, code annotated with `#[cfg(bootstrap)]` is compiled, otherwise code annotated with `#[cfg(not(bootstrap))]` is compiled.
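As a minimal sketch (`stage_marker` is a hypothetical function name, not part of any real crate), an item can then be provided for both stages; a compiler that does not set the `bootstrap` cfg compiles the `not(bootstrap)` branch:

```rust
// During stage 0 the bootstrap compiler sets `bootstrap`, so the first
// definition is compiled; in later stages the second one is used instead.
#[cfg(bootstrap)]
fn stage_marker() -> &'static str {
    "built with the bootstrap compiler"
}

#[cfg(not(bootstrap))]
fn stage_marker() -> &'static str {
    "built with the in-tree compiler"
}

fn main() {
    println!("{}", stage_marker());
}
```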
## The update dance
As a concrete example we'll use a change where the `#[naked]` attribute was made into an unsafe attribute, which caused a cyclic dependency with the `compiler-builtins` crate.
### Step 1: accept the new behavior in the compiler ([#139797](https://github.com/rust-lang/rust/pull/139797))
In this example it is possible to accept both the old and new behavior at the same time by disabling an error.
### Step 2: update the crate ([#821](https://github.com/rust-lang/compiler-builtins/pull/821))
Now in the crate, use `#[cfg(bootstrap)]` to use the old behavior, or `#[cfg(not(bootstrap))]` to use the new behavior.
### Step 3: update the crate version used by the compiler ([#139934](https://github.com/rust-lang/rust/pull/139934))
For `compiler-builtins` this meant a version bump, in other cases it may be a git submodule update.
### Step 4: remove the old behavior from the compiler ([#139753](https://github.com/rust-lang/rust/pull/139753))
The updated crate can now be used. In this example that meant that the old behavior could be removed.


@@ -129,7 +129,7 @@ Both `tracing::*` macros and the `tracing::instrument` proc-macro attribute need
 ```rs
 #[cfg(feature = "tracing")]
-use tracing::{instrument, trace};
+use tracing::instrument;

 struct Foo;
@@ -138,7 +138,6 @@ impl Step for Foo {
     #[cfg_attr(feature = "tracing", instrument(level = "trace", name = "Foo::should_run", skip_all))]
     fn should_run(run: ShouldRun<'_>) -> ShouldRun<'_> {
-        #[cfg(feature = "tracing")]
         trace!(?run, "entered Foo::should_run");

         todo!()
@@ -154,7 +153,6 @@ impl Step for Foo {
         ),
     )]
     fn run(self, builder: &Builder<'_>) -> Self::Output {
-        #[cfg(feature = "tracing")]
         trace!(?run, "entered Foo::run");

         todo!()

View File

@@ -36,7 +36,7 @@ like the standard library (std) or the compiler (rustc).
 - Document internal rustc items

 Compiler documentation is not built by default.
-To create it by default with `x doc`, modify `config.toml`:
+To create it by default with `x doc`, modify `bootstrap.toml`:

 ```toml
 [build]

View File

@@ -63,7 +63,7 @@ cd rust
 > **NOTE**: A shallow clone limits which `git` commands can be run.
 > If you intend to work on and contribute to the compiler, it is
 > generally recommended to fully clone the repository [as shown above](#get-the-source-code),
-> or to perform a [partial clone](#shallow-clone-the-repository) instead.
+> or to perform a [partial clone](#partial-clone-the-repository) instead.
 >
 > For example, `git bisect` and `git blame` require access to the commit history,
 > so they don't work if the repository was cloned with `--depth 1`.
@@ -159,15 +159,15 @@ similar to the one declared in section [What is `x.py`](#what-is-xpy), but
 it works as an independent process to execute the `x.py` rather than calling the
 shell to run the platform related scripts.

-## Create a `config.toml`
+## Create a `bootstrap.toml`

 To start, run `./x setup` and select the `compiler` defaults. This will do some initialization
-and create a `config.toml` for you with reasonable defaults. If you use a different default (which
+and create a `bootstrap.toml` for you with reasonable defaults. If you use a different default (which
 you'll likely want to do if you want to contribute to an area of rust other than the compiler, such
 as rustdoc), make sure to read information about that default (located in `src/bootstrap/defaults`)
 as the build process may be different for other defaults.

-Alternatively, you can write `config.toml` by hand. See `config.example.toml` for all the available
+Alternatively, you can write `bootstrap.toml` by hand. See `bootstrap.example.toml` for all the available
 settings and explanations of them. See `src/bootstrap/defaults` for common settings to change.

 If you have already built `rustc` and you change settings related to LLVM, then you may have to
@@ -206,7 +206,7 @@ See the chapters on
 Note that building will require a relatively large amount of storage space.
 You may want to have upwards of 10 or 15 gigabytes available to build the compiler.

-Once you've created a `config.toml`, you are now ready to run
+Once you've created a `bootstrap.toml`, you are now ready to run
 `x`. There are a lot of options here, but let's start with what is
 probably the best "go to" command for building a local compiler:
@@ -326,7 +326,7 @@ involve proc macros or build scripts, you must be sure to explicitly build targe
 host platform (in this case, `x86_64-unknown-linux-gnu`).

 If you want to always build for other targets without needing to pass flags to `x build`,
-you can configure this in the `[build]` section of your `config.toml` like so:
+you can configure this in the `[build]` section of your `bootstrap.toml` like so:

 ```toml
 [build]
@@ -336,8 +336,8 @@ target = ["x86_64-unknown-linux-gnu", "wasm32-wasip1"]
 Note that building for some targets requires having external dependencies installed
 (e.g. building musl targets requires a local copy of musl).
 Any target-specific configuration (e.g. the path to a local copy of musl)
-will need to be provided by your `config.toml`.
-Please see `config.example.toml` for information on target-specific configuration keys.
+will need to be provided by your `bootstrap.toml`.
+Please see `bootstrap.example.toml` for information on target-specific configuration keys.

 For examples of the complete configuration necessary to build a target, please visit
 [the rustc book](https://doc.rust-lang.org/rustc/platform-support.html),

View File

@@ -4,12 +4,11 @@ These are a set of steps to add support for a new target. There are
 numerous end states and paths to get there, so not all sections may be
 relevant to your desired goal.

-See also the associated documentation in the
-[target tier policy][target_tier_policy_add].
+See also the associated documentation in the [target tier policy].

 <!-- toc -->

-[target_tier_policy_add]: https://doc.rust-lang.org/rustc/target-tier-policy.html#adding-a-new-target
+[target tier policy]: https://doc.rust-lang.org/rustc/target-tier-policy.html#adding-a-new-target

 ## Specifying a new LLVM
@@ -38,7 +37,7 @@ able to configure Rust to treat your build as the system LLVM to avoid
 redundant builds.

 You can tell Rust to use a pre-built version of LLVM using the `target` section
-of `config.toml`:
+of `bootstrap.toml`:

 ```toml
 [target.x86_64-unknown-linux-gnu]
@@ -56,8 +55,8 @@ for codegen tests. This tool is normally built with LLVM, but if you use your
 own preinstalled LLVM, you will need to provide `FileCheck` in some other way.
 On Debian-based systems, you can install the `llvm-N-tools` package (where `N`
 is the LLVM version number, e.g. `llvm-8-tools`). Alternately, you can specify
-the path to `FileCheck` with the `llvm-filecheck` config item in `config.toml`
-or you can disable codegen test with the `codegen-tests` item in `config.toml`.
+the path to `FileCheck` with the `llvm-filecheck` config item in `bootstrap.toml`
+or you can disable codegen test with the `codegen-tests` item in `bootstrap.toml`.

 ## Creating a target specification
@@ -142,14 +141,14 @@ After this, run `cargo update -p libc` to update the lockfiles.

 Beware that if you patch to a local `path` dependency, this will enable
 warnings for that dependency. Some dependencies are not warning-free, and due
-to the `deny-warnings` setting in `config.toml`, the build may suddenly start
+to the `deny-warnings` setting in `bootstrap.toml`, the build may suddenly start
 to fail.
 To work around warnings, you may want to:
 - Modify the dependency to remove the warnings
-- Or for local development purposes, suppress the warnings by setting deny-warnings = false in config.toml.
+- Or for local development purposes, suppress the warnings by setting deny-warnings = false in bootstrap.toml.

 ```toml
-# config.toml
+# bootstrap.toml
 [rust]
 deny-warnings = false
 ```

View File

@@ -13,7 +13,7 @@ This page describes how you can use these approaches when building `rustc` yours

 Link-time optimization is a powerful compiler technique that can increase program performance. To
 enable (Thin-)LTO when building `rustc`, set the `rust.lto` config option to `"thin"`
-in `config.toml`:
+in `bootstrap.toml`:

 ```toml
 [rust]
@@ -34,7 +34,7 @@ Enabling LTO on Linux has [produced] speed-ups by up to 10%.

 Using a different memory allocator for `rustc` can provide significant performance benefits. If you
 want to enable the `jemalloc` allocator, you can set the `rust.jemalloc` option to `true`
-in `config.toml`:
+in `bootstrap.toml`:

 ```toml
 [rust]
@@ -46,7 +46,7 @@ jemalloc = true
 ## Codegen units

 Reducing the amount of codegen units per `rustc` crate can produce a faster build of the compiler.
-You can modify the number of codegen units for `rustc` and `libstd` in `config.toml` with the
+You can modify the number of codegen units for `rustc` and `libstd` in `bootstrap.toml` with the
 following options:

 ```toml
@@ -67,7 +67,7 @@ RUSTFLAGS="-C target_cpu=x86-64-v3" ./x build ...
 ```

 If you also want to compile LLVM for a specific instruction set, you can set `llvm` flags
-in `config.toml`:
+in `bootstrap.toml`:

 ```toml
 [llvm]
@@ -109,11 +109,16 @@ like Python or LLVM.

 Here is an example of how `opt-dist` can be used locally (outside of CI):

-1. Build the tool with the following command:
+1. Enable metrics in your `bootstrap.toml` file, because `opt-dist` expects it to be enabled:
+   ```toml
+   [build]
+   metrics = true
+   ```
+2. Build the tool with the following command:
    ```bash
    ./x build tools/opt-dist
    ```
-2. Run the tool with the `local` mode and provide necessary parameters:
+3. Run the tool with the `local` mode and provide necessary parameters:
    ```bash
    ./build/host/stage0-tools-bin/opt-dist local \
      --target-triple <target> \ # select target, e.g. "x86_64-unknown-linux-gnu"

View File

@@ -38,4 +38,4 @@ incremental compilation ([see here][config]). This will make compilation take
 longer (especially after a rebase), but will save a ton of space from the
 incremental caches.

-[config]: ./how-to-build-and-run.md#create-a-configtoml
+[config]: ./how-to-build-and-run.md#create-a-bootstraptoml

View File

@@ -1,4 +1,4 @@
-# Suggested Workflows
+# Suggested workflows

 The full bootstrapping process takes quite a while. Here are some suggestions to
 make your life easier.
@@ -20,6 +20,43 @@ your `.git/hooks` folder as `pre-push` (without the `.sh` extension!).
 You can also install the hook as a step of running `./x setup`!

+## Config extensions
+
+When working on different tasks, you might need to switch between different bootstrap configurations.
+Sometimes you may want to keep an old configuration for future use. But saving raw config values in
+random files and manually copying and pasting them can quickly become messy, especially if you have a
+long history of different configurations.
+
+To simplify managing multiple configurations, you can create config extensions.
+
+For example, you can create a simple config file named `cross.toml`:
+
+```toml
+[build]
+build = "x86_64-unknown-linux-gnu"
+host = ["i686-unknown-linux-gnu"]
+target = ["i686-unknown-linux-gnu"]
+
+[llvm]
+download-ci-llvm = false
+
+[target.x86_64-unknown-linux-gnu]
+llvm-config = "/path/to/llvm-19/bin/llvm-config"
+```
+
+Then, include this in your `bootstrap.toml`:
+
+```toml
+include = ["cross.toml"]
+```
+
+You can also include extensions within extensions recursively.
+
+**Note:** In the `include` field, the overriding logic follows a right-to-left order. For example,
+in `include = ["a.toml", "b.toml"]`, extension `b.toml` overrides `a.toml`. Also, parent extensions
+always override the inner ones.
+
 ## Configuring `rust-analyzer` for `rustc`

 ### Project-local rust-analyzer setup
@@ -123,6 +160,30 @@ Another way is without a plugin, and creating your own logic in your
 configuration. The following code will work for any checkout of rust-lang/rust (newer than February 2025):

 ```lua
+local function expand_config_variables(option)
+    local var_placeholders = {
+        ['${workspaceFolder}'] = function(_)
+            return vim.lsp.buf.list_workspace_folders()[1]
+        end,
+    }
+
+    if type(option) == "table" then
+        local mt = getmetatable(option)
+        local result = {}
+        for k, v in pairs(option) do
+            result[expand_config_variables(k)] = expand_config_variables(v)
+        end
+        return setmetatable(result, mt)
+    end
+    if type(option) ~= "string" then
+        return option
+    end
+    local ret = option
+    for key, fn in pairs(var_placeholders) do
+        ret = ret:gsub(key, fn)
+    end
+    return ret
+end
 lspconfig.rust_analyzer.setup {
     root_dir = function()
         local default = lspconfig.rust_analyzer.config_def.default_config.root_dir()
@@ -142,7 +203,7 @@ lspconfig.rust_analyzer.setup {
             -- load rust-lang/rust settings
             local file = io.open(config)
             local json = vim.json.decode(file:read("*a"))
-            client.config.settings["rust-analyzer"] = json.lsp["rust-analyzer"].initialization_options
+            client.config.settings["rust-analyzer"] = expand_config_variables(json.lsp["rust-analyzer"].initialization_options)
             client.notify("workspace/didChangeConfiguration", { settings = client.config.settings })
         end
         return true
@@ -305,7 +366,7 @@ subsequent rebuilds:
 ```

 If you don't want to include the flag with every command, you can enable it in
-the `config.toml`:
+the `bootstrap.toml`:

 ```toml
 [rust]
@@ -384,20 +445,20 @@ ln -s ./src/tools/nix-dev-shell/envrc-shell ./.envrc # Use nix-shell
 ### Note

 Note that when using nix on a not-NixOS distribution, it may be necessary to set
-**`patch-binaries-for-nix = true` in `config.toml`**. Bootstrap tries to detect
+**`patch-binaries-for-nix = true` in `bootstrap.toml`**. Bootstrap tries to detect
 whether it's running in nix and enable patching automatically, but this
 detection can have false negatives.

-You can also use your nix shell to manage `config.toml`:
+You can also use your nix shell to manage `bootstrap.toml`:

 ```nix
 let
   config = pkgs.writeText "rustc-config" ''
-    # Your config.toml content goes here
+    # Your bootstrap.toml content goes here
   ''

 pkgs.mkShell {
   /* ... */
-  # This environment variable tells bootstrap where our config.toml is.
+  # This environment variable tells bootstrap where our bootstrap.toml is.
   RUST_BOOTSTRAP_CONFIG = config;
 }
 ```

View File

@@ -1,4 +1,3 @@
 # Coherence

-
 > NOTE: this is based on [notes by @lcnr](https://github.com/rust-lang/rust/pull/121848)

View File

@@ -11,13 +11,13 @@ chapter](./backend/debugging.md)).
 ## Configuring the compiler

 By default, rustc is built without most debug information. To enable debug info,
-set `debug = true` in your config.toml.
+set `debug = true` in your bootstrap.toml.

 Setting `debug = true` turns on many different debug options (e.g., `debug-assertions`,
 `debug-logging`, etc.) which can be individually tweaked if you want to, but many people
 simply set `debug = true`.

-If you want to use GDB to debug rustc, please set `config.toml` with options:
+If you want to use GDB to debug rustc, please set `bootstrap.toml` with options:

 ```toml
 [rust]
@@ -35,14 +35,14 @@ debuginfo-level = 2
 The default configuration will enable `symbol-mangling-version` v0.
 This requires at least GDB v10.2,
-otherwise you need to disable new symbol-mangling-version in `config.toml`.
+otherwise you need to disable new symbol-mangling-version in `bootstrap.toml`.

 ```toml
 [rust]
 new-symbol-mangling = false
 ```

-> See the comments in `config.example.toml` for more info.
+> See the comments in `bootstrap.example.toml` for more info.

 You will need to rebuild the compiler after changing any configuration option.
@@ -301,7 +301,8 @@ Right below you can find elaborate explainers on a selected few.

 Some compiler options for debugging specific features yield graphviz graphs -
 e.g. the `#[rustc_mir(borrowck_graphviz_postflow="suffix.dot")]` attribute
-dumps various borrow-checker dataflow graphs.
+on a function dumps various borrow-checker dataflow graphs in conjunction with
+`-Zdump-mir-dataflow`.

 These all produce `.dot` files. To view these files, install graphviz (e.g.
 `apt-get install graphviz`) and then run the following commands:
@@ -373,7 +374,7 @@ error: aborting due to previous error

 ## Configuring CodeLLDB for debugging `rustc`

-If you are using VSCode, and have edited your `config.toml` to request debugging
+If you are using VSCode, and have edited your `bootstrap.toml` to request debugging
 level 1 or 2 for the parts of the code you're interested in, then you should be
 able to use the [CodeLLDB] extension in VSCode to debug it.

View File

@@ -35,7 +35,7 @@ They're the wrappers of the `const_eval` query.
 Statics are special; all other functions do not represent statics correctly
 and have thus assertions preventing their use on statics.

-The `const_eval_*` functions use a [`ParamEnv`](./param_env/param_env_summary.html) of environment
+The `const_eval_*` functions use a [`ParamEnv`](./typing_parameter_envs.html) of environment
 in which the constant is evaluated (e.g. the function within which the constant is used)
 and a [`GlobalId`]. The `GlobalId` is made up of an `Instance` referring to a constant
 or static or of an `Instance` of a function and an index into the function's `Promoted` table.

View File

@@ -1,4 +1,4 @@
-# Contribution Procedures
+# Contribution procedures

 <!-- toc -->
@@ -81,7 +81,7 @@ smaller user-facing changes.
 into a PR that ends up not getting merged!** [See this document][mcpinfo] for
 more info on MCPs.

-[mcpinfo]: https://forge.rust-lang.org/compiler/mcp.html
+[mcpinfo]: https://forge.rust-lang.org/compiler/proposals-and-stabilization.html#how-do-i-submit-an-mcp
 [zulip]: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler

 ### Performance
@@ -150,6 +150,20 @@ when contributing to Rust under [the git section](./git.md).
 [t-compiler]: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler
 [triagebot]: https://github.com/rust-lang/rust/blob/master/triagebot.toml

+### Keeping your branch up-to-date
+
+The CI in rust-lang/rust applies your patches directly against the current master,
+not against the commit your branch is based on. This can lead to unexpected failures
+if your branch is outdated, even when there are no explicit merge conflicts.
+
+Before submitting or updating a PR, make sure to update your branch
+as mentioned [here](git.md#keeping-things-up-to-date) if it's significantly
+behind the master branch (e.g., more than 100 commits behind).
+This fetches the latest master branch and rebases your changes on top of it,
+ensuring your PR is tested against the latest code.
+
+After rebasing, it's recommended to [run the relevant tests locally](tests/intro.md) to catch any issues before CI runs.
+
 ### r?

 All pull requests are reviewed by another person. We have a bot,
@@ -346,7 +360,7 @@ function in the same way as other pull requests.
 [`src/doc`]: https://github.com/rust-lang/rust/tree/master/src/doc
 [std-root]: https://github.com/rust-lang/rust/blob/master/library/std/src/lib.rs#L1

-To find documentation-related issues, sort by the [A-docs label].
+To find documentation-related issues, use the [A-docs label].

 You can find documentation style guidelines in [RFC 1574].
@@ -373,7 +387,7 @@ Just a few things to keep in mind:
   There is no strict limit on line lengths; let the sentence or part of the sentence flow to its proper end on the same line.

 - When contributing text to the guide, please contextualize the information with some time period
-  and/or a reason so that the reader knows how much to trust or mistrust the information.
+  and/or a reason so that the reader knows how much to trust the information.
   Aim to provide a reasonable amount of context, possibly including but not limited to:

   - A reason for why the data may be out of date other than "change",
@@ -387,28 +401,28 @@ Just a few things to keep in mind:
   - jan 2021
   - january 2021

-  There is a CI action (in `~/.github/workflows/date-check.yml`)
-  that generates a monthly showing those that are over 6 months old
+  There is a CI action (in `.github/workflows/date-check.yml`)
+  that generates a monthly report showing those that are over 6 months old
   ([example](https://github.com/rust-lang/rustc-dev-guide/issues/2052)).

   For the action to pick the date,
   add a special annotation before specifying the date:

   ```md
-  <!-- date-check --> Sep 2024
+  <!-- date-check --> Apr 2025
   ```

   Example:

   ```md
-  As of <!-- date-check --> Sep 2024, the foo did the bar.
+  As of <!-- date-check --> Apr 2025, the foo did the bar.
   ```

   For cases where the date should not be part of the visible rendered output,
   use the following instead:

   ```md
-  <!-- date-check: Sep 2024 -->
+  <!-- date-check: Apr 2025 -->
   ```

 - A link to a relevant WG, tracking issue, `rustc` rustdoc page, or similar, that may provide

View File

@@ -1,3 +1,5 @@
+# Coding conventions
+
 This file offers some tips on the coding conventions for rustc. This
 chapter covers [formatting](#formatting), [coding for correctness](#cc),
 [using crates from crates.io](#cio), and some tips on
@@ -5,7 +7,7 @@ chapter covers [formatting](#formatting), [coding for correctness](#cc),

 <a id="formatting"></a>

-# Formatting and the tidy script
+## Formatting and the tidy script

 rustc is moving towards the [Rust standard coding style][fmt].
@@ -20,44 +22,42 @@ Formatting is checked by the `tidy` script. It runs automatically when you do
 `./x test` and can be run in isolation with `./x fmt --check`.

 If you want to use format-on-save in your editor, the pinned version of
-`rustfmt` is built under `build/<target>/stage0/bin/rustfmt`. You'll have to
-pass the <!-- date-check: nov 2022 --> `--edition=2021` argument yourself when calling
-`rustfmt` directly.
+`rustfmt` is built under `build/<target>/stage0/bin/rustfmt`.

 [fmt]: https://github.com/rust-dev-tools/fmt-rfcs
 [`rustfmt`]:https://github.com/rust-lang/rustfmt

-## Formatting C++ code
+### Formatting C++ code

 The compiler contains some C++ code for interfacing with parts of LLVM that
 don't have a stable C API.
 When modifying that code, use this command to format it:

-```sh
-./x test tidy --extra-checks=cpp:fmt --bless
+```console
+./x test tidy --extra-checks cpp:fmt --bless
 ```

 This uses a pinned version of `clang-format`, to avoid relying on the local
 environment.

-## Formatting and linting Python code
+### Formatting and linting Python code

 The Rust repository contains quite a lot of Python code. We try to keep
-it both linted and formatted by the [ruff][ruff] tool.
+it both linted and formatted by the [ruff] tool.

 When modifying Python code, use this command to format it:

-```sh
-./x test tidy --extra-checks=py:fmt --bless
+```console
+./x test tidy --extra-checks py:fmt --bless
 ```

-and the following command to run lints:
+And, the following command to run lints:

-```sh
-./x test tidy --extra-checks=py:lint
+```console
+./x test tidy --extra-checks py:lint
 ```

-This uses a pinned version of `ruff`, to avoid relying on the local
-environment.
+These use a pinned version of `ruff`, to avoid relying on the local environment.

 [ruff]: https://github.com/astral-sh/ruff
@@ -65,7 +65,7 @@ environment.
 <!-- REUSE-IgnoreStart -->
 <!-- Prevent REUSE from interpreting the heading as a copyright notice -->
-## Copyright notice
+### Copyright notice
 <!-- REUSE-IgnoreEnd -->

 In the past, files began with a copyright and license notice. Please **omit**
@ -75,41 +75,42 @@ MIT/Apache-2.0).
All of the copyright notices should be gone by now, but if you come across one All of the copyright notices should be gone by now, but if you come across one
in the rust-lang/rust repo, feel free to open a PR to remove it. in the rust-lang/rust repo, feel free to open a PR to remove it.
## Line length ### Line length
Lines should be at most 100 characters. It's even better if you can Lines should be at most 100 characters. It's even better if you can
keep things to 80. keep things to 80.
**Ignoring the line length limit.** Sometimes in particular for Sometimes, and particularly for tests, it can be necessary to exempt yourself from this limit.
tests it can be necessary to exempt yourself from this limit. In In that case, you can add a comment towards the top of the file like so:
that case, you can add a comment towards the top of the file like so:
```rust ```rust
// ignore-tidy-linelength // ignore-tidy-linelength
``` ```
## Tabs vs spaces ### Tabs vs spaces
Prefer 4-space indent. Prefer 4-space indents.
<a id="cc"></a> <a id="cc"></a>
# Coding for correctness ## Coding for correctness
Beyond formatting, there are a few other tips that are worth Beyond formatting, there are a few other tips that are worth
following. following.
## Prefer exhaustive matches ### Prefer exhaustive matches
Using `_` in a match is convenient, but it means that when new Using `_` in a match is convenient, but it means that when new
variants are added to the enum, they may not get handled correctly. variants are added to the enum, they may not get handled correctly.
Ask yourself: if a new variant were added to this enum, what's the Ask yourself: if a new variant were added to this enum, what's the
chance that it would want to use the `_` code, versus having some chance that it would want to use the `_` code, versus having some
other treatment? Unless the answer is "low", then prefer an other treatment? Unless the answer is "low", then prefer an
exhaustive match. (The same advice applies to `if let` and `while exhaustive match.
let`, which are effectively tests for a single variant.)
## Use "TODO" comments for things you don't want to forget The same advice applies to `if let` and `while let`,
which are effectively tests for a single variant.
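As a small illustration (the `Shape` enum and names here are hypothetical, not taken from the compiler), an exhaustive match turns a forgotten variant into a compile error instead of a silent fall-through:

```rust
enum Shape {
    Circle,
    Square,
}

fn describe(shape: &Shape) -> &'static str {
    // Exhaustive: if a `Triangle` variant is added later, this match
    // stops compiling until the new variant is handled explicitly.
    // A `_ => "unknown"` arm would instead hide the new case.
    match shape {
        Shape::Circle => "round",
        Shape::Square => "angular",
    }
}

fn main() {
    assert_eq!(describe(&Shape::Circle), "round");
    assert_eq!(describe(&Shape::Square), "angular");
}
```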
### Use "TODO" comments for things you don't want to forget
As a useful tool to yourself, you can insert a `// TODO` comment As a useful tool to yourself, you can insert a `// TODO` comment
for something that you want to get back to before you land your PR: for something that you want to get back to before you land your PR:
@ -136,13 +137,13 @@ if foo {
<a id="cio"></a> <a id="cio"></a>
# Using crates from crates.io ## Using crates from crates.io
See the [crates.io dependencies][crates] section. See the [crates.io dependencies][crates] section.
<a id="er"></a> <a id="er"></a>
# How to structure your PR ## How to structure your PR
How you prepare the commits in your PR can make a big difference for the How you prepare the commits in your PR can make a big difference for the
reviewer. Here are some tips. reviewer. Here are some tips.
@ -172,7 +173,7 @@ require that every intermediate commit successfully builds we only
expect to be able to bisect at a PR level. However, if you *can* make expect to be able to bisect at a PR level. However, if you *can* make
individual commits build, that is always helpful. individual commits build, that is always helpful.
# Naming conventions ## Naming conventions
Apart from normal Rust style/naming conventions, there are also some specific Apart from normal Rust style/naming conventions, there are also some specific
to the compiler. to the compiler.
@ -1,4 +1,4 @@
# crates.io Dependencies # crates.io dependencies
The Rust compiler supports building with some dependencies from `crates.io`. The Rust compiler supports building with some dependencies from `crates.io`.
Examples are `log` and `env_logger`. Examples are `log` and `env_logger`.
@ -1,4 +1,4 @@
# Errors and Lints # Errors and lints
<!-- toc --> <!-- toc -->
@ -772,7 +772,7 @@ store.register_renamed("single_use_lifetime", "single_use_lifetimes");
[`store.register_removed`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/struct.LintStore.html#method.register_removed [`store.register_removed`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/struct.LintStore.html#method.register_removed
[`rustc_lint::register_builtins`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/fn.register_builtins.html [`rustc_lint::register_builtins`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/fn.register_builtins.html
### Lint Groups ### Lint groups
Lints can be turned on in groups. These groups are declared in the Lints can be turned on in groups. These groups are declared in the
[`register_builtins`][rbuiltins] function in [`rustc_lint::lib`][builtin]. The [`register_builtins`][rbuiltins] function in [`rustc_lint::lib`][builtin]. The
@ -954,9 +954,6 @@ application of these fields based on a variety of attributes when using
`Self="std::iter::Iterator<char>"`. This is needed because `Self` is a `Self="std::iter::Iterator<char>"`. This is needed because `Self` is a
keyword which cannot appear in attributes. keyword which cannot appear in attributes.
- `direct`: user-specified rather than derived obligation. - `direct`: user-specified rather than derived obligation.
- `from_method`: usable both as boolean (whether the flag is present, like
`crate_local`) or matching against a particular method. Currently used
for `try`.
- `from_desugaring`: usable both as boolean (whether the flag is present) - `from_desugaring`: usable both as boolean (whether the flag is present)
or matching against a particular desugaring. The desugaring is identified or matching against a particular desugaring. The desugaring is identified
with its variant name in the `DesugaringKind` enum. with its variant name in the `DesugaringKind` enum.
@ -1,66 +1,159 @@
# Effects and effect checking # Effects and const condition checking
Note: all of this describes the implementation of the unstable `effects` and ## The `HostEffect` predicate
`const_trait_impl` features. None of this implementation is usable or visible from
stable Rust.
The implementation of const traits and `~const` bounds is a limited effect system. [`HostEffectPredicate`]s are a kind of predicate from `~const Tr` or `const Tr`
It is used to allow trait bounds on `const fn` to be used within the `const fn` for bounds. It has a trait reference, and a `constness` which could be `Maybe` or
method calls. Within the function, in order to know whether a method on a trait `Const` depending on the bound. Because `~const Tr`, or rather `Maybe` bounds
bound is `const`, we need to know whether there is a `~const` bound for the trait. apply differently based on whichever contexts they are in, they have different
In order to know whether we can instantiate a `~const` bound on a `const fn`, we behavior than normal bounds. Where normal trait bounds on a function such as
need to know whether there is a `const_trait` impl for the type and trait being `T: Tr` are collected within the [`predicates_of`] query to be proven when a
used (or whether the `const fn` is used at runtime, then any type implementing the function is called and to be assumed within the function, bounds such as
trait is ok, just like with other bounds). `T: ~const Tr` will behave as a normal trait bound and add `T: Tr` to the result
from `predicates_of`, but also adds a `HostEffectPredicate` to the
[`const_conditions`] query.
We perform these checks via a const generic boolean that gets attached to all On the other hand, `T: const Tr` bounds do not change meaning across contexts,
`const fn` and `const trait`. The following sections will explain the desugarings therefore they will result in `HostEffect(T: Tr, const)` being added to
and the way we perform the checks at call sites. `predicates_of`, and not `const_conditions`.
The const generic boolean is inverted to the meaning of `const`. In the compiler [`HostEffectPredicate`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_type_ir/predicate/struct.HostEffectPredicate.html
it is called `host`, because it enables "host APIs" like `static` items, network [`predicates_of`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.predicates_of
access, disk access, random numbers and everything else that isn't available in [`const_conditions`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.const_conditions
`const` contexts. So `false` means "const", `true` means "not const" and if it's
a generic parameter, it means "maybe const" (meaning we're in a const fn or const
trait).
## `const fn` ## The `const_conditions` query
All `const fn` have a `#[rustc_host] const host: bool` generic parameter that is `predicates_of` represents a set of predicates that need to be proven to use an
hidden from users. Any `~const Trait` bounds in the generics list or `where` bounds item. For example, to use `foo` in the example below:
of a `const fn` get converted to `Trait<host> + Trait<true>` bounds. The `Trait<true>`
exists so that associated types of the generic param can be used from projections
like `<T as Trait>::Assoc`, because there are no `<T as ~const Trait>` projections for now.
## `#[const_trait] trait`s ```rust
fn foo<T>() where T: Default {}
```
The `#[const_trait]` attribute gives the marked trait a `#[rustc_host] const host: bool` We must be able to prove that `T` implements `Default`. In a similar vein,
generic parameter. All functions of the trait "inherit" this generic parameter, just like `const_conditions` represents a set of predicates that need to be proven to use
they have all the regular generic parameters of the trait. Any `~const Trait` super-trait an item *in const contexts*. If we adjust the example above to use `const` trait
bounds get desugared to `Trait<host> + Trait<true>` in order to allow using associated bounds:
types and consts of the super traits in the trait declaration. This is necessary, because
`<Self as SuperTrait>::Assoc` is always `<Self as SuperTrait<true>>::Assoc` as there is
no `<Self as ~const SuperTrait>` syntax.
## `typeck` performing method and function call checks. ```rust
const fn foo<T>() where T: ~const Default {}
```
When generic parameters are instantiated for any items, the `host` generic parameter Then `foo` would get a `HostEffect(T: Default, maybe)` in the `const_conditions`
is always instantiated as an inference variable. This is a special kind of inference var query, suggesting that in order to call `foo` from const contexts, one must
that is not part of the type or const inference variables, similar to how we have prove that `T` has a const implementation of `Default`.
special inference variables for type variables that we know to be an integer, but not
yet which one. These separate inference variables fall back to `true` at
the end of typeck (in `fallback_effects`) to ensure that `let _ = some_fn_item_name;`
will keep compiling.
All actually used (in function calls, casts, or anywhere else) function items, will ## Enforcement of `const_conditions`
have the `enforce_context_effects` method invoked.
It trivially returns if the function being called has no `host` generic parameter.
In order to error if a non-const function is called in a const context, we have not `const_conditions` are currently checked in various places.
yet disabled the const-check logic that happens on MIR, because
`enforce_context_effects` does not yet perform this check.
The function call's `host` parameter is then equated to the context's `host` value, Every call in HIR from a const context (which includes `const fn` and `const`
which almost always trivially succeeds, as it was an inference var. If the inference items) will check that `const_conditions` of the function we are calling hold.
var has already been bound (since the function item is invoked twice), the second This is done in [`FnCtxt::enforce_context_effects`]. Note that we don't check
invocation checks it against the first. if the function is only referred to but not called, as the following code needs
to compile:
```rust
const fn hi<T: ~const Default>() -> T {
T::default()
}
const X: fn() -> u32 = hi::<u32>;
```
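The same distinction is visible on stable Rust without `~const` bounds (a sketch with illustrative names): taking a function item as a value does not call it, so call-site checks only apply once the pointer is actually invoked:

```rust
const fn forty_two() -> u32 {
    42
}

// Referring to `forty_two` only coerces the function item to a function
// pointer; nothing about calling it is checked here.
const F: fn() -> u32 = forty_two;

fn main() {
    // The call-site checks apply at this invocation, not at the
    // definition of `F` above.
    assert_eq!(F(), 42);
}
```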
For a trait `impl` to be well-formed, we must be able to prove the
`const_conditions` of the trait from the `impl`'s environment. This is checked
in [`wfcheck::check_impl`].
Here's an example:
```rust
#[const_trait]
trait Bar {}
#[const_trait]
trait Foo: ~const Bar {}
// `const_conditions` contains `HostEffect(Self: Bar, maybe)`
impl const Bar for () {}
impl const Foo for () {}
// ^ here we check `const_conditions` for the impl to be well-formed
```
Methods of trait impls must not have stricter bounds than the method of the
trait that they are implementing. To check that the methods are compatible, a
hybrid environment is constructed with the predicates of the `impl` plus the
predicates of the trait method, and we attempt to prove the predicates of the
impl method. We do the same for `const_conditions`:
```rust
#[const_trait]
trait Foo {
fn hi<T: ~const Default>();
}
impl<T: ~const Clone> Foo for Vec<T> {
fn hi<T: ~const PartialEq>();
// ^ we can't prove `T: ~const PartialEq` given `T: ~const Clone` and
// `T: ~const Default`, therefore we know that the method on the impl
// is stricter than the method on the trait.
}
```
These checks are done in [`compare_method_predicate_entailment`]. A similar
function that does the same check for associated types is called
[`compare_type_predicate_entailment`]. Both of these need to consider
`const_conditions` when in const contexts.
In MIR, as part of const checking, `const_conditions` of items that are called
are revalidated again in [`Checker::revalidate_conditional_constness`].
[`compare_method_predicate_entailment`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_method_predicate_entailment.html
[`compare_type_predicate_entailment`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_type_predicate_entailment.html
[`FnCtxt::enforce_context_effects`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn_ctxt/struct.FnCtxt.html#method.enforce_context_effects
[`wfcheck::check_impl`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/wfcheck/fn.check_impl.html
[`Checker::revalidate_conditional_constness`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_const_eval/check_consts/check/struct.Checker.html#method.revalidate_conditional_constness
## `explicit_implied_const_bounds` on associated types and traits
Bounds on associated types, opaque types, and supertraits such as
```rust
trait Foo: ~const PartialEq {
type X: ~const PartialEq;
}
fn foo() -> impl ~const PartialEq {
// ^ unimplemented syntax
}
```
are represented differently. Unlike `const_conditions`, which need are represented differently. Unlike `const_conditions`, which need
to be proved for callers, and can be assumed inside the definition (e.g. trait
bounds on functions), these bounds need to be proved at definition (at the impl,
or when returning the opaque) but can be assumed for callers. The non-const
equivalent of these bounds are called [`explicit_item_bounds`].
These bounds are checked in [`compare_impl_item::check_type_bounds`] for HIR
typeck, [`evaluate_host_effect_from_item_bounds`] in the old solver and
[`consider_additional_alias_assumptions`] in the new solver.
[`explicit_item_bounds`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.explicit_item_bounds
[`compare_impl_item::check_type_bounds`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.check_type_bounds.html
[`evaluate_host_effect_from_item_bounds`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/effects/fn.evaluate_host_effect_from_item_bounds.html
[`consider_additional_alias_assumptions`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_next_trait_solver/solve/assembly/trait.GoalKind.html#tymethod.consider_additional_alias_assumptions
## Proving `HostEffectPredicate`s
`HostEffectPredicate`s are implemented both in the [old solver] and the [new
trait solver]. In general, we can prove a `HostEffect` predicate when any of trait solver]. In general, we can prove a `HostEffect` predicate when any of
these conditions is met: these conditions is met:
* The predicate can be assumed from caller bounds;
* The type has a `const` `impl` for the trait, *and* the const conditions on * The type has a `const` `impl` for the trait, *and* the const conditions on
the impl hold, *and* the `explicit_implied_const_bounds` on the trait the impl hold, *and* the `explicit_implied_const_bounds` on the trait
hold; or hold; or
* The type has a built-in implementation for the trait in const contexts. For
example, `Fn` may be implemented by function items if their const conditions
are satisfied, or `Destruct` is implemented in const contexts if the type can
be dropped at compile time.
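Putting the first two conditions together, here is a sketch in the unstable syntax used throughout this chapter (it requires a nightly compiler with `const_trait_impl`; the trait and names are illustrative):

```rust
#![feature(const_trait_impl)]

#[const_trait]
trait Answer {
    fn answer() -> u32;
}

// A `const` impl: proving `HostEffect(u32: Answer, const)` succeeds
// because this impl exists and its (empty) const conditions hold.
impl const Answer for u32 {
    fn answer() -> u32 { 42 }
}

// Inside `ask`, `T: ~const Answer` is assumed from the caller bounds.
const fn ask<T: ~const Answer>() -> u32 {
    T::answer()
}

// Calling `ask` in a const context requires proving the predicate above.
const X: u32 = ask::<u32>();
```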
[old solver]: https://doc.rust-lang.org/nightly/nightly-rustc/src/rustc_trait_selection/traits/effects.rs.html
[new trait solver]: https://doc.rust-lang.org/nightly/nightly-rustc/src/rustc_next_trait_solver/solve/effect_goals.rs.html
@ -1,4 +1,4 @@
# Feature Gates # Feature gates
This chapter is intended to provide basic help for adding, removing, and This chapter is intended to provide basic help for adding, removing, and
modifying feature gates. modifying feature gates.
@ -123,7 +123,7 @@ what actually results in superior throughput.
You may want to build rustc from source with debug assertions to find You may want to build rustc from source with debug assertions to find
additional bugs, though this is a trade-off: it can slow down fuzzing by additional bugs, though this is a trade-off: it can slow down fuzzing by
requiring extra work for every execution. To enable debug assertions, add this requiring extra work for every execution. To enable debug assertions, add this
to `config.toml` when compiling rustc: to `bootstrap.toml` when compiling rustc:
```toml ```toml
[rust] [rust]
@ -100,7 +100,7 @@ The HIR uses a bunch of different identifiers that coexist and serve different p
a wrapper around a [`HirId`]. For more info about HIR bodies, please refer to the a wrapper around a [`HirId`]. For more info about HIR bodies, please refer to the
[HIR chapter][hir-bodies]. [HIR chapter][hir-bodies].
These identifiers can be converted into one another through the [HIR map][map]. These identifiers can be converted into one another through the `TyCtxt`.
[`DefId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.DefId.html [`DefId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.DefId.html
[`LocalDefId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.LocalDefId.html [`LocalDefId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.LocalDefId.html
@ -110,47 +110,42 @@ These identifiers can be converted into one another through the [HIR map][map].
[`CrateNum`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.CrateNum.html [`CrateNum`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.CrateNum.html
[`DefIndex`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.DefIndex.html [`DefIndex`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/def_id/struct.DefIndex.html
[`Body`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Body.html [`Body`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Body.html
[hir-map]: ./hir.md#the-hir-map
[hir-bodies]: ./hir.md#hir-bodies [hir-bodies]: ./hir.md#hir-bodies
[map]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html
## The HIR Map ## HIR Operations
Most of the time when you are working with the HIR, you will do so via Most of the time when you are working with the HIR, you will do so via
the **HIR Map**, accessible in the tcx via [`tcx.hir()`] (and defined in `TyCtxt`. It contains a number of methods, defined in the `hir::map` module and
the [`hir::map`] module). The [HIR map] contains a [number of methods] to mostly prefixed with `hir_`, to convert between IDs of various kinds and to
convert between IDs of various kinds and to lookup data associated lookup data associated with a HIR node.
with a HIR node.
[`tcx.hir()`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.hir [`TyCtxt`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html
[`hir::map`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/index.html
[HIR map]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html
[number of methods]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#methods
For example, if you have a [`LocalDefId`], and you would like to convert it For example, if you have a [`LocalDefId`], and you would like to convert it
to a [`HirId`], you can use [`tcx.hir().local_def_id_to_hir_id(def_id)`][local_def_id_to_hir_id]. to a [`HirId`], you can use [`tcx.local_def_id_to_hir_id(def_id)`][local_def_id_to_hir_id].
You need a `LocalDefId`, rather than a `DefId`, since only local items have HIR nodes. You need a `LocalDefId`, rather than a `DefId`, since only local items have HIR nodes.
[local_def_id_to_hir_id]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.local_def_id_to_hir_id [local_def_id_to_hir_id]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.local_def_id_to_hir_id
Similarly, you can use [`tcx.hir().find(n)`][find] to lookup the node for a Similarly, you can use [`tcx.hir_node(n)`][hir_node] to lookup the node for a
[`HirId`]. This returns an `Option<Node<'hir>>`, where [`Node`] is an enum [`HirId`]. This returns an `Option<Node<'hir>>`, where [`Node`] is an enum
defined in the map. By matching on this, you can find out what sort of defined in the map. By matching on this, you can find out what sort of
node the `HirId` referred to and also get a pointer to the data node the `HirId` referred to and also get a pointer to the data
itself. Often, you know what sort of node `n` is, e.g. if you know itself. Often, you know what sort of node `n` is, e.g. if you know
that `n` must be some HIR expression, you can do that `n` must be some HIR expression, you can do
[`tcx.hir().expect_expr(n)`][expect_expr], which will extract and return the [`tcx.hir_expect_expr(n)`][expect_expr], which will extract and return the
[`&hir::Expr`][Expr], panicking if `n` is not in fact an expression. [`&hir::Expr`][Expr], panicking if `n` is not in fact an expression.
[find]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.find [hir_node]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.hir_node
[`Node`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.Node.html [`Node`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.Node.html
[expect_expr]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.expect_expr [expect_expr]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.expect_expr
[Expr]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Expr.html [Expr]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Expr.html
Finally, you can use the HIR map to find the parents of nodes, via Finally, you can find the parents of nodes, via
calls like [`tcx.hir().get_parent(n)`][get_parent]. calls like [`tcx.parent_hir_node(n)`][parent_hir_node].
[parent_hir_node]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.parent_hir_node
[get_parent]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.get_parent
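Put together, a typical lookup chain might look like this (a sketch against the nightly rustc API, assuming a `tcx: TyCtxt<'_>` and a `def_id: LocalDefId` are in scope; exact signatures may differ between nightlies):

```rust
// Sketch only: this runs inside rustc, not as a standalone program.
let hir_id = tcx.local_def_id_to_hir_id(def_id); // LocalDefId -> HirId
let node = tcx.hir_node(hir_id);                 // what kind of node is this?
let parent = tcx.parent_hir_node(hir_id);        // one step up the HIR tree
```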
## HIR Bodies ## HIR Bodies
@ -158,10 +153,10 @@ A [`rustc_hir::Body`] represents some kind of executable code, such as the body
of a function/closure or the definition of a constant. Bodies are of a function/closure or the definition of a constant. Bodies are
associated with an **owner**, which is typically some kind of item associated with an **owner**, which is typically some kind of item
(e.g. an `fn()` or `const`), but could also be a closure expression (e.g. an `fn()` or `const`), but could also be a closure expression
(e.g. `|x, y| x + y`). You can use the HIR map to find the body (e.g. `|x, y| x + y`). You can use the `TyCtxt` to find the body
associated with a given def-id ([`maybe_body_owned_by`]) or to find associated with a given def-id ([`hir_maybe_body_owned_by`]) or to find
the owner of a body ([`body_owner_def_id`]). the owner of a body ([`hir_body_owner_def_id`]).
[`rustc_hir::Body`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Body.html [`rustc_hir::Body`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/struct.Body.html
[`maybe_body_owned_by`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.maybe_body_owned_by [`hir_maybe_body_owned_by`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.hir_maybe_body_owned_by
[`body_owner_def_id`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/map/struct.Map.html#method.body_owner_def_id [`hir_body_owner_def_id`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TyCtxt.html#method.hir_body_owner_def_id
@ -44,7 +44,7 @@ like this; for example, the compiler team recommends
filing a Major Change Proposal ([MCP][mcp]) as a lightweight way to filing a Major Change Proposal ([MCP][mcp]) as a lightweight way to
garner support and feedback without requiring full consensus. garner support and feedback without requiring full consensus.
[mcp]: https://forge.rust-lang.org/compiler/mcp.html#public-facing-changes-require-rfcbot-fcp [mcp]: https://forge.rust-lang.org/compiler/proposals-and-stabilization.html#how-do-i-submit-an-mcp
You don't need to have the implementation fully ready for r+ to propose an FCP, You don't need to have the implementation fully ready for r+ to propose an FCP,
but it is generally a good idea to have at least a proof but it is generally a good idea to have at least a proof
@ -1,4 +1,4 @@
# Debugging and Testing Dependencies # Debugging and testing dependencies
## Testing the dependency graph ## Testing the dependency graph
@ -34,7 +34,7 @@ Detailed instructions and examples are documented in the
[coverage map]: https://llvm.org/docs/CoverageMappingFormat.html [coverage map]: https://llvm.org/docs/CoverageMappingFormat.html
[rustc-book-instrument-coverage]: https://doc.rust-lang.org/nightly/rustc/instrument-coverage.html [rustc-book-instrument-coverage]: https://doc.rust-lang.org/nightly/rustc/instrument-coverage.html
## Recommended `config.toml` settings ## Recommended `bootstrap.toml` settings
When working on the coverage instrumentation code, it is usually necessary to When working on the coverage instrumentation code, it is usually necessary to
**enable the profiler runtime** by setting `profiler = true` in `[build]`. **enable the profiler runtime** by setting `profiler = true` in `[build]`.
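Concretely, the relevant `bootstrap.toml` fragment might look like this (other keys elided):

```toml
[build]
# Build the LLVM profiler runtime so that coverage-instrumented
# binaries produced by the in-tree compiler can link against it.
profiler = true
```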
@ -83,7 +83,7 @@ statically links coverage-instrumented binaries with LLVM runtime code
In the `rustc` source tree, In the `rustc` source tree,
`library/profiler_builtins` bundles the LLVM `compiler-rt` code into a Rust library crate. `library/profiler_builtins` bundles the LLVM `compiler-rt` code into a Rust library crate.
Note that when building `rustc`, Note that when building `rustc`,
`profiler_builtins` is only included when `build.profiler = true` is set in `config.toml`. `profiler_builtins` is only included when `build.profiler = true` is set in `bootstrap.toml`.
When compiling with `-C instrument-coverage`, When compiling with `-C instrument-coverage`,
[`CrateLoader::postprocess()`][crate-loader-postprocess] dynamically loads [`CrateLoader::postprocess()`][crate-loader-postprocess] dynamically loads
@ -115,7 +115,7 @@ human-readable coverage report.
> Tests in `coverage-run` mode have an implicit `//@ needs-profiler-runtime` > Tests in `coverage-run` mode have an implicit `//@ needs-profiler-runtime`
> directive, so they will be skipped if the profiler runtime has not been > directive, so they will be skipped if the profiler runtime has not been
> [enabled in `config.toml`](#recommended-configtoml-settings). > [enabled in `bootstrap.toml`](#recommended-bootstraptoml-settings).
Finally, the [`tests/codegen/instrument-coverage/testprog.rs`] test compiles a simple Rust program Finally, the [`tests/codegen/instrument-coverage/testprog.rs`] test compiles a simple Rust program
with `-C instrument-coverage` and compares the compiled program's LLVM IR to with `-C instrument-coverage` and compares the compiled program's LLVM IR to
@ -1,4 +1,4 @@
# Memory Management in Rustc # Memory management in rustc
Generally rustc tries to be pretty careful how it manages memory. Generally rustc tries to be pretty careful how it manages memory.
The compiler allocates _a lot_ of data structures throughout compilation, The compiler allocates _a lot_ of data structures throughout compilation,
@ -120,9 +120,9 @@ even though they should be visible by ordinary scoping rules. An example:
fn do_something<T: Default>(val: T) { // <- New rib in both types and values (1) fn do_something<T: Default>(val: T) { // <- New rib in both types and values (1)
// `val` is accessible, as is the helper function // `val` is accessible, as is the helper function
// `T` is accessible // `T` is accessible
let helper = || { // New rib on `helper` (2) and another on the block (3) let helper = || { // New rib on the block (2)
// `val` is accessible here // `val` is accessible here
}; // End of (3) }; // End of (2), new rib on `helper` (3)
// `val` is accessible, `helper` variable shadows `helper` function // `val` is accessible, `helper` variable shadows `helper` function
fn helper() { // <- New rib in both types and values (4) fn helper() { // <- New rib in both types and values (4)
// `val` is not accessible here, (4) is not transparent for locals // `val` is not accessible here, (4) is not transparent for locals
@ -130,7 +130,7 @@ fn do_something<T: Default>(val: T) { // <- New rib in both types and values (1)
} // End of (4) } // End of (4)
let val = T::default(); // New rib (5) let val = T::default(); // New rib (5)
// `val` is the variable, not the parameter here // `val` is the variable, not the parameter here
} // End of (5), (2) and (1) } // End of (5), (3) and (1)
``` ```
Because the rules for different namespaces are a bit different, each namespace Because the rules for different namespaces are a bit different, each namespace
@ -23,7 +23,7 @@ Here's the list of the notification groups:
- [ARM](./arm.md) - [ARM](./arm.md)
- [Cleanup Crew](./cleanup-crew.md) - [Cleanup Crew](./cleanup-crew.md)
- [Emscripten](./emscripten.md) - [Emscripten](./emscripten.md)
- [LLVM](./llvm.md) - [LLVM Icebreakers](./llvm.md)
- [RISC-V](./risc-v.md) - [RISC-V](./risc-v.md)
- [WASI](./wasi.md) - [WASI](./wasi.md)
- [WebAssembly](./wasm.md) - [WebAssembly](./wasm.md)
@ -83,7 +83,7 @@ group. For example:
@rustbot ping arm @rustbot ping arm
@rustbot ping cleanup-crew @rustbot ping cleanup-crew
@rustbot ping emscripten @rustbot ping emscripten
@rustbot ping llvm @rustbot ping icebreakers-llvm
@rustbot ping risc-v @rustbot ping risc-v
@rustbot ping wasi @rustbot ping wasi
@rustbot ping wasm @rustbot ping wasm
@ -0,0 +1,12 @@
# Fuchsia notification group
**Github Label:** [O-fuchsia] <br>
**Ping command:** `@rustbot ping fuchsia`
[O-fuchsia]: https://github.com/rust-lang/rust/labels/O-fuchsia
This list will be used to notify [Fuchsia][fuchsia] maintainers
when the compiler or the standard library changes in a way that would
break the Fuchsia integration.
[fuchsia]: ../tests/ecosystem-test-jobs/fuchsia.md
@ -1,13 +1,16 @@
# LLVM Notification group # LLVM Icebreakers Notification group
**Github Label:** [A-LLVM] <br> **Github Label:** [A-LLVM] <br>
**Ping command:** `@rustbot ping llvm` **Ping command:** `@rustbot ping icebreakers-llvm`
[A-LLVM]: https://github.com/rust-lang/rust/labels/A-LLVM [A-LLVM]: https://github.com/rust-lang/rust/labels/A-LLVM
The "LLVM Notification Group" are focused on bugs that center around LLVM. *Note*: this notification group is *not* the same as the LLVM working group
These bugs often arise because of LLVM optimizations gone awry, or as (WG-llvm).
the result of an LLVM upgrade. The goal here is:
The "LLVM Icebreakers Notification Group" are focused on bugs that center around
LLVM. These bugs often arise because of LLVM optimizations gone awry, or as the
result of an LLVM upgrade. The goal here is:
- to determine whether the bug is a result of us generating invalid LLVM IR, - to determine whether the bug is a result of us generating invalid LLVM IR,
or LLVM misoptimizing; or LLVM misoptimizing;
@ -351,7 +351,7 @@ approach is to turn [`RefCell`]s into [`Mutex`]s -- that is, we
 switch to thread-safe internal mutability. However, there are ongoing
 challenges with lock contention, maintaining query-system invariants under
 concurrency, and the complexity of the code base. One can try out the current
-work by enabling parallel compilation in `config.toml`. It's still early days,
+work by enabling parallel compilation in `bootstrap.toml`. It's still early days,
 but there are already some promising performance improvements.

 [`RefCell`]: https://doc.rust-lang.org/std/cell/struct.RefCell.html


@ -1,4 +1,4 @@
-# Panicking in rust
+# Panicking in Rust

 <!-- toc -->


@ -1,4 +1,4 @@
-# Parallel Compilation
+# Parallel compilation

 <div class="warning">
 As of <!-- date-check --> November 2024,

@ -28,7 +28,7 @@ The following sections are kept for now but are quite outdated.

 [codegen]: backend/codegen.md

-## Code Generation
+## Code generation

 During monomorphization the compiler splits up all the code to
 be generated into smaller chunks called _codegen units_. These are then generated by

@ -38,7 +38,7 @@ occurs in the [`rustc_codegen_ssa::base`] module.

 [`rustc_codegen_ssa::base`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/base/index.html

-## Data Structures
+## Data structures

 The underlying thread-safe data-structures used in the parallel compiler
 can be found in the [`rustc_data_structures::sync`] module. These data structures

@ -83,7 +83,7 @@ can be accessed directly through `Deref::deref`.

 [`rustc_data_structures::sync::worker_local`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_data_structures/sync/worker_local/index.html
 [`WorkerLocal`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_data_structures/sync/worker_local/struct.WorkerLocal.html

-## Parallel Iterator
+## Parallel iterator

 The parallel iterators provided by the [`rayon`] crate are easy ways to
 implement parallelism. In the current implementation of the parallel compiler

@ -124,7 +124,7 @@ the parallel iterator function has been used are as follows:

 There are still many loops that have the potential to use parallel iterators.

-## Query System
+## Query system

 The query model has some properties that make it actually feasible to evaluate
 multiple queries in parallel without too much effort:


@ -1,43 +0,0 @@
# Which `ParamEnv` do I use?
When needing a [`ParamEnv`][pe] in the compiler there are a few options for obtaining one:
- The correct env is already in scope: simply use it (or pass it down the call stack to where you are).
- The [`tcx.param_env(def_id)` query][param_env_query]
- Use [`ParamEnv::new`][param_env_new] to construct an env with an arbitrary set of where clauses. Then call [`traits::normalize_param_env_or_error`][normalize_env_or_error] which will handle normalizing and elaborating all the where clauses in the env for you.
- Creating an empty environment via [`ParamEnv::reveal_all`][env_reveal_all] or [`ParamEnv::empty`][env_empty]
In the large majority of cases, a `ParamEnv` already exists somewhere in scope or above in the call stack when it is required, and should be passed down. A non-exhaustive list of places where you might find an existing `ParamEnv`:
- During typeck `FnCtxt` has a [`param_env` field][fnctxt_param_env]
- When writing late lints the `LateContext` has a [`param_env` field][latectxt_param_env]
- During well formedness checking the `WfCheckingCtxt` has a [`param_env` field][wfckctxt_param_env]
- The `TypeChecker` used by Mir Typeck has a [`param_env` field][mirtypeck_param_env]
- In the next-gen trait solver all `Goal`s have a [`param_env` field][goal_param_env] specifying what environment to prove the goal in
- When editing an existing [`TypeRelation`][typerelation] if it implements `PredicateEmittingRelation` then a [`param_env` method][typerelation_param_env] will be available.
Using the `param_env` query to obtain an env is generally done at the start of some kind of analysis and then passed everywhere that a `ParamEnv` is required. For example the type checker will create a `ParamEnv` for the item it is type checking and then pass it around everywhere.
Creating an env from an arbitrary set of where clauses is usually unnecessary and should only be done if the environment you need does not correspond to an actual item in the source code (e.g. [`compare_method_predicate_entailment`][method_pred_entailment] as mentioned earlier).
Creating an empty environment via `ParamEnv::empty` is almost always wrong. There are very few places where we actually know that the environment should be empty. One of the only places where we do actually know this is after monomorphization, however the `ParamEnv` there should be constructed via `ParamEnv::reveal_all` instead as at this point we should be able to determine the hidden type of opaque types. Codegen/Post-mono is one of the only places that should be using `ParamEnv::reveal_all`.
An additional piece of complexity here is specifying the `Reveal` (see linked docs for explanation of what reveal does) used for the `ParamEnv`. When constructing a param env using the `param_env` query it will have `Reveal::UserFacing`, if `Reveal::All` is desired then the [`tcx.param_env_reveal_all_normalized`][env_reveal_all_normalized] query can be used instead.
The `ParamEnv` type has a method [`ParamEnv::with_reveal_all_normalized`][with_reveal_all] which converts an existing `ParamEnv` into one with `Reveal::All` specified. Where possible the previously mentioned query should be preferred as it is more efficient.
[param_env_new]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.new
[normalize_env_or_error]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/fn.normalize_param_env_or_error.html
[fnctxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn_ctxt/struct.FnCtxt.html#structfield.param_env
[latectxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/context/struct.LateContext.html#structfield.param_env
[wfckctxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/wfcheck/struct.WfCheckingCtxt.html#structfield.param_env
[goal_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/infer/canonical/ir/solve/struct.Goal.html#structfield.param_env
[typerelation_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/infer/trait.PredicateEmittingRelation.html#tymethod.param_env
[typerelation]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/relate/trait.TypeRelation.html
[mirtypeck_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_borrowck/type_check/struct.TypeChecker.html#structfield.param_env
[env_reveal_all_normalized]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/context/struct.TyCtxt.html#method.param_env_reveal_all_normalized
[with_reveal_all]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.with_reveal_all_normalized
[env_reveal_all]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.reveal_all
[env_empty]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.empty
[pe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[param_env_query]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn_ctxt/struct.FnCtxt.html#structfield.param_env
[method_pred_entailment]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_method_predicate_entailment.html


@ -1,83 +0,0 @@
# How are `ParamEnv`s constructed internally?
Creating a [`ParamEnv`][pe] is more complicated than simply using the list of where clauses defined on an item as written by the user. We need to both elaborate supertraits into the env and fully normalize all aliases. This logic is handled by [`traits::normalize_param_env_or_error`][normalize_env_or_error] (even though it does not mention anything about elaboration).
## Elaborating supertraits
When we have a function such as `fn foo<T: Copy>()` we would like to be able to prove `T: Clone` inside of the function as the `Copy` trait has a `Clone` supertrait. Constructing a `ParamEnv` looks at all of the trait bounds in the env and explicitly adds new where clauses to the `ParamEnv` for any supertraits found on the traits.
A concrete example would be the following function:
```rust
trait Trait: SuperTrait {}
trait SuperTrait: SuperSuperTrait {}
// `bar`'s unelaborated `ParamEnv` would be:
// `[T: Sized, T: Copy, T: Trait]`
fn bar<T: Copy + Trait>(a: T) {
requires_impl(a);
}
fn requires_impl<T: Clone + SuperSuperTrait>(a: T) {}
```
If we did not elaborate the env then the `requires_impl` call would fail to typecheck as we would not be able to prove `T: Clone` or `T: SuperSuperTrait`. In practice we elaborate the env which means that `bar`'s `ParamEnv` is actually:
`[T: Sized, T: Copy, T: Clone, T: Trait, T: SuperTrait, T: SuperSuperTrait]`
This allows us to prove `T: Clone` and `T: SuperSuperTrait` when type checking `bar`.
The `Clone` trait has a `Sized` supertrait however we do not end up with two `T: Sized` bounds in the env (one for the supertrait and one for the implicitly added `T: Sized` bound). This is because the elaboration process (implemented via [`util::elaborate`][elaborate]) deduplicates the where clauses to avoid this.
As a side effect this also means that even if no actual elaboration of supertraits takes place, the existing where clauses in the env are _also_ deduplicated. See the following example:
```rust
trait Trait {}
// The unelaborated `ParamEnv` would be:
// `[T: Sized, T: Trait, T: Trait]`
// but after elaboration it would be:
// `[T: Sized, T: Trait]`
fn foo<T: Trait + Trait>() {}
```
The [next-gen trait solver][next-gen-solver] also requires this elaboration to take place.
[elaborate]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/traits/util/fn.elaborate.html
[next-gen-solver]: ../solve/trait-solving.md
## Normalizing all bounds
In the old trait solver the where clauses stored in `ParamEnv` are required to be fully normalized or else the trait solver will not function correctly. A concrete example of needing to normalize the `ParamEnv` is the following:
```rust
trait Trait<T> {
type Assoc;
}
trait Other {
type Bar;
}
impl<T> Other for T {
type Bar = u32;
}
// `foo`'s unnormalized `ParamEnv` would be:
// `[T: Sized, U: Sized, U: Trait<T::Bar>]`
fn foo<T, U>(a: U)
where
U: Trait<<T as Other>::Bar>,
{
requires_impl(a);
}
fn requires_impl<U: Trait<u32>>(_: U) {}
```
As humans we can tell that `<T as Other>::Bar` is equal to `u32` so the trait bound on `U` is equivalent to `U: Trait<u32>`. In practice trying to prove `U: Trait<u32>` in the old solver in this environment would fail as it is unable to determine that `<T as Other>::Bar` is equal to `u32`.
To work around this we normalize `ParamEnv`s after constructing them, so that `foo`'s `ParamEnv` is actually: `[T: Sized, U: Sized, U: Trait<u32>]`, which means the trait solver is now able to use the `U: Trait<u32>` in the `ParamEnv` to determine that the trait bound `U: Trait<u32>` holds.
This workaround does not work in all cases as normalizing associated types requires a `ParamEnv` which introduces a bootstrapping problem. We need a normalized `ParamEnv` in order for normalization to give correct results, but we need to normalize to get that `ParamEnv`. Currently we normalize the `ParamEnv` once using the unnormalized param env and it tends to give okay results in practice even though there are some examples where this breaks ([example]).
In the next-gen trait solver the requirement for all where clauses in the `ParamEnv` to be fully normalized is not present and so we do not normalize when constructing `ParamEnv`s.
[example]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=e6933265ea3e84eaa47019465739992c
[pe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[normalize_env_or_error]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/fn.normalize_param_env_or_error.html


@ -1,18 +0,0 @@
# The `ParamEnv` type
## Summary
The [`ParamEnv`][pe] is used to store information about the environment that we are interacting with the type system from. For example, the set of in-scope where-clauses is stored in `ParamEnv`, as it differs between items, whereas the list of user-written impls is not, as it does not change from item to item.
This chapter of the dev guide covers:
- A high level summary of what a `ParamEnv` is and what it is used for
- Technical details about what the process of constructing a `ParamEnv` involves
- Guidance about how to acquire a `ParamEnv` when one is required
## Bundling
A useful API on `ParamEnv` is the [`and`][and] method which allows bundling a value with the `ParamEnv`. The `and` method produces a [`ParamEnvAnd<T>`][pea] making it clearer that using the inner value is intended to be done in that specific environment.
[and]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.and
[pe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[pea]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnvAnd.html


@ -1,59 +0,0 @@
# What is a `ParamEnv`?
The type system relies on information in the environment in order for it to function correctly. This information is stored in the [`ParamEnv`][pe] type and it is important to use the correct `ParamEnv` when interacting with the type system.
The information represented by `ParamEnv` is a list of in-scope where-clauses, and a `Reveal` (see linked docs for more information). A `ParamEnv` typically corresponds to a specific item's where clauses; some of those clauses are not explicitly written bounds but are instead implicitly added in [`predicates_of`][predicates_of], such as `ConstArgHasType` or some implied bounds.
A `ParamEnv` can also be created with arbitrary data that is not derived from a specific item such as in [`compare_method_predicate_entailment`][method_pred_entailment] which creates a hybrid `ParamEnv` consisting of the impl's where clauses and the trait definition's function's where clauses. In most cases `ParamEnv`s are initially created via the [`param_env` query][query] which returns a `ParamEnv` derived from the provided item's where clauses.
If we have a function such as:
```rust
// `foo` would have a `ParamEnv` of:
// `[T: Sized, T: Trait, <T as Trait>::Assoc: Clone]`
fn foo<T: Trait>()
where
<T as Trait>::Assoc: Clone,
{}
```
If we were conceptually inside of `foo` (for example, type-checking or linting it) we would use this `ParamEnv` everywhere that we interact with the type system. This would allow things such as normalization (TODO: write a chapter about normalization and link it), evaluating generic constants, and proving where clauses/goals, to rely on `T` being sized, implementing `Trait`, etc.
A more concrete example:
```rust
// `foo` would have a `ParamEnv` of:
// `[T: Sized, T: Clone]`
fn foo<T: Clone>(a: T) {
// when typechecking `foo` we require all the where clauses on `bar`
// to hold in order for it to be legal to call. This means we have to
// prove `T: Clone`. As we are type checking `foo` we use `foo`'s
// environment when trying to check that `T: Clone` holds.
//
// Trying to prove `T: Clone` with a `ParamEnv` of `[T: Sized, T: Clone]`
// will trivially succeed, as the bound we want to prove is in our environment.
requires_clone(a);
}
```
Or alternatively an example that would not compile:
```rust
// `foo2` would have a `ParamEnv` of:
// `[T: Sized]`
fn foo2<T>(a: T) {
// When typechecking `foo2` we attempt to prove `T: Clone`.
// As we are type checking `foo2` we use `foo2`'s environment
// when trying to prove `T: Clone`.
//
// Trying to prove `T: Clone` with a `ParamEnv` of `[T: Sized]` will
// fail as there is nothing in the environment telling the trait solver
// that `T` implements `Clone` and there exists no user written impl
// that could apply.
requires_clone(a);
}
```
It's very important to use the correct `ParamEnv` when interacting with the type system as otherwise it can lead to ICEs or things compiling when they shouldn't (or vice versa). See [#82159](https://github.com/rust-lang/rust/pull/82159) and [#82067](https://github.com/rust-lang/rust/pull/82067) as examples of PRs that changed rustc to use the correct param env to avoid ICE. Determining how to acquire the correct `ParamEnv` is explained later in this chapter.
[predicates_of]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/collect/predicates_of/fn.predicates_of.html
[method_pred_entailment]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_method_predicate_entailment.html
[pe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[query]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/context/struct.TyCtxt.html#method.param_env


@ -120,7 +120,7 @@ The `rustc` version of this can be found in `library/profiler_builtins` which
 basically packs the C code from `compiler-rt` into a Rust crate.

 In order for `profiler_builtins` to be built, `profiler = true` must be set
-in `rustc`'s `config.toml`.
+in `rustc`'s `bootstrap.toml`.

 [compiler-rt-profile]: https://github.com/llvm/llvm-project/tree/main/compiler-rt/lib/profile


@ -87,7 +87,7 @@ Example output for the compiler:
 Since this doesn't seem to work with incremental compilation or `./x check`,
 you will be compiling rustc _a lot_.
-I recommend changing a few settings in `config.toml` to make it bearable:
+I recommend changing a few settings in `bootstrap.toml` to make it bearable:

 ```
 [rust]
 # A debug build takes _a third_ as long on my machine,


@ -6,7 +6,7 @@ This is a guide for how to profile rustc with [perf](https://perf.wiki.kernel.or
 - Get a clean checkout of rust-lang/master, or whatever it is you want
   to profile.
-- Set the following settings in your `config.toml`:
+- Set the following settings in your `bootstrap.toml`:
   - `debuginfo-level = 1` - enables line debuginfo
   - `jemalloc = false` - lets you do memory use profiling with valgrind
   - leave everything else the defaults


@ -9,7 +9,7 @@ which will download and build the suite for you, build a local compiler toolchai
 You can use the `./x perf <command> [options]` command to use this integration.
-You can use normal bootstrap flags for this command, such as `--stage 1` or `--stage 2`, for example to modify the stage of the created sysroot. It might also be useful to configure `config.toml` to better support profiling, e.g. set `rust.debuginfo-level = 1` to add source line information to the built compiler.
+You can use normal bootstrap flags for this command, such as `--stage 1` or `--stage 2`, for example to modify the stage of the created sysroot. It might also be useful to configure `bootstrap.toml` to better support profiling, e.g. set `rust.debuginfo-level = 1` to add source line information to the built compiler.

 `x perf` currently supports the following commands:
 - `benchmark <id>`: Benchmark the compiler and store the results under the passed `id`.


@ -44,7 +44,7 @@ compiler we're using to build rustc will aid our analysis greatly by allowing WP
 symbols correctly. Unfortunately, the stage 0 compiler does not have symbols turned on which is why
 we'll need to build a stage 1 compiler and then a stage 2 compiler ourselves.

-To do this, make sure you have set `debuginfo-level = 1` in your `config.toml` file. This tells
+To do this, make sure you have set `debuginfo-level = 1` in your `bootstrap.toml` file. This tells
 rustc to generate debug information which includes stack frames when bootstrapping.

 Now you can build the stage 1 compiler: `x build --stage 1 -i library` or however


@ -1,4 +1,4 @@
-# Incremental Compilation In Detail
+# Incremental Compilation in detail

 <!-- toc -->


@ -1,4 +1,4 @@
-# The Query Evaluation Model in Detail
+# The Query Evaluation Model in detail

 <!-- toc -->


@ -7,7 +7,7 @@ otherwise be printed to stderr.
 To get diagnostics from the compiler,
 configure [`rustc_interface::Config`] to output diagnostic to a buffer,
-and run [`TyCtxt.analysis`].
+and run [`rustc_hir_typeck::typeck`] for each item.

 ```rust
 {{#include ../../examples/rustc-interface-getting-diagnostics.rs}}

@ -16,3 +16,4 @@ and run [`TyCtxt.analysis`].

 [`rustc_interface`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_interface/index.html
 [`rustc_interface::Config`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_interface/interface/struct.Config.html
 [`TyCtxt.analysis`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_interface/passes/fn.analysis.html
+[`rustc_hir_typeck::typeck`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn.typeck.html


@ -0,0 +1,54 @@
# Remarks on perma-unstable features
## `rustc_private`
### Overview
The `rustc_private` feature allows external crates to use compiler internals.
### Using `rustc_private` with Official Toolchains
When using the `rustc_private` feature with official Rust toolchains distributed via rustup, you need to install two additional components:
1. **`rustc-dev`**: Provides compiler libraries
2. **`llvm-tools`**: Provides LLVM libraries required for linking
#### Installation Steps
Install both components using rustup:
```text
rustup component add rustc-dev llvm-tools
```
#### Common Error
Without the `llvm-tools` component, you'll encounter linking errors like:
```text
error: linking with `cc` failed: exit status: 1
|
= note: rust-lld: error: unable to find library -lLLVM-{version}
```
### Using `rustc_private` with Custom Toolchains
For custom-built toolchains or environments not using rustup, additional configuration is typically required:
#### Requirements
- LLVM libraries must be available in your system's library search paths
- The LLVM version must match the one used to build your Rust toolchain
#### Troubleshooting Steps
1. **Check LLVM installation**: Verify LLVM is installed and accessible
2. **Configure library paths**: You may need to set environment variables:
```text
export LD_LIBRARY_PATH=/path/to/llvm/lib:$LD_LIBRARY_PATH
```
3. **Check version compatibility**: Ensure your LLVM version is compatible with your Rust toolchain
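One way to compare the two versions (a sketch, assuming `llvm-config` for your LLVM installation is on your `PATH`; exact output formats vary):

```text
# LLVM version your Rust toolchain was built against:
rustc --version --verbose | grep LLVM
# LLVM version installed on the system:
llvm-config --version
```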
### Additional Resources
- [GitHub Issue #137421](https://github.com/rust-lang/rust/issues/137421): Explains that `rustc_private` linker failures often occur because `llvm-tools` is not installed


@ -0,0 +1,112 @@
# The `rustdoc` test suite
This page is specifically about the test suite named `rustdoc`.
For other test suites used for testing rustdoc, see [Rustdoc tests](../rustdoc.md#tests).
The `rustdoc` test suite is specifically used to test the HTML output of rustdoc.
This is achieved by means of `htmldocck.py`, a custom checker script that leverages [XPath].
[XPath]: https://en.wikipedia.org/wiki/XPath
## Directives
Directives to htmldocck are similar to those given to `compiletest` in that they take the form of `//@` comments.
In addition to the directives listed here,
`rustdoc` tests also support most
[compiletest directives](../tests/directives.html).
All `PATH`s in directives are relative to the rustdoc output directory (`build/TARGET/test/rustdoc/TESTNAME`),
so it is conventional to use a `#![crate_name = "foo"]` attribute to avoid
having to write a long crate name multiple times.
To avoid repetition, `-` can be used in any `PATH` argument to re-use the previous `PATH` argument.
All arguments take the form of quoted strings
(both single and double quotes are supported),
with the exception of `COUNT` and the special `-` form of `PATH`.
Directives are assertions that place constraints on the generated HTML.
All directives (except `files`) can be negated by putting a `!` in front of their name.
Similar to shell commands,
directives can extend across multiple lines if their last char is `\`.
In this case, the start of the next line should be `//`, with no `@`.
For example, `//@ !has 'foo/struct.Bar.html'` checks that crate `foo` does not have a page for a struct named `Bar` in the crate root.
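Putting the continuation rule together with a directive, a hypothetical multi-line check (made-up path, XPath, and pattern) might look like:

```text
//@ has 'foo/struct.Bar.html' '//h1' \
// 'Struct Bar'
```

The `\` ends the first line, and the continuation line starts with `//` without the `@`.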
### `has`
Usage 1: `//@ has PATH`
Usage 2: `//@ has PATH XPATH PATTERN`
In the first form, `has` checks that a given file exists.
In the second form, `has` is an alias for `matches`,
except `PATTERN` is a whitespace-normalized[^1] string instead of a regex.
### `matches`
Usage: `//@ matches PATH XPATH PATTERN`
Checks that the text of each element selected by `XPATH` in `PATH` matches the python-flavored regex `PATTERN`.
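For example (with hypothetical element text and a hypothetical pattern), a `matches` check behaves roughly like the following Python sketch:

```python
import re

# Hypothetical text content of the elements selected by XPATH.
selected_texts = ["pub fn new() -> Foo", "pub fn len(&self) -> usize"]

# PATTERN is an ordinary Python regex, applied with re.search.
pattern = re.compile(r"pub fn \w+\(")

# `matches` requires the pattern to be found in each selected element.
assert all(pattern.search(text) for text in selected_texts)
```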
### `matchesraw`
Usage: `//@ matchesraw PATH PATTERN`
Checks that the contents of the file `PATH` matches the regex `PATTERN`.
### `hasraw`
Usage: `//@ hasraw PATH PATTERN`
Same as `matchesraw`, except `PATTERN` is a whitespace-normalized[^1] string instead of a regex.
### `count`
Usage: `//@ count PATH XPATH COUNT`
Checks that there are exactly `COUNT` matches for `XPATH` within the file `PATH`.
### `snapshot`
Usage: `//@ snapshot NAME PATH XPATH`
Creates a snapshot test named NAME.
A snapshot test captures a subtree of the DOM, at the location
determined by the XPath, and compares it to a pre-recorded value
in a file. The file's name is the test's name with the `.rs` extension
replaced with `.NAME.html`, where NAME is the snapshot's name.
htmldocck supports the `--bless` option to accept the current subtree
as expected, saving it to the file determined by the snapshot's name.
compiletest's `--bless` flag is forwarded to htmldocck.
### `has-dir`
Usage: `//@ has-dir PATH`
Checks for the existence of directory `PATH`.
### `files`
Usage: `//@ files PATH ENTRIES`
Checks that the directory `PATH` contains exactly `ENTRIES`.
`ENTRIES` is a Python list of strings inside a quoted string,
as if it were to be parsed by `eval`.
(Note that the list is actually parsed by `shlex.split`,
so it cannot contain arbitrary Python expressions.)
Example: `//@ files "foo/bar" '["index.html", "sidebar-items.js"]'`
[^1]: Whitespace normalization means that all spans of consecutive whitespace are replaced with a single space. The files themselves are also whitespace-normalized.
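As a rough sketch of that rule (an approximation for illustration, not the exact htmldocck code), normalization can be written as:

```python
import re

def normalize_whitespace(text: str) -> str:
    # Collapse every run of consecutive whitespace into a single space.
    return re.sub(r"\s+", " ", text)

# Both PATTERN and the document text are normalized before comparison,
# so differing line breaks and indentation do not cause mismatches.
print(normalize_whitespace("pub fn  foo(\n    x: u32,\n) -> u32"))
# → pub fn foo( x: u32, ) -> u32
```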
## Limitations
`htmldocck.py` uses the XPath implementation from the Python standard library.
This leads to several limitations:
* All `XPATH` arguments must start with `//` due to a flaw in the implementation.
* Many XPath features (functions, axes, etc.) are not supported.
* Only well-formed HTML can be parsed (hopefully rustdoc doesn't output mismatched tags).
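To illustrate those limits (with a made-up, well-formed fragment standing in for rustdoc output), the standard library's `xml.etree.ElementTree` accepts only a restricted XPath dialect; evaluating a `count`-style check boils down to something like:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for a rustdoc-generated page.
html = (
    "<body>"
    "<a class='mod' href='a/index.html'>a</a>"
    "<a class='mod' href='b/index.html'>b</a>"
    "<a class='struct' href='struct.S.html'>S</a>"
    "</body>"
)

tree = ET.fromstring(html)
# ElementTree supports only simple location paths and attribute
# predicates; XPath functions and most axes are rejected outright.
mod_links = tree.findall(".//a[@class='mod']")
print(len(mod_links))  # → 2
```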


@ -58,12 +58,12 @@ does is call the `main()` that's in this crate's `lib.rs`, though.)
* If you want to copy those docs to a webserver, copy all of * If you want to copy those docs to a webserver, copy all of
`build/host/doc`, since that's where the CSS, JS, fonts, and landing `build/host/doc`, since that's where the CSS, JS, fonts, and landing
page are. page are.
* For frontend debugging, disable the `rust.docs-minification` option in [`config.toml`]. * For frontend debugging, disable the `rust.docs-minification` option in [`bootstrap.toml`].
* Use `./x test tests/rustdoc*` to run the tests using a stage1 * Use `./x test tests/rustdoc*` to run the tests using a stage1
rustdoc. rustdoc.
* See [Rustdoc internals] for more information about tests. * See [Rustdoc internals] for more information about tests.
[`config.toml`]: ./building/how-to-build-and-run.md [`bootstrap.toml`]: ./building/how-to-build-and-run.md
## Code structure ## Code structure
@ -77,29 +77,33 @@ does is call the `main()` that's in this crate's `lib.rs`, though.)
`doctest.rs`. `doctest.rs`.
* The Markdown renderer is loaded up in `html/markdown.rs`, including functions * The Markdown renderer is loaded up in `html/markdown.rs`, including functions
for extracting doctests from a given block of Markdown. for extracting doctests from a given block of Markdown.
* The tests on the structure of rustdoc HTML output are located in `tests/rustdoc`, where
they're handled by the test runner of bootstrap and the supplementary script
`src/etc/htmldocck.py`.
* Frontend CSS and JavaScript are stored in `html/static/`. * Frontend CSS and JavaScript are stored in `html/static/`.
## Tests ## Tests
* Tests on search engine and index are located in `tests/rustdoc-js` and `tests/rustdoc-js-std`.
  The format is specified
  [in the search guide](rustdoc-internals/search.md#testing-the-search-engine).
* Tests on the "UI" of rustdoc (the terminal output it produces when run) are in
  `tests/rustdoc-ui`
* Tests on the "GUI" of rustdoc (the HTML, JS, and CSS as rendered in a browser)
  are in `tests/rustdoc-gui`. These use a [NodeJS tool called
  browser-UI-test](https://github.com/GuillaumeGomez/browser-UI-test/) that uses
  puppeteer to run tests in a headless browser and check rendering and
  interactivity. For information on how to write this form of test,
  see [`tests/rustdoc-gui/README.md`][rustdoc-gui-readme]
  as well as [the description of the `.goml` format][goml-script]
* Additionally, JavaScript type annotations are written using [TypeScript-flavored JSDoc]
  comments and an external d.ts file. The code itself is plain, valid JavaScript; we only
  use tsc as a linter.
* The tests on the structure of rustdoc HTML output are located in `tests/rustdoc`,
  where they're handled by the test runner of bootstrap and
  the supplementary script `src/etc/htmldocck.py`.
  [These tests have several extra directives available to them](./rustdoc-internals/rustdoc-test-suite.md).

[TypeScript-flavored JSDoc]: https://www.typescriptlang.org/docs/handbook/jsdoc-supported-types.html
[rustdoc-gui-readme]: https://github.com/rust-lang/rust/blob/master/tests/rustdoc-gui/README.md
[goml-script]: https://github.com/GuillaumeGomez/browser-UI-test/blob/master/goml-script.md
## Constraints

View File

@@ -32,7 +32,7 @@ implementation:
* The sanitizer runtime libraries are part of the [compiler-rt] project, and
  [will be built][sanitizer-build] on [supported targets][sanitizer-targets]
  when enabled in `bootstrap.toml`:

  ```toml
  [build]
  sanitizers = true
  ```
@@ -80,7 +80,7 @@ Sanitizers are validated by code generation tests in

[`tests/ui/sanitizer/`][test-ui] directory.

Testing sanitizer functionality requires the sanitizer runtimes (built when
`sanitizer = true` in `bootstrap.toml`) and a target providing support for the
particular sanitizer. When a sanitizer is unsupported on a given target, sanitizer
tests will be ignored. This behaviour is controlled by compiletest
`needs-sanitizer-*` directives.

View File

@@ -1,4 +1,4 @@
# Serialization in rustc

rustc has to [serialize] and deserialize various data during compilation.
Specifically:

View File

@@ -33,7 +33,7 @@ For opaque types in the defining scope and in the implicit-negative coherence mo

always done in two steps. Outside of the defining scope `normalizes-to` for opaques always
returns `Err(NoSolution)`.

We start by trying to assign the expected type as a hidden type.
In the implicit-negative coherence mode, this currently always results in ambiguity without
interacting with the opaque types storage. We could instead allow 'defining' all opaque types,

View File

@@ -2,8 +2,7 @@

This chapter describes how trait solving works with the new WIP solver located in
[`rustc_trait_selection/solve`][solve]. Feel free to also look at the docs for
[the current solver](../traits/resolution.md) and [the chalk solver](../traits/chalk.md).
## Core concepts

View File

@@ -83,7 +83,7 @@ with your hand-written one, it will not share a [Symbol][Symbol]. This

technique prevents name collision during code generation and is the foundation
of Rust's [`macro`] hygiene.
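The hygiene property described here can be observed directly in stable Rust. A small self-contained sketch (ordinary user code, not compiler internals):

```rust
// A `let x` introduced inside the macro expansion carries its own
// hygiene context, so it cannot shadow or clobber the caller's `x`.
macro_rules! declare_inner_x {
    () => {
        let x = 10;
        let _ = x; // refers to the macro's own `x`
    };
}

fn main() {
    let x = 1;
    declare_inner_x!();
    // The caller's `x` is untouched by the expansion above.
    assert_eq!(x, 1);
    println!("hygiene preserved: x = {x}");
}
```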
## Step 2: Harness generation

Now that our tests are accessible from the root of our crate, we need to do
something with them: using [`rustc_ast`][ast], the compiler generates a module like so:
@@ -106,7 +106,7 @@ called [`test`][test] that is part of Rust core, that implements all of the

runtime for testing. [`test`][test]'s interface is unstable, so the only stable way
to interact with it is through the `#[test]` macro.
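Conceptually, the generated harness hands a list of test descriptors to a runner. A heavily simplified, hedged sketch in stable Rust (the real generated code uses the unstable `test` crate; the names below are made up):

```rust
// Stand-in for a user-written `#[test]` function.
fn check_addition() {
    assert_eq!(2 + 2, 4);
}

// Stand-in for the synthesized harness `main`: collect the test
// functions together with their names, then run each one in order.
fn main() {
    let tests: &[(&str, fn())] = &[("check_addition", check_addition)];
    for (name, test) in tests {
        test();
        println!("test {name} ... ok");
    }
}
```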
## Step 3: Test object generation

If you've written tests in Rust before, you may be familiar with some of the
optional attributes available on test functions. For example, a test can be

View File

@@ -175,6 +175,8 @@ See [compiletest directives] for a listing of directives.

- For `ignore-*`/`needs-*`/`only-*` directives, unless extremely obvious,
  provide a brief remark on why the directive is needed. E.g. `"//@ ignore-wasi
  (wasi codegens the main symbol differently)"`.
- When using `//@ ignore-auxiliary`, specify the corresponding main test files,
  e.g. ``//@ ignore-auxiliary (used by `./foo.rs`)``.
## FileCheck best practices

View File

@@ -133,29 +133,37 @@ There are several use-cases for try builds:

  Again, a working compiler build is needed for this, which can be produced by
  the [dist-x86_64-linux] CI job.
- Run a specific CI job (e.g. Windows tests) on a PR, to quickly test if it
  passes the test suite executed by that job.

You can select which CI jobs will be executed in the try build by adding lines
containing `try-job: <job pattern>` to the PR description. All such specified
jobs will be executed in the try build once the `@bors try` command is used on
the PR. If no try jobs are specified in this way, the jobs defined in the `try`
section of [`jobs.yml`] will be executed by default.

Each pattern can either be an exact name of a job or a glob pattern that matches multiple jobs,
for example `*msvc*` or `*-alt`. You can start at most 20 jobs in a single try build. When using
glob patterns, you might want to wrap them in backticks (`` ` ``) to avoid GitHub rendering
the pattern as Markdown.
> **Using `try-job` PR description directives**
>
> 1. Identify which set of try-jobs you would like to exercise. You can
>    find the name of the CI jobs in [`jobs.yml`].
>
> 2. Amend PR description to include a set of patterns (usually at the end
>    of the PR description), for example:
>
>    ```text
>    This PR fixes #123456.
>
>    try-job: x86_64-msvc
>    try-job: test-various
>    try-job: `*-alt`
>    ```
>
>    Each `try-job` pattern must be on its own line.
>
> 3. Run the prescribed try jobs with `@bors try`. As aforementioned, this
>    requires the user to either (1) have `try` permissions or (2) be delegated
@@ -172,6 +180,8 @@ their results can be seen [here](https://github.com/rust-lang-ci/rust/actions),

although usually you will be notified of the result by a comment made by bors on
the corresponding PR.

Note that if you start the default try job using `@bors try`, it will skip building several `dist` components and running post-optimization tests, to make the build duration shorter. If you want to execute the full build as it would happen before a merge, add an explicit `try-job` pattern with the name of the default try job (currently `dist-x86_64-linux`).

Multiple try builds can execute concurrently across different PRs.
<div class="warning">

@@ -427,7 +437,7 @@ To learn more about the dashboard, see the [Datadog CI docs].

## Determining the CI configuration

If you want to determine which `bootstrap.toml` settings are used in CI for a
particular job, it is probably easiest to just look at the build log. To do
this:

View File

@@ -0,0 +1,3 @@
# Cranelift codegen backend tests

TODO: please add some more information to this page.

View File

@@ -0,0 +1,3 @@
# GCC codegen backend tests

TODO: please add some more information to this page.

View File

@@ -0,0 +1,13 @@
# Codegen backend testing

See also the [Code generation](../../backend/codegen.md) chapter.

In addition to the primary LLVM codegen backend, the rust-lang/rust CI also runs tests of the [cranelift][cg_clif] and [GCC][cg_gcc] codegen backends in certain test jobs.

For more details on the tests involved, see:

- [Cranelift codegen backend tests](./cg_clif.md)
- [GCC codegen backend tests](./cg_gcc.md)

[cg_clif]: https://github.com/rust-lang/rustc_codegen_cranelift
[cg_gcc]: https://github.com/rust-lang/rustc_codegen_gcc

View File

@@ -525,10 +525,10 @@ data into a human-readable code coverage report.

Instrumented binaries need to be linked against the LLVM profiler runtime, so
`coverage-run` tests are **automatically skipped** unless the profiler runtime
is enabled in `bootstrap.toml`:

```toml
# bootstrap.toml
[build]
profiler = true
```

View File

@@ -6,7 +6,8 @@

FIXME(jieyouxu) completely revise this chapter.
-->

Directives are special comments that tell compiletest how to build and interpret a test.
They may also appear in `rmake.rs` [run-make tests](compiletest.md#run-make-tests).

They are normally put after the short comment that explains the point of this
test. Compiletest test suites use `//@` to signal that a comment is a directive.
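As a hedged illustration of the layout, a test file's directives sit in `//@` comments after the explanatory comment and before the Rust source (the directives shown are real, but this test file is made up):

```rust
// Checks that basic integer addition works (hypothetical example test).
//@ edition: 2021
//@ run-pass

fn main() {
    // Directives are plain comments to rustc; only compiletest
    // interprets the `//@` lines above.
    assert_eq!(1 + 1, 2);
}
```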
@@ -100,6 +101,7 @@ for more details.

| `normalize-stdout` | Normalize actual stdout with a rule `"<raw>" -> "<normalized>"` before comparing against snapshot | `ui`, `incremental` | `"<RAW>" -> "<NORMALIZED>"`, `<RAW>`/`<NORMALIZED>` is regex capture and replace syntax |
| `dont-check-compiler-stderr` | Don't check actual compiler stderr vs stderr snapshot | `ui` | N/A |
| `dont-check-compiler-stdout` | Don't check actual compiler stdout vs stdout snapshot | `ui` | N/A |
| `dont-require-annotations` | Don't require line annotations for the given diagnostic kind (`//~ KIND`) to be exhaustive | `ui`, `incremental` | `ERROR`, `WARN`, `NOTE`, `HELP`, `SUGGESTION` |
| `run-rustfix` | Apply all suggestions via `rustfix`, snapshot fixed output, and check fixed output builds | `ui` | N/A |
| `rustfix-only-machine-applicable` | `run-rustfix` but only machine-applicable suggestions | `ui` | N/A |
| `exec-env` | Env var to set when executing a test | `ui`, `crashes` | `<KEY>=<VALUE>` |
@@ -119,10 +121,12 @@ for more details.

These directives are used to ignore the test in some situations, which
means the test won't be compiled or run.

* `ignore-X` where `X` is a target detail or other criteria on which to ignore the test (see below)
* `only-X` is like `ignore-X`, but will *only* run the test on that target or
  stage
* `ignore-auxiliary` is intended for files that *participate* in one or more other
  main test files but that `compiletest` should not try to build on their own.
  Please backlink to the main test that actually uses the auxiliary file.
* `ignore-test` always ignores the test. This can be used to temporarily disable
  a test if it is currently not working, but you want to keep it in tree to
  re-enable it later.
@@ -139,8 +143,8 @@ Some examples of `X` in `ignore-X` or `only-X`:

  matches that target as well as the emscripten targets.
- Pointer width: `32bit`, `64bit`
- Endianness: `endian-big`
- Binary format: `elf`
- Stage: `stage0`, `stage1`, `stage2`
- Channel: `stable`, `beta`
- When cross compiling: `cross-compile`
- When [remote testing] is used: `remote`
@@ -161,9 +165,9 @@ settings:

  stable support for `asm!`
- `needs-profiler-runtime` — ignores the test if the profiler runtime was not
  enabled for the target
  (`build.profiler = true` in rustc's `bootstrap.toml`)
- `needs-sanitizer-support` — ignores if the sanitizer support was not enabled
  for the target (`sanitizers = true` in rustc's `bootstrap.toml`)
- `needs-sanitizer-{address,hwaddress,leak,memory,thread}` — ignores if the
  corresponding sanitizer is not enabled for the target (AddressSanitizer,
  hardware-assisted AddressSanitizer, LeakSanitizer, MemorySanitizer or
@@ -173,7 +177,7 @@ settings:

  flag, or running on fuchsia.
- `needs-unwind` — ignores if the target does not support unwinding
- `needs-rust-lld` — ignores if the rust lld support is not enabled (`rust.lld =
  true` in `bootstrap.toml`)
- `needs-threads` — ignores if the target does not have threading support
- `needs-subprocess` — ignores if the target does not have subprocess support
- `needs-symlink` — ignores if the target does not support symlinks. This can be
@@ -191,12 +195,16 @@ settings:

  specified atomic widths, e.g. the test with `//@ needs-target-has-atomic: 8,
  16, ptr` will only run if it supports the comma-separated list of atomic
  widths.
- `needs-dynamic-linking` — ignores if target does not support dynamic linking
  (which is orthogonal to it being unable to create `dylib` and `cdylib` crate types)
- `needs-crate-type` — ignores if target platform does not support one or more
  of the comma-delimited list of specified crate types. For example,
  `//@ needs-crate-type: cdylib, proc-macro` will cause the test to be ignored
  on `wasm32-unknown-unknown` target because the target does not support the
  `proc-macro` crate type.

The following directives will check LLVM support:

- `exact-llvm-major-version: 19` — ignores if the llvm major version does not
  match the specified llvm major version.
- `min-llvm-version: 13.0` — ignored if the LLVM version is less than the given
@@ -230,14 +238,14 @@ ignoring debuggers.

### Affecting how tests are built

| Directive | Explanation | Supported test suites | Possible values |
|---------------------|----------------------------------------------------------------------------------------------|---------------------------|--------------------------------------------------------------------------------------------|
| `compile-flags` | Flags passed to `rustc` when building the test or aux file | All except for `run-make` | Any valid `rustc` flags, e.g. `-Awarnings -Dfoo`. Cannot be `-Cincremental` or `--edition` |
| `edition` | The edition used to build the test | All except for `run-make` | Any valid `--edition` value |
| `rustc-env` | Env var to set when running `rustc` | All except for `run-make` | `<KEY>=<VALUE>` |
| `unset-rustc-env` | Env var to unset when running `rustc` | All except for `run-make` | Any env var name |
| `incremental` | Proper incremental support for tests outside of incremental test suite | `ui`, `crashes` | N/A |
| `no-prefer-dynamic` | Don't use `-C prefer-dynamic`, don't build as a dylib via a `--crate-type=dylib` preset flag | `ui`, `crashes` | N/A |

<div class="warning">
Tests (outside of `run-make`) that want to use incremental tests not in the

View File

@@ -21,7 +21,7 @@ The [`src/ci/docker/run.sh`] script is used to build a specific Docker image, ru

build Rust within the image, and either run tests or prepare a set of archives designed for distribution. The script will mount your local Rust source tree in read-only mode, and an `obj` directory in read-write mode. All the compiler artifacts will be stored in the `obj` directory. The shell will start out in the `obj` directory. From there, it will execute `../src/ci/run.sh` which starts the build as defined by the Docker image.

You can run `src/ci/docker/run.sh <image-name>` directly. A few important notes regarding the `run.sh` script:

- When executed on CI, the script expects that all submodules are checked out. If some submodule that is accessed by the job is not available, the build will result in an error. You should thus make sure that you have all required submodules checked out locally. You can either do that manually through git, or set `submodules = true` in your `bootstrap.toml` and run a command such as `x build` to let bootstrap download the most important submodules (this might not be enough for the given CI job that you are trying to execute though).
- `<image-name>` corresponds to a single directory located in one of the `src/ci/docker/host-*` directories. Note that image name does not necessarily correspond to a job name, as some jobs execute the same image, but with different environment variables or Docker build arguments (this is a part of the complexity that makes it difficult to run CI jobs locally).
- If you are executing a "dist" job (job beginning with `dist-`), you should set the `DEPLOY=1` environment variable.
- If you are executing an "alternative dist" job (job beginning with `dist-` and ending with `-alt`), you should set the `DEPLOY_ALT=1` environment variable.

View File

@@ -4,6 +4,14 @@

million lines of Rust code.[^loc] It has caught a large number of [regressions]
in the past and was subsequently included in CI.

## What to do if the Fuchsia job breaks?

Please contact the [fuchsia][fuchsia-ping] ping group and ask them for help.

```text
@rustbot ping fuchsia
```

## Building Fuchsia in CI

Fuchsia builds as part of the suite of bors tests that run before a pull request
@@ -32,7 +40,7 @@ using your local Rust toolchain.

src/ci/docker/run.sh x86_64-fuchsia
```

See the [Testing with Docker](../docker.md) chapter for more details on how to run
and debug jobs with Docker.

Note that a Fuchsia checkout is *large*: as of this writing, a checkout and
@@ -162,6 +170,7 @@ rustc book][platform-support].

[`public_configs`]: https://gn.googlesource.com/gn/+/main/docs/reference.md#var_public_configs
[`//build/config:compiler`]: https://cs.opensource.google/fuchsia/fuchsia/+/main:build/config/BUILD.gn;l=121;drc=c26c473bef93b33117ae417893118907a026fec7
[build system]: https://fuchsia.dev/fuchsia-src/development/build/build_system
[fuchsia-ping]: ../../notification-groups/fuchsia.md

[^loc]: As of June 2024, Fuchsia had about 2 million lines of first-party Rust
  code and a roughly equal amount of third-party code, as counted by tokei

View File

@@ -3,26 +3,7 @@

[Rust for Linux](https://rust-for-linux.com/) (RfL) is an effort for adding
support for the Rust programming language into the Linux kernel.

## What to do if the Rust for Linux job breaks?

If a PR breaks the Rust for Linux CI job, then:

@@ -48,4 +29,23 @@ ping group to ask for help:

@rustbot ping rfl
```

## Building Rust for Linux in CI

Rust for Linux builds as part of the suite of bors tests that run before a pull
request is merged.

The workflow builds a stage1 sysroot of the Rust compiler, downloads the Linux
kernel, and tries to compile several Rust for Linux drivers and examples using
this sysroot. RfL uses several unstable compiler/language features, therefore
this workflow notifies us if a given compiler change would break it.

If you are worried that a pull request might break the Rust for Linux builder
and want to test it out before submitting it to the bors queue, simply add this
line to your PR description:

> try-job: x86_64-rust-for-linux

Then when you `@bors try` it will pick the job that builds the Rust for Linux
integration.

[rfl-ping]: ../../notification-groups/rust-for-linux.md

View File

@ -15,14 +15,16 @@ CI. See the [Crater chapter](crater.md) for more details.
`cargotest` is a small tool which runs `cargo test` on a few sample projects `cargotest` is a small tool which runs `cargo test` on a few sample projects
(such as `servo`, `ripgrep`, `tokei`, etc.). This runs as part of CI and ensures (such as `servo`, `ripgrep`, `tokei`, etc.). This runs as part of CI and ensures
there aren't any significant regressions. there aren't any significant regressions:
> Example: `./x test src/tools/cargotest` ```console
./x test src/tools/cargotest
```
### Large OSS Project builders ### Large OSS Project builders
We have CI jobs that build large open-source Rust projects that are used as We have CI jobs that build large open-source Rust projects that are used as
regression tests in CI. Our integration jobs build the following projects: regression tests in CI. Our integration jobs build the following projects:
- [Fuchsia](fuchsia.md) - [Fuchsia](./ecosystem-test-jobs/fuchsia.md)
- [Rust for Linux](rust-for-linux.md) - [Rust for Linux](./ecosystem-test-jobs/rust-for-linux.md)
directory, and `x` will essentially run `cargo test` on that package.
Examples:

| Command                                   | Description                           |
|-------------------------------------------|---------------------------------------|
| `./x test library/std`                    | Runs tests on `std` only              |
| `./x test library/core`                   | Runs tests on `core` only             |
| `./x test compiler/rustc_data_structures` | Runs tests on `rustc_data_structures` |
Examples:

| Command                 | Description                                                        |
|-------------------------|--------------------------------------------------------------------|
| `./x fmt --check`       | Checks formatting and exits with an error if formatting is needed. |
| `./x fmt`               | Runs rustfmt across the entire codebase.                           |
| `./x test tidy --bless` | First runs rustfmt to format the codebase, then runs tidy checks.  |
A separate infrastructure is used for testing and tracking performance of the
compiler. See the [Performance testing chapter](perf.md) for more details.

### Codegen backend testing

See [Codegen backend testing](./codegen-backend-tests/intro.md).

## Miscellaneous information

There is some other useful testing-related information at [Misc info](misc.md).
ui/codegen/assembly test suites. It provides `core` stubs for tests that need to
build for cross-compiled targets but do not need/want to run.
<div class="warning">

Please note that [`minicore`] is only intended for `core` items, and explicitly
**not** `std` or `alloc` items because `core` items are applicable to a wider
range of tests.

</div>
A test can use [`minicore`] by specifying the `//@ add-core-stubs` directive.
Then, mark the test with `#![feature(no_core)]` + `#![no_std]` + `#![no_core]`.
Due to Edition 2015 extern prelude rules, you will probably need to declare
`minicore` as an extern crate.
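For illustration, the preamble of such a test might look like the following sketch (the target and LLVM component named here are placeholders for whatever the test actually exercises):

```rust,ignore
//@ add-core-stubs
//@ compile-flags: --target x86_64-unknown-linux-gnu
//@ needs-llvm-components: x86

#![feature(no_core)]
#![no_std]
#![no_core]
#![crate_type = "lib"]

extern crate minicore;
use minicore::*;
```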
## Implied compiler flags
Due to the `no_std` + `no_core` nature of these tests, `//@ add-core-stubs`
implies and requires that the test will be built with `-C panic=abort`.
**Unwinding panics are not supported.**
Tests will also be built with `-C force-unwind-tables=yes` to preserve CFI
directives in assembly tests.
TL;DR: `//@ add-core-stubs` implies two compiler flags:
1. `-C panic=abort`
2. `-C force-unwind-tables=yes`
## Adding more `core` stubs
If you find a `core` item to be missing from the [`minicore`] stub, consider
adding it to the test auxiliary if it's likely to be used or is already needed
by more than one test.
## Example codegen test that uses `minicore`
a subset of test collections, and merge queue CI will exercise all of the test
collection.

</div>

```text
./x test
```

The test results are cached and previously successful tests are `ignored` during
testing. The stdout/stderr contents as well as a timestamp file for every test
can be found under `build/<target-tuple>/test/` for the given
`<target-tuple>`. To force-rerun a test (e.g. in case the test runner fails to
notice a change) you can use the `--force-rerun` CLI option.
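For example, to force a previously passing test to run again (the path here is just a placeholder):

```text
./x test tests/ui/issue-1234.rs --force-rerun
```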
> **Note on requirements of external dependencies**
tests. For example, a good "smoke test" that can be used after modifying rustc
to see if things are generally working correctly would be to exercise the `ui`
test suite ([`tests/ui`]):

```text
./x test tests/ui
```

Of course, the choice of test suites is somewhat arbitrary, and may not suit the
task you are doing. For example, if you are hacking on debuginfo, you may be
better off with the debuginfo test suite:

```text
./x test tests/debuginfo
```

If you only need to test a specific subdirectory of tests for any given test
suite, you can pass that directory as a filter to `./x test`:

```text
./x test tests/ui/const-generics
```

Likewise, you can test a single file by passing its path:

```text
./x test tests/ui/const-generics/const-test.rs
```
have to use the `--test-args` argument as described
[below](#running-an-individual-test).

```text
./x test src/tools/miri --test-args tests/fail/uninit/padding-enum.rs
```
### Run only the tidy script

```text
./x test tidy
```

### Run tests on the standard library

```text
./x test --stage 0 library/std
```
crates, you have to specify those explicitly.
### Run the tidy script and tests on the standard library

```text
./x test --stage 0 tidy library/std
```

### Run tests on the standard library using a stage 1 compiler

```text
./x test --stage 1 library/std
```

By listing which test suites you want to run,
you avoid having to run tests for components you did not change at all.

<div class="warning">

Note that bors only runs the tests with the full stage 2 build; therefore, while
the tests **usually** work fine with stage 1, there are some limitations.
### Run all tests using a stage 2 compiler

```text
./x test --stage 2
```
You almost never need to do this; CI will run these tests for you.
You may want to run unit tests on a specific file with the following:

```text
./x test compiler/rustc_data_structures/src/thin_vec/tests.rs
```

But unfortunately, that is impossible. You should invoke the following instead:

```text
./x test compiler/rustc_data_structures/ --test-args thin_vec
```
often the test they are trying to fix. As mentioned earlier, you may pass the
full file path to achieve this, or alternatively one may invoke `x` with the
`--test-args` option:

```text
./x test tests/ui --test-args issue-1234
```
additional arguments to the compiler when building the tests.
## Editing and updating the reference files

If you have changed the compiler's output intentionally, or you are making a new
test, you can pass `--bless` to the test subcommand.

As an example,
if some tests in `tests/ui` are failing, you can run this command:

```text
./x test tests/ui --bless
```

It automatically adjusts the `.stderr`, `.stdout`, or `.fixed` files of all `tests/ui` tests.
Of course you can also target just specific tests with the `--test-args your_test_name` flag,
just like when running the tests without the `--bless` flag.
## Configuring test running

There are a few options for running tests:

* `bootstrap.toml` has the `rust.verbose-tests` option. If `false`, each test will
  print a single dot (the default). If `true`, the name of every test will be
  printed. This is equivalent to the `--quiet` option in the [Rust test
  harness](https://doc.rust-lang.org/rustc/tests/).
* The environment variable `RUST_TEST_THREADS` can be set to the number of
  concurrent threads to use for testing.
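As a sketch, enabling verbose test output would look like this in `bootstrap.toml`:

```toml
[rust]
verbose-tests = true
```

The thread count can then be varied per invocation, e.g. `RUST_TEST_THREADS=4 ./x test tests/ui`.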
When `--pass $mode` is passed, these tests will be forced to run under the given
`$mode` unless the directive `//@ ignore-pass` exists in the test file. For
example, you can run all the tests in `tests/ui` as `check-pass`:

```text
./x test tests/ui --pass check
```
first look for expected output in `foo.polonius.stderr`, falling back to the
usual `foo.stderr` if not found. The following will run the UI test suite in
Polonius mode:

```text
./x test tests/ui --compare-mode=polonius
```
just `.rs` files, so after [creating a rustup
toolchain](../building/how-to-build-and-run.md#creating-a-rustup-toolchain), you
can do something like:

```text
rustc +stage1 tests/ui/issue-1234.rs
```
execution* so be careful where it is used.
To do this, first build `remote-test-server` for the remote machine, e.g. for
RISC-V:

```text
./x build src/tools/remote-test-server --target riscv64gc-unknown-linux-gnu
```
On the remote machine, run the `remote-test-server` with the `--bind
0.0.0.0:12345` flag (and optionally `-v` for verbose output). Output should look
like this:

```text
$ ./remote-test-server -v --bind 0.0.0.0:12345
starting test server
listening on 0.0.0.0:12345!
```
restrictive IP address when binding.
You can test if the `remote-test-server` is working by connecting to it and
sending `ping\n`. It should reply `pong`:

```text
$ nc $REMOTE_IP 12345
ping
pong
```
To run tests using the remote runner, set the `TEST_DEVICE_ADDR` environment
variable then use `x` as usual. For example, to run `ui` tests for a RISC-V
machine with the IP address `1.2.3.4` use:

```text
export TEST_DEVICE_ADDR="1.2.3.4:12345"
./x test tests/ui --target riscv64gc-unknown-linux-gnu
```
If `remote-test-server` was run with the verbose flag, output on the test
machine may look something like:

```text
[...]
run "/tmp/work/test1007/a"
run "/tmp/work/test1008/a"
```
coordinate running tests (see [src/bootstrap/src/core/build_steps/test.rs]).
The first thing to know is that it only supports Linux x86_64 at the moment. We
will extend its support later on.

You need to update the `codegen-backends` value in your `bootstrap.toml` file in
the `[rust]` section and add "gcc" to the array:

```toml
codegen-backends = ["llvm", "gcc"]
```
Then you need to install libgccjit 12. For example with `apt`:

```text
apt install libgccjit-12-dev
```
Now you can run the following command:

```text
./x test compiler/rustc_codegen_gcc/
```
If it cannot find the `.so` library (if you installed it with `apt` for example), you
need to pass the library file path with `LIBRARY_PATH`:

```text
LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/12/ ./x test compiler/rustc_codegen_gcc/
```
If you encounter bugs or problems, don't hesitate to open issues on the
several ways to match the message with the line (see the examples below):
* `~|`: Associates the error level and message with the *same* line as the
  *previous comment*. This is more convenient than using multiple carets when
  there are multiple messages associated with the same line.
* `~v`: Associates the error level and message with the *next* error
annotation line. Each symbol (`v`) that you add adds a line to this, so `~vvv`
is three lines below the error annotation line.
* `~?`: Used to match error levels and messages with errors not having line
information. These can be placed on any line in the test file, but are
conventionally placed at the end.
Example:
    //~| ERROR this pattern has 1 field, but the corresponding tuple struct has 3 fields [E0023]
```
#### Positioned above error line
Use the `//~v` idiom, with the number of `v`s indicating the number of lines
below. This is typically used in lexer or parser tests matching errors such as an
unclosed delimiter or an unclosed literal at the end of a file.
```rust,ignore
// ignore-tidy-trailing-newlines
//~v ERROR this file contains an unclosed delimiter
fn main((ؼ
```
#### Error without line information
Use `//~?` to match an error without line information.
`//~?` is precise and will not match errors whose line information is available.
Prefer it over `error-pattern`, which is imprecise and non-exhaustive.
```rust,ignore
//@ compile-flags: --print yyyy
//~? ERROR unknown print request: `yyyy`
```
### `error-pattern`

The `error-pattern` [directive](directives.md) can be used for runtime messages, which don't
have a specific span, or in exceptional cases, for compile time messages.

Let's think about this test:
We want to ensure this shows "index out of bounds", but we cannot use the `ERROR`
annotation since the runtime error doesn't have any span. Then it's time to use the
`error-pattern` directive:
```rust,ignore
}
```
Use of `error-pattern` is not recommended in general.

For strict testing of compile time output, try to use the line annotations `//~` as much as
possible, including `//~?` annotations for diagnostics without spans.

If the compile time output is target dependent or too verbose, use the directive
`//@ dont-require-annotations: <diagnostic-kind>` to make the line annotation checking
non-exhaustive.
Some of the compiler messages can stay uncovered by annotations in this mode.
For checking runtime output, `//@ check-run-results` may be preferable.
Only use `error-pattern` if none of the above works.
Line annotations `//~` are still checked in tests using `error-pattern`.
In exceptional cases, use `//@ compile-flags: --error-format=human` to opt out of these checks.
### Diagnostic kinds (error levels)

The diagnostic kinds that you can have are:

- `ERROR`
- `WARN` (or `WARNING`)
- `NOTE`
- `HELP`
- `SUGGESTION`
The `SUGGESTION` kind is used for specifying what the expected replacement text
should be for a diagnostic suggestion.
`ERROR` and `WARN` kinds are required to be exhaustively covered by line annotations
`//~` by default.
Other kinds only need to be line-annotated if at least one annotation of that kind appears
in the test file. For example, one `//~ NOTE` will also require all other `//~ NOTE`s in the file
to be written out explicitly.
Use the directive `//@ dont-require-annotations` to opt out of exhaustive annotations.
E.g. use `//@ dont-require-annotations: NOTE` to annotate notes selectively.
Avoid using this directive for `ERROR`s and `WARN`ings, unless there's a serious reason, like
target-dependent compiler output.
Missing diagnostic kinds (`//~ message`) are currently accepted, but are being phased out.
They will match any compiler output kind, but will not force exhaustive annotations for that kind.
Prefer an explicit kind plus `//@ dont-require-annotations` to achieve the same effect.
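As an illustrative sketch (the code and messages here are made up for the example, not taken from a real test), a test combining exhaustive `ERROR` annotations with selective `NOTE` checking might look like:

```rust,ignore
//@ dont-require-annotations: NOTE

fn main() {
    let x: u32 = "hello"; //~ ERROR mismatched types
    //~| NOTE expected `u32`, found `&str`
}
```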
UI tests use the `-A unused` flag by default to ignore all unused warnings, as
unused warnings are usually not the focus of a test. However, simple code
samples often have unused warnings. If the test is specifically testing an
would be a `.mir.stderr` and `.thir.stderr` file with the different outputs of
the different revisions.

> Note: cfg revisions also work inside the source code with `#[cfg]` attributes.
>
> By convention, the `FALSE` cfg is used to have an always-false config.
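For instance, a revisioned test along the lines described above might be set up like this sketch (the revision names and flag are placeholders, not a real test):

```rust,ignore
//@ revisions: mir thir
//@ [thir] compile-flags: -Z some-flag

// Compiled once per revision; expected output lives in
// `<test>.mir.stderr` and `<test>.thir.stderr` respectively.
#[cfg(FALSE)]
fn never_built() {}
```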
## Controlling pass/fail expectations
reasons, including:
can alert the developer so they know that the associated issue has been fixed
and can possibly be closed.
This directive takes comma-separated issue numbers as arguments, or `"unknown"`:
- `//@ known-bug: #123, #456` (when the issues are on rust-lang/rust)
- `//@ known-bug: rust-lang/chalk#123456`
(allows arbitrary text before the `#`, which is useful when the issue is on another repo)
- `//@ known-bug: unknown`
  (when there is no known issue yet; preferably open one if it does not already exist)
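A sketch of such a test's header (the issue number is a placeholder):

```rust,ignore
//@ known-bug: #123456

// Minimal reproduction goes here; no error annotations,
// but normal directives and stderr files are still included.
fn main() {}
```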
Do not include [error annotations](#error-annotations) in a test with
`known-bug`. The test should still include other normal directives and
stdout/stderr files.
with "user-facing" Rust alone. Indeed, one could say that this slightly abuses
the term "UI" (*user* interface) and turns such UI tests from black-box tests
into white-box ones. Use them carefully and sparingly.

[compiler debugging]: ../compiler-debugging.md#rustc_-test-attributes
## UI test mode preset lint levels
By default, test suites under UI test mode (`tests/ui`, `tests/ui-fulldeps`,
but not `tests/rustdoc-ui`) will specify
- `-A unused`
- `-A internal_features`
These lint levels are applied only if:

- The UI test's pass mode is below `run` (i.e. check or build).
- No compare modes are specified.

They are preset because these lints can be very noisy in UI tests.
You can override them with `compile-flags` lint level flags or
in-source lint level attributes as required.
Note that the `rustfix` version will *not* have `-A unused` passed,
meaning that you may have to `#[allow(unused)]` to suppress `unused`
lints on the rustfix'd file (because we might be testing rustfix
on `unused` lints themselves).
# Lexing and parsing

The very first thing the compiler does is take the program (in UTF-8 Unicode text)
and turn it into a data format the compiler can work with more conveniently than strings.
Note that while parsing, we may encounter macro definitions or invocations.
We set these aside to be expanded (see [Macro Expansion](./macro-expansion.md)).
Expansion itself may require parsing the output of a macro, which may reveal more macros to be expanded, and so on.

## More on lexical analysis

Code for lexical analysis is split between two crates:
While calls to `error!`, `warn!` and `info!` are included in every build of the compiler,
calls to `debug!` and `trace!` are only included in the program if
`debug-logging=true` is turned on in `bootstrap.toml` (it is
turned off by default), so if you don't see `DEBUG` logs, especially
if you run the compiler with `RUSTC_LOG=rustc rustc some.rs` and only see
`INFO` logs, make sure that `debug-logging=true` is turned on in your
`bootstrap.toml`.
## Logging etiquette and conventions
to be pretty clearly safe and also still retains a very high hit rate

**TODO**: it looks like `pick_candidate_cache` no longer exists. In
general, is this section still accurate at all?

[`ParamEnv`]: ../typing_parameter_envs.html
[`tcx`]: ../ty.html
[#18290]: https://github.com/rust-lang/rust/issues/18290
[#22019]: https://github.com/rust-lang/rust/issues/22019
in that list. If so, it is considered satisfied. More precisely, we
want to check whether there is a where-clause obligation that is for
the same trait (or some subtrait) and which can match against the obligation.

[parameter environment]: ../typing_parameter_envs.html

Consider this simple example:
Here is a summary:
| ----------- | ----------- |
| Describe the *syntax* of a type: what the user wrote (with some desugaring). | Describe the *semantics* of a type: the meaning of what the user wrote. |
| Each `rustc_hir::Ty` has its own spans corresponding to the appropriate place in the program. | Doesn't correspond to a single place in the user's program. |
| `rustc_hir::Ty` has generics and lifetimes; however, some of those lifetimes are special markers like [`LifetimeKind::Implicit`][implicit]. | `ty::Ty` has the full type, including generics and lifetimes, even if the user left them out. |
| `fn foo(x: u32) → u32 { }` - Two `rustc_hir::Ty` representing each usage of `u32`, each has its own `Span`s, and `rustc_hir::Ty` doesn't tell us that both are the same type. | `fn foo(x: u32) → u32 { }` - One `ty::Ty` for all instances of `u32` throughout the program, and `ty::Ty` tells us that both usages of `u32` mean the same type. |
| `fn foo(x: &u32) -> &u32)` - Two `rustc_hir::Ty` again. Lifetimes for the references show up in the `rustc_hir::Ty`s using a special marker, [`LifetimeKind::Implicit`][implicit]. | `fn foo(x: &u32) -> &u32)` - A single `ty::Ty`. The `ty::Ty` has the hidden lifetime param. |

[implicit]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.LifetimeKind.html#variant.Implicit
**Order** **Order**
@ -323,4 +323,4 @@ When looking at the debug output of `Ty` or simply talking about different types
- Generic parameters: `{name}/#{index}` e.g. `T/#0`, where `index` corresponds to its position in the list of generic parameters - Generic parameters: `{name}/#{index}` e.g. `T/#0`, where `index` corresponds to its position in the list of generic parameters
- Inference variables: `?{id}` e.g. `?x`/`?0`, where `id` identifies the inference variable - Inference variables: `?{id}` e.g. `?x`/`?0`, where `id` identifies the inference variable
- Variables from binders: `^{binder}_{index}` e.g. `^0_x`/`^0_2`, where `binder` and `index` identify which variable from which binder is being referred to - Variables from binders: `^{binder}_{index}` e.g. `^0_x`/`^0_2`, where `binder` and `index` identify which variable from which binder is being referred to
- Placeholders: `!{id}` or `!{id}_{universe}` e.g. `!x`/`!0`/`!x_2`/`!0_2`, representing some unique type in the specified universe. The universe is often elided when it is `0` - Placeholders: `!{id}` or `!{id}_{universe}` e.g. `!x`/`!0`/`!x_2`/`!0_2`, representing some unique type in the specified universe. The universe is often elided when it is `0`
@ -40,7 +40,7 @@ We did not always explicitly track the set of bound vars introduced by each `Bin
``` ```
Binder( Binder(
fn(&'^1_0 &'^1 T/#0), fn(&'^1_0 &'^1 T/#0),
&[BoundVariarbleKind::Region(...)], &[BoundVariableKind::Region(...)],
) )
``` ```
This would cause all kinds of issues as the region `'^1_0` refers to a binder at a higher level than the outermost binder i.e. it is an escaping bound var. The `'^1` region (also writeable as `'^0_1`) is also ill formed as the binder it refers to does not introduce a second parameter. Modern day rustc will ICE when constructing this binder due to both of those regions, in the past we would have simply allowed this to work and then ran into issues in other parts of the codebase. This would cause all kinds of issues as the region `'^1_0` refers to a binder at a higher level than the outermost binder i.e. it is an escaping bound var. The `'^1` region (also writeable as `'^0_1`) is also ill formed as the binder it refers to does not introduce a second parameter. Modern day rustc will ICE when constructing this binder due to both of those regions, in the past we would have simply allowed this to work and then ran into issues in other parts of the codebase.
@ -0,0 +1,206 @@
# Typing/Parameter Environments
<!-- toc -->
## Typing Environments
When interacting with the type system there are a few pieces of context that can affect the results of trait solving: the set of in-scope where clauses, and the phase of the compiler in which type system operations are being performed (the [`ParamEnv`][penv] and [`TypingMode`][tmode] types respectively).
When an environment to perform type system operations in has not yet been created, the [`TypingEnv`][tenv] can be used to bundle all of the external context required into a single type.
Once a context to perform type system operations in has been created (e.g. an [`ObligationCtxt`][ocx] or [`FnCtxt`][fnctxt]) a `TypingEnv` is typically not stored anywhere as only the `TypingMode` is a property of the whole environment, whereas different `ParamEnv`s can be used on a per-goal basis.
[ocx]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/struct.ObligationCtxt.html
[fnctxt]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn_ctxt/struct.FnCtxt.html
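Conceptually, a `TypingEnv` is nothing more than the pairing of these two pieces of context. The following is a simplified, self-contained sketch of that bundling idea only; the field and variant names mirror the real rustc types but the real ones live in `rustc_middle::ty`, carry lifetimes, and are interned:

```rust
// Sketch of the "bundle mode + env" idea; not the real rustc definitions.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum TypingMode {
    Coherence,
    Analysis,
    PostAnalysis,
}

#[derive(Debug, Clone, PartialEq)]
struct ParamEnv {
    // In-scope where clauses, e.g. ["T: Sized", "T: Clone"].
    caller_bounds: Vec<&'static str>,
}

#[derive(Debug, Clone, PartialEq)]
struct TypingEnv {
    typing_mode: TypingMode,
    param_env: ParamEnv,
}

// Analogous to `TypingEnv::fully_monomorphized()`: the post-analysis
// phase paired with an empty set of caller bounds.
fn fully_monomorphized() -> TypingEnv {
    TypingEnv {
        typing_mode: TypingMode::PostAnalysis,
        param_env: ParamEnv { caller_bounds: vec![] },
    }
}

fn main() {
    let env = fully_monomorphized();
    assert_eq!(env.typing_mode, TypingMode::PostAnalysis);
    assert!(env.param_env.caller_bounds.is_empty());
}
```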
## Parameter Environments
### What is a `ParamEnv`
The [`ParamEnv`][penv] is a list of in-scope where clauses; it typically corresponds to a specific item's where clauses. Some clauses are not explicitly written but are instead implicitly added in the [`predicates_of`][predicates_of] query, such as `ConstArgHasType` or (some) implied bounds.
In most cases `ParamEnv`s are initially created via the [`param_env` query][query] which returns a `ParamEnv` derived from the provided item's where clauses. A `ParamEnv` can also be created with arbitrary sets of clauses that are not derived from a specific item, such as in [`compare_method_predicate_entailment`][method_pred_entailment] where we create a hybrid `ParamEnv` consisting of the impl's where clauses and the trait definition's function's where clauses.
---
If we have a function such as:
```rust
// `foo` would have a `ParamEnv` of:
// `[T: Sized, T: Trait, <T as Trait>::Assoc: Clone]`
fn foo<T: Trait>()
where
<T as Trait>::Assoc: Clone,
{}
```
If we were conceptually inside of `foo` (for example, type-checking or linting it) we would use this `ParamEnv` everywhere that we interact with the type system. This would allow things such as normalization (TODO: write a chapter about normalization and link it), evaluating generic constants, and proving where clauses/goals to rely on `T` being sized, implementing `Trait`, etc.
A more concrete example:
```rust
// `foo` would have a `ParamEnv` of:
// `[T: Sized, T: Clone]`
fn foo<T: Clone>(a: T) {
// when typechecking `foo` we require all the where clauses on `requires_clone`
// to hold in order for it to be legal to call. This means we have to
// prove `T: Clone`. As we are type checking `foo` we use `foo`'s
// environment when trying to check that `T: Clone` holds.
//
// Trying to prove `T: Clone` with a `ParamEnv` of `[T: Sized, T: Clone]`
// will trivially succeed as the bound we want to prove is in our environment.
requires_clone(a);
}
```
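The snippet above leaves `requires_clone` undefined; a minimal definition consistent with the comments would be:

```rust
// A function whose where clauses (`[T: Sized, T: Clone]`) must be proven
// by every caller; `T: Sized` is implicit on the generic parameter.
fn requires_clone<T: Clone>(_: T) {}

fn main() {
    // `u32: Clone` holds via the standard library impl, so this call
    // type checks.
    requires_clone(1u32);
}
```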
Or alternatively an example that would not compile:
```rust
// `foo2` would have a `ParamEnv` of:
// `[T: Sized]`
fn foo2<T>(a: T) {
// When typechecking `foo2` we attempt to prove `T: Clone`.
// As we are type checking `foo2` we use `foo2`'s environment
// when trying to prove `T: Clone`.
//
// Trying to prove `T: Clone` with a `ParamEnv` of `[T: Sized]` will
// fail as there is nothing in the environment telling the trait solver
// that `T` implements `Clone` and there exists no user written impl
// that could apply.
requires_clone(a);
}
```
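For contrast, adding the bound makes the call type check, because `T: Clone` then appears in `foo2`'s environment (again with a minimal `requires_clone`, which the original snippet leaves undefined):

```rust
fn requires_clone<T: Clone>(_: T) {}

// With `T: Clone` written on the generic parameter, `foo2`'s `ParamEnv`
// becomes `[T: Sized, T: Clone]`, so proving `T: Clone` at the call
// site trivially succeeds.
fn foo2<T: Clone>(a: T) {
    requires_clone(a);
}

fn main() {
    foo2(5u8);
}
```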
[predicates_of]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/collect/predicates_of/fn.predicates_of.html
[method_pred_entailment]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_method_predicate_entailment.html
[query]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/context/struct.TyCtxt.html#method.param_env
### Acquiring a `ParamEnv`
Using the wrong [`ParamEnv`][penv] when interacting with the type system can lead to ICEs, ill-formed programs compiling, or erroring when we shouldn't. See [#82159](https://github.com/rust-lang/rust/pull/82159) and [#82067](https://github.com/rust-lang/rust/pull/82067) as examples of PRs that modified the compiler to use the correct `ParamEnv` and in the process fixed ICEs.
In the large majority of cases, when a `ParamEnv` is required it either already exists somewhere in scope or further up the call stack and should be passed down. A non-exhaustive list of places where you might find an existing `ParamEnv`:
- During typeck `FnCtxt` has a [`param_env` field][fnctxt_param_env]
- When writing late lints the `LateContext` has a [`param_env` field][latectxt_param_env]
- During well formedness checking the `WfCheckingCtxt` has a [`param_env` field][wfckctxt_param_env]
- The `TypeChecker` used for MIR Typeck has a [`param_env` field][mirtypeck_param_env]
- In the next-gen trait solver all `Goal`s have a [`param_env` field][goal_param_env] specifying what environment to prove the goal in
- When editing an existing [`TypeRelation`][typerelation] if it implements [`PredicateEmittingRelation`][predicate_emitting_relation] then a [`param_env` method][typerelation_param_env] will be available.
If you aren't sure whether there's a `ParamEnv` in scope somewhere that can be used, it can be worth opening a thread in the [`#t-compiler/help`][compiler_help] Zulip stream where someone may be able to point out where a `ParamEnv` can be acquired from.
Manually constructing a `ParamEnv` is typically only needed at the start of some kind of top level analysis (e.g. hir typeck or borrow checking). In such cases there are three ways it can be done:
- Calling the [`tcx.param_env(def_id)` query][param_env_query] which returns the environment associated with a given definition.
- Creating an empty environment with [`ParamEnv::empty`][env_empty].
- Using [`ParamEnv::new`][param_env_new] to construct an env with an arbitrary set of where clauses. Then calling [`traits::normalize_param_env_or_error`][normalize_env_or_error] to handle normalizing and elaborating all the where clauses in the env.
Using the `param_env` query is by far the most common way to construct a `ParamEnv` as most of the time the compiler is performing an analysis as part of some specific definition.
Creating an empty environment with `ParamEnv::empty` is typically only done either in codegen (indirectly via [`TypingEnv::fully_monomorphized`][tenv_mono]), or as part of analyses that do not expect to ever encounter generic parameters (e.g. various parts of coherence/orphan check).
Creating an env from an arbitrary set of where clauses is usually unnecessary and should only be done if the environment you need does not correspond to an actual item in the source code (e.g. [`compare_method_predicate_entailment`][method_pred_entailment]).
[param_env_new]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.new
[normalize_env_or_error]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/fn.normalize_param_env_or_error.html
[fnctxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_typeck/fn_ctxt/struct.FnCtxt.html#structfield.param_env
[latectxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_lint/context/struct.LateContext.html#structfield.param_env
[wfckctxt_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/wfcheck/struct.WfCheckingCtxt.html#structfield.param_env
[goal_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/infer/canonical/ir/solve/struct.Goal.html#structfield.param_env
[typerelation_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/infer/trait.PredicateEmittingRelation.html#tymethod.param_env
[typerelation]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/relate/trait.TypeRelation.html
[mirtypeck_param_env]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_borrowck/type_check/struct.TypeChecker.html#structfield.param_env
[env_empty]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html#method.empty
[param_env_query]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/context/struct.TyCtxt.html#method.param_env
[method_pred_entailment]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir_analysis/check/compare_impl_item/fn.compare_method_predicate_entailment.html
[predicate_emitting_relation]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/relate/combine/trait.PredicateEmittingRelation.html
[tenv_mono]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TypingEnv.html#method.fully_monomorphized
[compiler_help]: https://rust-lang.zulipchat.com/#narrow/channel/182449-t-compiler.2Fhelp
### How are `ParamEnv`s constructed
Creating a [`ParamEnv`][pe] is more complicated than simply using the list of where clauses defined on an item as written by the user. We need to both elaborate supertraits into the env and fully normalize all aliases. This logic is handled by [`traits::normalize_param_env_or_error`][normalize_env_or_error] (even though it does not mention anything about elaboration).
#### Elaborating supertraits
When we have a function such as `fn foo<T: Copy>()`, we would like to be able to prove `T: Clone` inside of the function, as the `Copy` trait has a `Clone` supertrait. When constructing a `ParamEnv`, we look at all of the trait bounds in the env and explicitly add new where clauses for any supertraits found on those traits.
A concrete example would be the following function:
```rust
trait Trait: SuperTrait {}
trait SuperTrait: SuperSuperTrait {}
trait SuperSuperTrait {}
// `bar`'s unelaborated `ParamEnv` would be:
// `[T: Sized, T: Copy, T: Trait]`
fn bar<T: Copy + Trait>(a: T) {
requires_impl(a);
}
fn requires_impl<T: Clone + SuperSuperTrait>(a: T) {}
```
If we did not elaborate the env then the `requires_impl` call would fail to typecheck as we would not be able to prove `T: Clone` or `T: SuperSuperTrait`. In practice we elaborate the env which means that `bar`'s `ParamEnv` is actually:
`[T: Sized, T: Copy, T: Clone, T: Trait, T: SuperTrait, T: SuperSuperTrait]`
This allows us to prove `T: Clone` and `T: SuperSuperTrait` when type checking `bar`.
The `Clone` trait has a `Sized` supertrait; however, we do not end up with two `T: Sized` bounds in the env (one from the supertrait and one from the implicitly added `T: Sized` bound) as the elaboration process (implemented via [`util::elaborate`][elaborate]) deduplicates where clauses.
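The effect of elaboration is directly observable at the language level: a bound on a trait makes its supertraits' items usable without spelling them out. A small self-contained sketch (the trait and function names here are illustrative, not from the text above):

```rust
trait SuperTrait {
    fn greet(&self) -> &'static str;
}
trait Trait: SuperTrait {}

impl SuperTrait for u32 {
    fn greet(&self) -> &'static str {
        "hello"
    }
}
impl Trait for u32 {}

// Only `T: Trait` is written, but elaboration adds `T: SuperTrait` to
// the environment, so the supertrait method is callable here.
fn call_super<T: Trait>(x: T) -> &'static str {
    x.greet()
}

fn main() {
    assert_eq!(call_super(0u32), "hello");
}
```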
A side effect of this is that even if no actual elaboration of supertraits takes place, the existing where clauses in the env are _also_ deduplicated. See the following example:
```rust
trait Trait {}
// The unelaborated `ParamEnv` would be:
// `[T: Sized, T: Trait, T: Trait]`
// but after elaboration it would be:
// `[T: Sized, T: Trait]`
fn foo<T: Trait + Trait>() {}
```
The [next-gen trait solver][next-gen-solver] also requires this elaboration to take place.
[elaborate]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_infer/traits/util/fn.elaborate.html
[next-gen-solver]: ./solve/trait-solving.md
#### Normalizing all bounds
In the old trait solver the where clauses stored in `ParamEnv` are required to be fully normalized as otherwise the trait solver will not function correctly. A concrete example of needing to normalize the `ParamEnv` is the following:
```rust
trait Trait<T> {
type Assoc;
}
trait Other {
type Bar;
}
impl<T> Other for T {
type Bar = u32;
}
// `foo`'s unnormalized `ParamEnv` would be:
// `[T: Sized, U: Sized, U: Trait<T::Bar>]`
fn foo<T, U>(a: U)
where
U: Trait<<T as Other>::Bar>,
{
requires_impl(a);
}
fn requires_impl<U: Trait<u32>>(_: U) {}
```
As humans we can tell that `<T as Other>::Bar` is equal to `u32` so the trait bound on `U` is equivalent to `U: Trait<u32>`. In practice trying to prove `U: Trait<u32>` in the old solver in this environment would fail as it is unable to determine that `<T as Other>::Bar` is equal to `u32`.
To work around this we normalize `ParamEnv`s after constructing them, so that `foo`'s `ParamEnv` is actually: `[T: Sized, U: Sized, U: Trait<u32>]` which means the trait solver is now able to use the `U: Trait<u32>` in the `ParamEnv` to determine that the trait bound `U: Trait<u32>` holds.
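The same normalization is visible from user code: because the blanket impl forces `<T as Other>::Bar` to be `u32` for every `T`, the projection can be used interchangeably with `u32`. A small sketch restating the `Other` trait from the example above (the `get` helper is illustrative):

```rust
trait Other {
    type Bar;
}
impl<T> Other for T {
    type Bar = u32;
}

// The compiler normalizes `<T as Other>::Bar` to `u32` via the blanket
// impl, so returning a `u32` literal type checks for any `T`.
fn get<T>() -> <T as Other>::Bar {
    0u32
}

fn main() {
    let x: u32 = get::<String>();
    assert_eq!(x, 0);
}
```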
This workaround does not work in all cases as normalizing associated types requires a `ParamEnv` which introduces a bootstrapping problem. We need a normalized `ParamEnv` in order for normalization to give correct results, but we need to normalize to get that `ParamEnv`. Currently we normalize the `ParamEnv` once using the unnormalized param env and it tends to give okay results in practice even though there are some examples where this breaks ([example]).
In the next-gen trait solver the requirement for all where clauses in the `ParamEnv` to be fully normalized is not present and so we do not normalize when constructing `ParamEnv`s.
[example]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=e6933265ea3e84eaa47019465739992c
[pe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[normalize_env_or_error]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_trait_selection/traits/fn.normalize_param_env_or_error.html
## Typing Modes
Depending on what context we are performing type system operations in, different behaviour may be required. For example during coherence there are stronger requirements about when we can consider goals to not hold or when we can consider types to be unequal.
Tracking which "phase" of the compiler type system operations are being performed in is done by the [`TypingMode`][tmode] enum. The documentation on the `TypingMode` enum is quite good so instead of repeating it here verbatim we would recommend reading the API documentation directly.
[penv]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.ParamEnv.html
[tenv]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/struct.TypingEnv.html
[tmode]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/ty/type.TypingMode.html
@ -1,4 +1,4 @@
# Unsafety Checking # Unsafety checking
Certain expressions in Rust can violate memory safety and as such need to be Certain expressions in Rust can violate memory safety and as such need to be
inside an `unsafe` block or function. The compiler will also warn if an unsafe inside an `unsafe` block or function. The compiler will also warn if an unsafe
@ -221,7 +221,7 @@ There are a couple of things that may happen for some PRs during the review proc
some merge conflicts with other PRs that happen to get merged first. You some merge conflicts with other PRs that happen to get merged first. You
should fix these merge conflicts using the normal git procedures. should fix these merge conflicts using the normal git procedures.
[crater]: ./tests/intro.html#crater [crater]: ./tests/crater.html
If you are not doing a new feature or something like that (e.g. if you are If you are not doing a new feature or something like that (e.g. if you are
fixing a bug), then that's it! Thanks for your contribution :) fixing a bug), then that's it! Thanks for your contribution :)
@ -7,5 +7,9 @@ allow-unauthenticated = [
"blocked", "blocked",
] ]
[no-mentions]
[canonicalize-issue-links]
# Automatically close and reopen PRs made by bots to run CI on them # Automatically close and reopen PRs made by bots to run CI on them
[bot-pull-requests] [bot-pull-requests]