Rollup merge of #142743 - tshepang:rdg-push, r=jieyouxu

rustc-dev-guide subtree update

r? ``@ghost``
Commit 4e8735e8d3 by Jakub Beránek, 2025-06-20 20:03:22 +02:00, committed by GitHub
23 changed files with 272 additions and 116 deletions


@ -1 +1 @@
14346303d760027e53214e705109a62c0f00b214
d1d8e386c5e84c4ba857f56c3291f73c27e2d62a


@ -101,6 +101,8 @@
- [The `rustdoc` test suite](./rustdoc-internals/rustdoc-test-suite.md)
- [The `rustdoc-gui` test suite](./rustdoc-internals/rustdoc-gui-test-suite.md)
- [The `rustdoc-json` test suite](./rustdoc-internals/rustdoc-json-test-suite.md)
- [GPU offload internals](./offload/internals.md)
- [Installation](./offload/installation.md)
- [Autodiff internals](./autodiff/internals.md)
- [Installation](./autodiff/installation.md)
- [How to debug](./autodiff/debugging.md)
@ -121,8 +123,9 @@
- [Feature gate checking](./feature-gate-ck.md)
- [Lang Items](./lang-items.md)
- [The HIR (High-level IR)](./hir.md)
- [Lowering AST to HIR](./ast-lowering.md)
- [Debugging](./hir-debugging.md)
- [Lowering AST to HIR](./hir/lowering.md)
- [Ambig/Unambig Types and Consts](./hir/ambig-unambig-ty-and-consts.md)
- [Debugging](./hir/debugging.md)
- [The THIR (Typed High-level IR)](./thir.md)
- [The MIR (Mid-level IR)](./mir/index.md)
- [MIR construction](./mir/construction.md)
@ -181,7 +184,7 @@
- [Significant changes and quirks](./solve/significant-changes.md)
- [`Unsize` and `CoerceUnsized` traits](./traits/unsize.md)
- [Type checking](./type-checking.md)
- [Method Lookup](./method-lookup.md)
- [Method lookup](./method-lookup.md)
- [Variance](./variance.md)
- [Coherence checking](./coherence.md)
- [Opaque types](./opaque-types-type-alias-impl-trait.md)
@ -189,7 +192,7 @@
- [Return Position Impl Trait In Trait](./return-position-impl-trait-in-trait.md)
- [Region inference restrictions][opaque-infer]
- [Const condition checking](./effects.md)
- [Pattern and Exhaustiveness Checking](./pat-exhaustive-checking.md)
- [Pattern and exhaustiveness checking](./pat-exhaustive-checking.md)
- [Unsafety checking](./unsafety-checking.md)
- [MIR dataflow](./mir/dataflow.md)
- [Drop elaboration](./mir/drop-elaboration.md)
@ -209,7 +212,7 @@
- [Closure capture inference](./closure.md)
- [Async closures/"coroutine-closures"](coroutine-closures.md)
# MIR to Binaries
# MIR to binaries
- [Prologue](./part-5-intro.md)
- [MIR optimizations](./mir/optimizations.md)
@ -218,15 +221,15 @@
- [Interpreter](./const-eval/interpret.md)
- [Monomorphization](./backend/monomorph.md)
- [Lowering MIR](./backend/lowering-mir.md)
- [Code Generation](./backend/codegen.md)
- [Code generation](./backend/codegen.md)
- [Updating LLVM](./backend/updating-llvm.md)
- [Debugging LLVM](./backend/debugging.md)
- [Backend Agnostic Codegen](./backend/backend-agnostic.md)
- [Implicit Caller Location](./backend/implicit-caller-location.md)
- [Libraries and Metadata](./backend/libs-and-metadata.md)
- [Profile-guided Optimization](./profile-guided-optimization.md)
- [LLVM Source-Based Code Coverage](./llvm-coverage-instrumentation.md)
- [Sanitizers Support](./sanitizers.md)
- [Implicit caller location](./backend/implicit-caller-location.md)
- [Libraries and metadata](./backend/libs-and-metadata.md)
- [Profile-guided optimization](./profile-guided-optimization.md)
- [LLVM source-based code coverage](./llvm-coverage-instrumentation.md)
- [Sanitizers support](./sanitizers.md)
- [Debugging support in the Rust compiler](./debugging-support-in-rustc.md)
---


@ -1,4 +1,4 @@
# Implicit Caller Location
# Implicit caller location
<!-- toc -->
@ -8,7 +8,7 @@ adds the [`#[track_caller]`][attr-reference] attribute for functions, the
[`caller_location`][intrinsic] intrinsic, and the stabilization-friendly
[`core::panic::Location::caller`][wrapper] wrapper.
## Motivating Example
## Motivating example
Take this example program:
@ -39,7 +39,7 @@ These error messages are achieved through a combination of changes to `panic!` i
of `core::panic::Location::caller` and a number of `#[track_caller]` annotations in the standard
library which propagate caller information.
## Reading Caller Location
## Reading caller location
Previously, `panic!` made use of the `file!()`, `line!()`, and `column!()` macros to construct a
[`Location`] pointing to where the panic occurred. These macros couldn't be given an overridden
@ -51,7 +51,7 @@ was expanded. This function is itself annotated with `#[track_caller]` and wraps
[`caller_location`][intrinsic] compiler intrinsic implemented by rustc. This intrinsic is easiest
explained in terms of how it works in a `const` context.
## Caller Location in `const`
## Caller location in `const`
There are two main phases to returning the caller location in a const context: walking up the stack
to find the right location and allocating a const value to return.
@ -138,7 +138,7 @@ fn main() {
}
```
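For orientation, here is a minimal runnable sketch of the user-facing behavior this machinery implements, using the stable `#[track_caller]` attribute and the `Location::caller` wrapper mentioned above (the `whoops` function is purely illustrative):

```rust
use std::panic::Location;

/// Because of `#[track_caller]`, `Location::caller()` reports the call site of
/// `whoops`, not the location of this function body.
#[track_caller]
fn whoops() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    let loc = whoops();
    println!("called from {}:{}:{}", loc.file(), loc.line(), loc.column());
}
```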
### Dynamic Dispatch
### Dynamic dispatch
In codegen contexts we have to modify the callee ABI to pass this information down the stack, but
the attribute expressly does *not* modify the type of the function. The ABI change must be
@ -156,7 +156,7 @@ probably the best we can do without modifying fully-stabilized type signatures.
> whether we'll be called in a const context (safe to ignore shim) or in a codegen context (unsafe
> to ignore shim). Even if we did know, the results from const and codegen contexts must agree.
## The Attribute
## The attribute
The `#[track_caller]` attribute is checked alongside other codegen attributes to ensure the
function:


@ -1,4 +1,4 @@
# Libraries and Metadata
# Libraries and metadata
When the compiler sees a reference to an external crate, it needs to load some
information about that crate. This chapter gives an overview of that process,


@ -174,8 +174,8 @@ compiler, you can use it instead of the JSON file for both arguments.
## Promoting a target from tier 2 (target) to tier 2 (host)
There are two levels of tier 2 targets:
a) Targets that are only cross-compiled (`rustup target add`)
b) Targets that [have a native toolchain][tier2-native] (`rustup toolchain install`)
- Targets that are only cross-compiled (`rustup target add`)
- Targets that [have a native toolchain][tier2-native] (`rustup toolchain install`)
[tier2-native]: https://doc.rust-lang.org/nightly/rustc/target-tier-policy.html#tier-2-with-host-tools


@ -364,7 +364,7 @@ To find documentation-related issues, use the [A-docs label].
You can find documentation style guidelines in [RFC 1574].
To build the standard library documentation, use `x doc --stage 0 library --open`.
To build the standard library documentation, use `x doc --stage 1 library --open`.
To build the documentation for a book (e.g. the unstable book), use `x doc src/doc/unstable-book.`
Results should appear in `build/host/doc`, as well as automatically open in your default browser.
See [Building Documentation](./building/compiler-documenting.md#building-documentation) for more


@ -553,7 +553,7 @@ compiler](#linting-early-in-the-compiler).
[AST nodes]: the-parser.md
[AST lowering]: ast-lowering.md
[AST lowering]: ./hir/lowering.md
[HIR nodes]: hir.md
[MIR nodes]: mir/index.md
[macro expansion]: macro-expansion.md


@ -5,7 +5,7 @@
The HIR "High-Level Intermediate Representation" is the primary IR used
in most of rustc. It is a compiler-friendly representation of the abstract
syntax tree (AST) that is generated after parsing, macro expansion, and name
resolution (see [Lowering](./ast-lowering.html) for how the HIR is created).
resolution (see [Lowering](./hir/lowering.md) for how the HIR is created).
Many parts of HIR resemble Rust surface syntax quite closely, with
the exception that some of Rust's expression forms have been desugared away.
For example, `for` loops are converted into a `loop` and do not appear in
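A rough sketch of that desugaring (illustrative only; the real lowering produces HIR nodes and uses compiler-internal paths rather than surface syntax):

```rust
// Surface syntax:
fn surface() {
    for x in 0..3 {
        println!("{x}");
    }
}

// Roughly what the loop becomes after desugaring (illustrative):
fn desugared() {
    let mut iter = IntoIterator::into_iter(0..3);
    loop {
        match Iterator::next(&mut iter) {
            Some(x) => {
                println!("{x}");
            }
            None => break,
        }
    }
}

fn main() {
    surface();
    desugared();
}
```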


@ -0,0 +1,63 @@
# Ambig/Unambig Types and Consts
Type and const arguments in the HIR can be in two kinds of positions: ambiguous (ambig) or unambiguous (unambig). Ambig positions are where
it would be valid to parse either a type or a const; unambig positions are where only one kind would be valid to
parse.
```rust
fn func<T, const N: usize>(arg: T) {
// ^ Unambig type position
let a: _ = arg;
// ^ Unambig type position
func::<T, N>(arg);
// ^ ^
// ^^^^ Ambig position
let _: [u8; 10];
// ^^ ^^ Unambig const position
// ^^ Unambig type position
}
```
Most types/consts in ambig positions can be disambiguated as either a type or a const during parsing. Single-segment paths are always represented as types in the AST but may get resolved to a const parameter during name resolution and then lowered to a const argument during AST lowering. The only generic arguments which remain ambiguous after lowering are inferred generic arguments (`_`) in path segments. For example, in `Foo<_>` it is not clear whether the `_` argument is an inferred type argument or an inferred const argument.
In unambig positions, inferred arguments are represented with [`hir::TyKind::Infer`][ty_infer] or [`hir::ConstArgKind::Infer`][const_infer] depending on whether it is a type or const position respectively.
In ambig positions, inferred arguments are represented with `hir::GenericArg::Infer`.
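A small runnable sketch of both situations (using a hypothetical `identity` function; the comments describe the HIR representations explained above):

```rust
// Hypothetical helper, only here to give the `_` arguments somewhere to appear.
fn identity<T>(x: T) -> T { x }

fn main() {
    // Unambig type position: this `_` can only be an inferred *type*,
    // so it is represented as `hir::TyKind::Infer`.
    let a: _ = identity(1u8);

    // Ambig position: inside the path segment `identity::<_>`, the `_` could
    // syntactically be an inferred type or an inferred const argument, so it
    // is kept as `hir::GenericArg::Infer` after lowering.
    let b = identity::<_>(2u8);

    assert_eq!(a + b, 3);
}
```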
Looking naively at the structure of the HIR, there are potentially five places where you might expect to find an inferred type/const:
1. In unambig type position as a `hir::TyKind::Infer`
2. In unambig const arg position as a `hir::ConstArgKind::Infer`
3. In an ambig position as a [`GenericArg::Type(TyKind::Infer)`][generic_arg_ty]
4. In an ambig position as a [`GenericArg::Const(ConstArgKind::Infer)`][generic_arg_const]
5. In an ambig position as a [`GenericArg::Infer`][generic_arg_infer]
Note that places 3 and 4 would never actually be possible to encounter as we always lower to `GenericArg::Infer` in generic arg position.
This has a few failure modes:
- People may write visitors which check for `GenericArg::Infer` but forget to check for `hir::TyKind/ConstArgKind::Infer`, only handling infers in ambig positions by accident.
- People may write visitors which check for `hir::TyKind/ConstArgKind::Infer` but forget to check for `GenericArg::Infer`, only handling infers in unambig positions by accident.
- People may write visitors which check for `GenericArg::Type/Const(TyKind/ConstArgKind::Infer)` and `GenericArg::Infer`, not realising that we never represent inferred types/consts in ambig positions as a `GenericArg::Type/Const`.
- People may write visitors which check for *only* `TyKind::Infer` and not `ConstArgKind::Infer` forgetting that there are also inferred const arguments (and vice versa).
To make writing HIR visitors that care about inferred types/consts less error-prone, we have a relatively complex system:
1. We have different types in the compiler for when a type or const is in an unambig or ambig position, `hir::Ty<AmbigArg>` and `hir::Ty<()>`. [`AmbigArg`][ambig_arg] is an uninhabited type which we use in the `Infer` variant of `TyKind` and `ConstArgKind` to selectively "disable" it if we are in an ambig position.
2. The [`visit_ty`][visit_ty] and [`visit_const_arg`][visit_const_arg] methods on HIR visitors only accept the ambig position versions of types/consts. Unambig types/consts are implicitly converted to ambig types/consts during the visiting process, with the `Infer` variant handled by a dedicated [`visit_infer`][visit_infer] method.
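To illustrate the trick in point 1, here is a self-contained sketch (simplified, illustrative definitions rather than the real rustc types) of how an uninhabited marker type selectively disables the `Infer` variant in ambig positions:

```rust
// Illustrative stand-ins for the real rustc types.
enum AmbigArg {} // uninhabited: no value of this type can ever be constructed

// The marker defaults to `()` (unambig positions); ambig positions use `AmbigArg`.
enum TyKind<Unambig = ()> {
    Infer(Unambig),
    Tup(Vec<TyKind<Unambig>>),
    // ... other variants
}

fn main() {
    // In an unambig position the `Infer` variant is freely constructible ...
    let unambig: TyKind = TyKind::Infer(());

    // ... but `TyKind::<AmbigArg>::Infer` would require a value of the
    // uninhabited `AmbigArg`, so inferred arguments simply cannot be
    // represented in ambig positions with this type.
    let ambig: TyKind<AmbigArg> = TyKind::Tup(Vec::new());

    match ambig {
        // Statically unreachable: matching on the uninhabited payload proves it.
        TyKind::Infer(never) => match never {},
        TyKind::Tup(_) => println!("ambig positions never carry `Infer`"),
    }
    drop(unambig);
}
```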
This has a number of benefits:
- It's clear that `GenericArg::Type/Const` cannot represent inferred type/const arguments
- Implementors of `visit_ty` and `visit_const_arg` will never encounter inferred types/consts making it impossible to write a visitor that seems to work right but handles edge cases wrong
- The `visit_infer` method handles *all* cases of inferred type/consts in the HIR making it easy for visitors to handle inferred type/consts in one dedicated place and not forget cases
[ty_infer]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.TyKind.html#variant.Infer
[const_infer]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.ConstArgKind.html#variant.Infer
[generic_arg_ty]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.GenericArg.html#variant.Type
[generic_arg_const]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.GenericArg.html#variant.Const
[generic_arg_infer]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.GenericArg.html#variant.Infer
[ambig_arg]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.AmbigArg.html
[visit_ty]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/intravisit/trait.Visitor.html#method.visit_ty
[visit_const_arg]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/intravisit/trait.Visitor.html#method.visit_const_arg
[visit_infer]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/intravisit/trait.Visitor.html#method.visit_infer


@ -1,6 +1,6 @@
# AST lowering
The AST lowering step converts AST to [HIR](hir.html).
The AST lowering step converts AST to [HIR](../hir.md).
This means many structures are removed if they are irrelevant
for type analysis or similar syntax agnostic analyses. Examples
of such structures include but are not limited to


@ -1,4 +1,4 @@
# LLVM Source-Based Code Coverage
# LLVM source-based code coverage
<!-- toc -->


@ -0,0 +1,71 @@
# Installation
In the future, `std::offload` should become available in nightly builds for users. For now, everyone still needs to build rustc from source.
## Build instructions
First you need to clone and configure the Rust repository:
```bash
git clone --depth=1 git@github.com:rust-lang/rust.git
cd rust
./configure --enable-llvm-link-shared --release-channel=nightly --enable-llvm-assertions --enable-offload --enable-enzyme --enable-clang --enable-lld --enable-option-checking --enable-ninja --disable-docs
```
Afterwards you can build rustc using:
```bash
./x.py build --stage 1 library
```
Afterwards, `rustup toolchain link` will allow you to use the toolchain through cargo:
```
rustup toolchain link offload build/host/stage1
rustup toolchain install nightly # enables -Z unstable-options
```
## Build instructions for LLVM itself
```bash
git clone --depth=1 git@github.com:llvm/llvm-project.git
cd llvm-project
mkdir build
cd build
cmake -G Ninja ../llvm -DLLVM_TARGETS_TO_BUILD="host,AMDGPU,NVPTX" -DLLVM_ENABLE_ASSERTIONS=ON -DLLVM_ENABLE_PROJECTS="clang;lld" -DLLVM_ENABLE_RUNTIMES="offload,openmp" -DLLVM_ENABLE_PLUGINS=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=.
ninja
ninja install
```
This gives you a working LLVM build.
## Testing
Run:
```
./x.py test --stage 1 tests/codegen/gpu_offload
```
## Usage
It is important to use a clang compiler built against the same LLVM as rustc. Just calling clang without the full path will likely use your system clang, which will probably be incompatible.
```
/absolute/path/to/rust/build/x86_64-unknown-linux-gnu/stage1/bin/rustc --edition=2024 --crate-type cdylib src/main.rs --emit=llvm-ir -O -C lto=fat -Cpanic=abort -Zoffload=Enable
/absolute/path/to/rust/build/x86_64-unknown-linux-gnu/llvm/bin/clang++ -fopenmp --offload-arch=native -g -O3 main.ll -o main -save-temps
LIBOMPTARGET_INFO=-1 ./main
```
The first step will generate a `main.ll` file, which has enough instructions to cause the offload runtime to move data to and from a GPU.
The second step will use clang as the compilation driver to compile our IR file down to a working binary. Only a very small Rust subset will work out of the box here, unless
you use features like build-std, which are not covered by this guide. Look at the codegen test to get a feeling for how to write a working example.
In the last step you can run your binary; if all went well, you will see a data transfer being reported:
```
omptarget device 0 info: Entering OpenMP data region with being_mapper at unknown:0:0 with 1 arguments:
omptarget device 0 info: tofrom(unknown)[1024]
omptarget device 0 info: Creating new map entry with HstPtrBase=0x00007fffffff9540, HstPtrBegin=0x00007fffffff9540, TgtAllocBegin=0x0000155547200000, TgtPtrBegin=0x0000155547200000, Size=1024, DynRefCount=1, HoldRefCount=0, Name=unknown
omptarget device 0 info: Copying data from host to device, HstPtr=0x00007fffffff9540, TgtPtr=0x0000155547200000, Size=1024, Name=unknown
omptarget device 0 info: OpenMP Host-Device pointer mappings after block at unknown:0:0:
omptarget device 0 info: Host Ptr Target Ptr Size (B) DynRefCount HoldRefCount Declaration
omptarget device 0 info: 0x00007fffffff9540 0x0000155547200000 1024 1 0 unknown at unknown:0:0
// some other output
omptarget device 0 info: Exiting OpenMP data region with end_mapper at unknown:0:0 with 1 arguments:
omptarget device 0 info: tofrom(unknown)[1024]
omptarget device 0 info: Mapping exists with HstPtrBegin=0x00007fffffff9540, TgtPtrBegin=0x0000155547200000, Size=1024, DynRefCount=0 (decremented, delayed deletion), HoldRefCount=0
omptarget device 0 info: Copying data from device to host, TgtPtr=0x0000155547200000, HstPtr=0x00007fffffff9540, Size=1024, Name=unknown
omptarget device 0 info: Removing map entry with HstPtrBegin=0x00007fffffff9540, TgtPtrBegin=0x0000155547200000, Size=1024, Name=unknown
```

src/offload/internals.md (new file, 9 lines)

@ -0,0 +1,9 @@
# std::offload
This module is under active development. Once upstreamed, it should allow Rust developers to run Rust code on GPUs.
We aim to develop a `rusty` GPU programming interface, which is safe, convenient and sufficiently fast by default.
This includes automatic data movement to and from the GPU, in an efficient way. We will (later)
also offer more advanced, possibly unsafe, interfaces which allow a higher degree of control.
The implementation is based on LLVM's "offload" project, which is already used by OpenMP to run Fortran or C++ code on GPUs.
While the project is under development, users will need to call other compilers like clang to finish the compilation process.


@ -410,7 +410,7 @@ For more details on bootstrapping, see
- Guide: [The HIR](hir.md)
- Guide: [Identifiers in the HIR](hir.md#identifiers-in-the-hir)
- Guide: [The `HIR` Map](hir.md#the-hir-map)
- Guide: [Lowering `AST` to `HIR`](ast-lowering.md)
- Guide: [Lowering `AST` to `HIR`](./hir/lowering.md)
- How to view `HIR` representation for your code `cargo rustc -- -Z unpretty=hir-tree`
- Rustc `HIR` definition: [`rustc_hir`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/index.html)
- Main entry point: **TODO**


@ -1,4 +1,4 @@
# From MIR to Binaries
# From MIR to binaries
All of the preceding chapters of this guide have one thing in common:
we never generated any executable machine code at all!


@ -1,4 +1,4 @@
# Pattern and Exhaustiveness Checking
# Pattern and exhaustiveness checking
In Rust, pattern matching and bindings have a few very helpful properties. The
compiler will check that bindings are irrefutable when made and that match arms


@ -1,4 +1,4 @@
# Profile Guided Optimization
# Profile-guided optimization
<!-- toc -->
@ -6,7 +6,7 @@
This chapter describes what PGO is and how the support for it is
implemented in `rustc`.
## What Is Profiled-Guided Optimization?
## What is profiled-guided optimization?
The basic concept of PGO is to collect data about the typical execution of
a program (e.g. which branches it is likely to take) and then use this data
@ -52,7 +52,7 @@ instrumentation, via the experimental option
[`-C instrument-coverage`](./llvm-coverage-instrumentation.md), but using these
coverage results for PGO has not been attempted at this time.
### Overall Workflow
### Overall workflow
Generating a PGO-optimized program involves the following four steps:
@ -62,12 +62,12 @@ Generating a PGO-optimized program involves the following four steps:
4. Compile the program again, this time making use of the profiling data
(e.g. `rustc -C profile-use=merged.profdata main.rs`)
### Compile-Time Aspects
### Compile-time aspects
Depending on which step in the above workflow we are in, two different things
can happen at compile time:
#### Create Binaries with Instrumentation
#### Create binaries with instrumentation
As mentioned above, the profiling instrumentation is added by LLVM.
`rustc` instructs LLVM to do so [by setting the appropriate][pgo-gen-passmanager]
@ -88,7 +88,7 @@ runtime are not removed [by marking the with the right export level][pgo-gen-sym
[pgo-gen-symbols]:https://github.com/rust-lang/rust/blob/1.34.1/src/librustc_codegen_ssa/back/symbol_export.rs#L212-L225
#### Compile Binaries Where Optimizations Make Use Of Profiling Data
#### Compile binaries where optimizations make use of profiling data
In the final step of the workflow described above, the program is compiled
again, with the compiler using the gathered profiling data in order to drive
@ -106,7 +106,7 @@ LLVM does the rest (e.g. setting branch weights, marking functions with
`cold` or `inlinehint`, etc).
### Runtime Aspects
### Runtime aspects
Instrumentation-based approaches always also have a runtime component, i.e.
once we have an instrumented program, that program needs to be run in order
@ -134,7 +134,7 @@ instrumentation artifacts show up in LLVM IR.
[rmake-tests]: https://github.com/rust-lang/rust/tree/master/tests/run-make
[codegen-test]: https://github.com/rust-lang/rust/blob/master/tests/codegen/pgo-instrumentation.rs
## Additional Information
## Additional information
Clang's documentation contains a good overview on [PGO in LLVM][llvm-pgo].


@ -7,8 +7,8 @@ This is a guide for how to profile rustc with [perf](https://perf.wiki.kernel.or
- Get a clean checkout of rust-lang/master, or whatever it is you want
to profile.
- Set the following settings in your `bootstrap.toml`:
- `debuginfo-level = 1` - enables line debuginfo
- `jemalloc = false` - lets you do memory use profiling with valgrind
- `rust.debuginfo-level = 1` - enables line debuginfo
- `rust.jemalloc = false` - lets you do memory use profiling with valgrind
- leave everything else the defaults
- Run `./x build` to get a full build
- Make a rustup toolchain pointing to that result


@ -1,4 +1,4 @@
# Incremental Compilation in detail
# Incremental compilation in detail
<!-- toc -->
@ -66,7 +66,7 @@ because it reads the up-to-date version of `Hir(bar)`. Also, we re-run
`type_check_item(bar)` because result of `type_of(bar)` might have changed.
## The Problem With The Basic Algorithm: False Positives
## The problem with the basic algorithm: false positives
If you read the previous paragraph carefully you'll notice that it says that
`type_of(bar)` *might* have changed because one of its inputs has changed.
@ -93,7 +93,7 @@ of examples like this and small changes to the input often potentially affect
very large parts of the output binaries. As a consequence, we had to make the
change detection system smarter and more accurate.
## Improving Accuracy: The red-green Algorithm
## Improving accuracy: the red-green algorithm
The "false positives" problem can be solved by interleaving change detection
and query re-evaluation. Instead of walking the graph all the way to the
@ -191,7 +191,7 @@ then itself involve recursively invoking more queries, which can mean we come ba
to the `try_mark_green()` algorithm for the dependencies recursively.
## The Real World: How Persistence Makes Everything Complicated
## The real world: how persistence makes everything complicated
The sections above described the underlying algorithm for incremental
compilation but because the compiler process exits after being finished and
@ -258,7 +258,7 @@ the `LocalId`s within it are still the same.
### Checking Query Results For Changes: HashStable And Fingerprints
### Checking query results for changes: `HashStable` and `Fingerprint`s
In order to do red-green-marking we often need to check if the result of a
query has changed compared to the result it had during the previous
@ -306,7 +306,7 @@ This approach works rather well but it's not without flaws:
their stable equivalents while doing the hashing.
### A Tale Of Two DepGraphs: The Old And The New
### A tale of two `DepGraph`s: the old and the new
The initial description of dependency tracking glosses over a few details
that quickly become a head scratcher when actually trying to implement things.
@ -344,7 +344,7 @@ new graph is serialized out to disk, alongside the query result cache, and can
act as the previous dep-graph in a subsequent compilation session.
### Didn't You Forget Something?: Cache Promotion
### Didn't you forget something?: cache promotion
The system described so far has a somewhat subtle property: If all inputs of a
dep-node are green then the dep-node itself can be marked as green without
@ -374,7 +374,7 @@ the result cache doesn't unnecessarily shrink again.
# Incremental Compilation and the Compiler Backend
# Incremental compilation and the compiler backend
The compiler backend, the part involving LLVM, is using the query system but
it is not implemented in terms of queries itself. As a consequence it does not
@ -406,7 +406,7 @@ would save.
## Query Modifiers
## Query modifiers
The query system allows for applying [modifiers][mod] to queries. These
modifiers affect certain aspects of how the system treats the query with
@ -472,7 +472,7 @@ respect to incremental compilation:
[mod]: ../query.html#adding-a-new-kind-of-query
## The Projection Query Pattern
## The projection query pattern
It's interesting to note that `eval_always` and `no_hash` can be used together
in the so-called "projection query" pattern. It is often the case that there is
@ -516,7 +516,7 @@ because we have the projections to take care of keeping things green as much
as possible.
# Shortcomings of the Current System
# Shortcomings of the current system
There are many things that still can be improved.


@ -2,7 +2,7 @@
<!-- toc -->
As described in [the high-level overview of the compiler][hl], the Rust compiler
As described in [Overview of the compiler], the Rust compiler
is still (as of <!-- date-check --> July 2021) transitioning from a
traditional "pass-based" setup to a "demand-driven" system. The compiler query
system is the key to rustc's demand-driven organization.
@ -13,7 +13,7 @@ there is a query called `type_of` that, given the [`DefId`] of
some item, will compute the type of that item and return it to you.
[`DefId`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_span/def_id/struct.DefId.html
[hl]: ./compiler-src.md
[Overview of the compiler]: overview.md#queries
Query execution is *memoized*. The first time you invoke a
query, it will go do the computation, but the next time, the result is
@ -37,12 +37,15 @@ will in turn demand information about that crate, starting from the
actual parsing.
Although this vision is not fully realized, large sections of the
compiler (for example, generating [MIR](./mir/index.md)) currently work exactly like this.
compiler (for example, generating [MIR]) currently work exactly like this.
[^incr-comp-detail]: The ["Incremental Compilation in Detail](queries/incremental-compilation-in-detail.md) chapter gives a more
[^incr-comp-detail]: The [Incremental compilation in detail] chapter gives a more
in-depth description of what queries are and how they work.
If you intend to write a query of your own, this is a good read.
[Incremental compilation in detail]: queries/incremental-compilation-in-detail.md
[MIR]: mir/index.md
## Invoking queries
Invoking a query is simple. The [`TyCtxt`] ("type context") struct offers a method
@ -67,9 +70,15 @@ are cheaply cloneable; insert an `Rc` if necessary).
### Providers
If, however, the query is *not* in the cache, then the compiler will
try to find a suitable **provider**. A provider is a function that has
been defined and linked into the compiler somewhere that contains the
code to compute the result of the query.
call the corresponding **provider** function. A provider is a function
implemented in a specific module and **manually registered** into the
[`Providers`][providers_struct] struct during compiler initialization.
The query macro system generates the [`Providers`][providers_struct] struct, which acts as a
function table for all query implementations: each field is a function pointer to the actual
provider. Note that `Providers` is **not** a Rust trait, but a plain struct with function
pointer fields.
**Providers are defined per-crate.** The compiler maintains,
internally, a table of providers for every crate, at least
@ -97,6 +106,17 @@ fn provider<'tcx>(
Providers take two arguments: the `tcx` and the query key.
They return the result of the query.
N.B. Most of the `rustc_*` crates only provide **local
providers**. Almost all **extern providers** wind up going through the
[`rustc_metadata` crate][rustc_metadata], which loads the information
from the crate metadata. But in some cases there are crates that
provide queries for *both* local and external crates, in which case
they define both a `provide` and a `provide_extern` function, through
[`wasm_import_module_map`][wasm_import_module_map], that `rustc_driver` can invoke.
[rustc_metadata]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_metadata/index.html
[wasm_import_module_map]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/back/symbol_export/fn.wasm_import_module_map.html
### How providers are set up
When the tcx is created, it is given the providers by its creator using
@ -108,19 +128,16 @@ the macros here, but it is basically a big list of function pointers:
```rust,ignore
struct Providers {
type_of: for<'tcx> fn(TyCtxt<'tcx>, DefId) -> Ty<'tcx>,
...
// ... one field for each query
}
```
At present, we have one copy of the struct for local crates, and one
for external crates, though the plan is that we may eventually have
one per crate.
#### How are providers registered?
These `Providers` structs are ultimately created and populated by
`rustc_driver`, but it does this by distributing the work
throughout the other `rustc_*` crates. This is done by invoking
various [`provide`][provide_fn] functions. These functions tend to look
something like this:
The `Providers` struct is filled in during compiler initialization, mainly by the `rustc_driver` crate.
But the actual provider functions are implemented in various `rustc_*` crates (like `rustc_middle`, `rustc_hir_analysis`, etc).
To register providers, each crate exposes a [`provide`][provide_fn] function that looks like this:
[provide_fn]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/hir/fn.provide.html
@ -128,41 +145,34 @@ something like this:
pub fn provide(providers: &mut Providers) {
*providers = Providers {
type_of,
// ... add more providers here
..*providers
};
}
```
That is, they take an `&mut Providers` and mutate it in place. Usually
we use the formulation above just because it looks nice, but you could
as well do `providers.type_of = type_of`, which would be equivalent.
(Here, `type_of` would be a top-level function, defined as we saw
before.) So, if we want to add a provider for some other query,
let's call it `fubar`, into the crate above, we might modify the `provide()`
function like so:
- This function takes a mutable reference to the `Providers` struct and sets the fields to point to the correct provider functions.
- You can also assign fields individually, e.g. `providers.type_of = type_of;`.
#### Adding a new provider
Suppose you want to add a new query called `fubar`. You would:
1. Implement the provider function:
```rust,ignore
fn fubar<'tcx>(tcx: TyCtxt<'tcx>, key: DefId) -> Fubar<'tcx> { ... }
```
2. Register it in the `provide` function:
```rust,ignore
pub fn provide(providers: &mut Providers) {
*providers = Providers {
type_of,
fubar,
..*providers
};
}
fn fubar<'tcx>(tcx: TyCtxt<'tcx>, key: DefId) -> Fubar<'tcx> { ... }
```
N.B. Most of the `rustc_*` crates only provide **local
providers**. Almost all **extern providers** wind up going through the
[`rustc_metadata` crate][rustc_metadata], which loads the information
from the crate metadata. But in some cases there are crates that
provide queries for *both* local and external crates, in which case
they define both a `provide` and a `provide_extern` function, through
[`wasm_import_module_map`][wasm_import_module_map], that `rustc_driver` can invoke.
[rustc_metadata]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_metadata/index.html
[wasm_import_module_map]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/back/symbol_export/fn.wasm_import_module_map.html
---
## Adding a new query


@ -1,4 +1,4 @@
# Sanitizers Support
# Sanitizers support
The rustc compiler contains support for the following sanitizers:


@ -62,8 +62,8 @@ Here is a summary:
| Describe the *syntax* of a type: what the user wrote (with some desugaring). | Describe the *semantics* of a type: the meaning of what the user wrote. |
| Each `rustc_hir::Ty` has its own spans corresponding to the appropriate place in the program. | Doesn't correspond to a single place in the user's program. |
| `rustc_hir::Ty` has generics and lifetimes; however, some of those lifetimes are special markers like [`LifetimeKind::Implicit`][implicit]. | `ty::Ty` has the full type, including generics and lifetimes, even if the user left them out |
| `fn foo(x: u32) u32 { }` - Two `rustc_hir::Ty` representing each usage of `u32`, each has its own `Span`s, and `rustc_hir::Ty` doesn't tell us that both are the same type | `fn foo(x: u32) u32 { }` - One `ty::Ty` for all instances of `u32` throughout the program, and `ty::Ty` tells us that both usages of `u32` mean the same type. |
| `fn foo(x: &u32) -> &u32)` - Two `rustc_hir::Ty` again. Lifetimes for the references show up in the `rustc_hir::Ty`s using a special marker, [`LifetimeKind::Implicit`][implicit]. | `fn foo(x: &u32) -> &u32)`- A single `ty::Ty`. The `ty::Ty` has the hidden lifetime param. |
| `fn foo(x: u32) -> u32 { }` - Two `rustc_hir::Ty` representing each usage of `u32`, each has its own `Span`s, and `rustc_hir::Ty` doesn't tell us that both are the same type | `fn foo(x: u32) -> u32 { }` - One `ty::Ty` for all instances of `u32` throughout the program, and `ty::Ty` tells us that both usages of `u32` mean the same type. |
| `fn foo(x: &u32) -> &u32 { }` - Two `rustc_hir::Ty` again. Lifetimes for the references show up in the `rustc_hir::Ty`s using a special marker, [`LifetimeKind::Implicit`][implicit]. | `fn foo(x: &u32) -> &u32 { }`- A single `ty::Ty`. The `ty::Ty` has the hidden lifetime param. |
[implicit]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_hir/hir/enum.LifetimeKind.html#variant.Implicit