handle global trait bounds defining assoc types
This also fixes the compare-mode for:
- tests/ui/coherence/coherent-due-to-fulfill.rs
- tests/ui/codegen/mono-impossible-2.rs
- tests/ui/trivial-bounds/trivial-bounds-inconsistent-projection.rs
- tests/ui/nll/issue-61320-normalize.rs
I first considered the alternative of always preferring where-bounds during normalization, regardless of how the trait goal has been proven, by changing `fn merge_candidates` instead: ecda83b30f/compiler/rustc_next_trait_solver/src/solve/assembly/mod.rs (L785)
This approach is more restrictive than the behavior of the old solver, in order to avoid mismatches between trait and normalization goals. It may be breaking in cases where the where-bound adds unnecessary region constraints and we currently don't ever try to normalize the associated type. I would like to detect these cases and, if required, change the approach to exactly match the old solver. I want to minimize cases where attempting to normalize in more places causes code to break.
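For context, here is a rough sketch (our own example, modeled on the affected trivial-bounds test; all names are ours) of the kind of code this is about: a "global" where-bound, i.e. one that mentions no generic parameters, defines an associated type differently from the impl, and the function body must normalize through the where-bound:
```rust
#![feature(trivial_bounds)]
#![allow(trivial_bounds, dead_code)]

trait Project {
    type Out;
    fn get() -> Self::Out;
}

struct Unit;

impl Project for Unit {
    type Out = u8;
    fn get() -> u8 {
        0
    }
}

// The where-bound is "global": it names no generic parameters, and it defines
// `Out` inconsistently with the impl above.
fn uses_where_bound() -> i32
where
    Unit: Project<Out = i32>,
{
    // Inside this body, `<Unit as Project>::Out` normalizes via the where-bound
    // to `i32`, not via the impl to `u8`.
    Unit::get()
}

fn main() {
    // `uses_where_bound` can never be called (its bound is unsatisfiable), but
    // its body must still type-check.
    println!("type-checks with an inconsistent global where-bound");
}
```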
r? `@compiler-errors`
Add missing check for async body when suggesting await on futures.
Currently the compiler suggests adding `.await` to resolve some type conflicts without checking if the conflict happens in an async context. This can lead to the compiler suggesting `.await` in function signatures where it is invalid. Example:
```rs
trait A {
fn a() -> impl Future<Output = ()>;
}
struct B;
impl A for B {
fn a() -> impl Future<Output = impl Future<Output = ()>> {
async { async { () } }
}
}
```
```
error[E0271]: expected `impl Future<Output = impl Future<Output = ()>>` to be a future that resolves to `()`, but it resolves to `impl Future<Output = ()>`
--> bug.rs:6:15
|
6 | fn a() -> impl Future<Output = impl Future<Output = ()>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `()`, found future
|
note: calling an async function returns a future
--> bug.rs:6:15
|
6 | fn a() -> impl Future<Output = impl Future<Output = ()>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
note: required by a bound in `A::{synthetic#0}`
--> bug.rs:2:27
|
2 | fn a() -> impl Future<Output = ()>;
| ^^^^^^^^^^^ required by this bound in `A::{synthetic#0}`
help: consider `await`ing on the `Future`
|
6 | fn a() -> impl Future<Output = impl Future<Output = ()>>.await {
| ++++++
```
The documentation of `suggest_await_on_expect_found` (`compiler/rustc_trait_selection/src/error_reporting/infer/suggest.rs:156`) even mentions such a check, but the check is not actually implemented.
This PR adds that check to ensure `.await` is only suggested within async blocks.
There were 3 tests whose expected output needed to be changed because they had the suggestion outside of an async context. One of them (`tests/ui/async-await/dont-suggest-missing-await.rs`) actually tests for this exact problem, but its expected output still contained the invalid suggestion.
Thanks to `@llenck` for initially noticing the bug and helping with fixing it.
Implement `ByteStr` and `ByteString` types
Approved ACP: https://github.com/rust-lang/libs-team/issues/502
Tracking issue: https://github.com/rust-lang/rust/issues/134915
These types represent human-readable strings that are conventionally,
but not always, UTF-8. The `Debug` impl prints non-UTF-8 bytes using
escape sequences, and the `Display` impl uses the Unicode replacement
character.
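A rough usage sketch (nightly-only; the `std::bstr` module path, the `ByteStr::new` constructor, and the exact output shown in comments are assumptions that may differ from the final API):
```rust
#![feature(bstr)]

use std::bstr::ByteStr;

fn main() {
    // Mostly UTF-8, with an incomplete multi-byte sequence in the middle.
    let bytes = b"hello \xF0\x90 world";
    let s = ByteStr::new(bytes);

    // `Debug` escapes the non-UTF-8 bytes...
    println!("{s:?}"); // e.g. "hello \xF0\x90 world"
    // ...while `Display` substitutes the Unicode replacement character.
    println!("{s}"); // e.g. hello � world
}
```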
This is a minimal implementation of these types and associated trait
impls. It does not add any helper methods to other types such as `[u8]`
or `Vec<u8>`.
I've omitted a few implementations of `AsRef`, `AsMut`, and `Borrow`,
when those would be the second implementation for a type (counting the
`T` impl), to avoid potential inference failures. We can attempt to add
more impls later in standalone commits, and run them through crater.
In addition to the `bstr` feature, I've added a `bstr_internals` feature
for APIs provided by `core` for use by `alloc` but not currently
intended for stabilization.
This API and its implementation are based *heavily* on the `bstr` crate
by Andrew Gallant (`@BurntSushi`).
r? `@BurntSushi`
Refactor `fmt::Display` impls in rustdoc
This PR does a couple of things, with the intention of cleaning up and streamlining some of the `fmt::Display` impls in rustdoc:
1. Use the unstable [`fmt::from_fn`](https://github.com/rust-lang/rust/issues/117729) instead of open-coding it.
2. ~~Replace bespoke implementations of `Itertools::format` with the method itself.~~
4. Some more minor cleanups: deduplicate repeated code (DRY), remove unnecessary calls to `Symbol::as_str()`, and replace some `format!()` calls with lazier alternatives
The changes are mostly cosmetic but some of them might have a slight positive effect on performance.
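For illustration, a hedged sketch of the pattern from item 1 (the helper below is hypothetical, and the `debug_closure_helpers` feature-gate name is our assumption for the gate covering `fmt::from_fn`): instead of a one-off struct with a hand-written `Display` impl, a closure is turned into something displayable:
```rust
#![feature(debug_closure_helpers)] // nightly feature gating `fmt::from_fn`

use std::fmt;

// Hypothetical helper: render a slice as a comma-separated list without
// allocating an intermediate `String` or defining a dedicated struct.
fn comma_separated<'a>(items: &'a [&'a str]) -> impl fmt::Display + 'a {
    fmt::from_fn(move |f| {
        for (i, item) in items.iter().enumerate() {
            if i > 0 {
                f.write_str(", ")?;
            }
            f.write_str(item)?;
        }
        Ok(())
    })
}

fn main() {
    println!("{}", comma_separated(&["a", "b", "c"])); // prints: a, b, c
}
```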
tests: Port `jobserver-error` to rmake.rs
Part of #121876.
This PR ports `tests/run-make/jobserver-error` to rmake.rs, and is basically #128789 slightly adjusted.
Most of the complexity here is getting `/dev/null` piped to fd 3 with std's `Command`, whereas with a shell this is much easier (as is evident from the `Makefile` version).
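For a rough idea of what that involves, here is a sketch under assumptions (Unix-only, the external `libc` crate available, and a placeholder child command rather than the test's real invocation):
```rust
use std::fs::File;
use std::os::fd::AsRawFd;
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Open /dev/null in the parent; the child inherits the descriptor table.
    let dev_null = File::open("/dev/null")?;
    let null_fd = dev_null.as_raw_fd();

    // Runs in the child between fork and exec: duplicate /dev/null onto fd 3,
    // which a shell would express simply as `3</dev/null`. The duplicate is not
    // close-on-exec, so it survives the exec.
    let redirect = move || -> std::io::Result<()> {
        if unsafe { libc::dup2(null_fd, 3) } == -1 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(())
    };

    let mut cmd = Command::new("true"); // placeholder; the real test runs rustc
    unsafe {
        cmd.pre_exec(redirect);
    }
    let status = cmd.status()?;
    println!("child exited with {status}");
    Ok(())
}
```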
Supersedes #128789.
This PR is co-authored with `@Oneirical` and `@coolreader18`.
try-job: aarch64-gnu
try-job: i686-gnu-1
try-job: x86_64-gnu-debug
try-job: x86_64-gnu-llvm-18-1
Add test for checking used glibc symbols
This test checks that we do not use too new glibc symbols in the compiler on x64 GNU Linux, in order not to break our [glibc promises](https://blog.rust-lang.org/2022/08/01/Increasing-glibc-kernel-requirements.html).
One thing that isn't solved in this PR yet is making sure that the test only runs on `dist` CI, more specifically on the `dist-x86_64-linux` runner, in the opt-dist post-optimization tests (it can fail elsewhere, which doesn't matter). Any suggestions on how to do that are welcome.
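To sketch the idea (this is not the test's actual code; the binary path, the baseline version, and the reliance on `objdump -T` output are placeholder assumptions):
```rust
use std::process::Command;

// Hypothetical baseline; the real promise is documented in the linked blog post.
const MAX_GLIBC: (u32, u32) = (2, 17);

fn main() {
    // Dump the dynamic symbol table of the binary under test.
    let out = Command::new("objdump")
        .args(["-T", "/path/to/rustc"]) // placeholder path
        .output()
        .expect("failed to run objdump");
    let text = String::from_utf8_lossy(&out.stdout);

    for line in text.lines() {
        // Versioned glibc symbols show up as e.g. `GLIBC_2.18`.
        if let Some(pos) = line.find("GLIBC_") {
            let ver: String = line[pos + "GLIBC_".len()..]
                .chars()
                .take_while(|c| c.is_ascii_digit() || *c == '.')
                .collect();
            let mut nums = ver.split('.').filter_map(|p| p.parse::<u32>().ok());
            if let (Some(major), Some(minor)) = (nums.next(), nums.next()) {
                if (major, minor) > MAX_GLIBC {
                    println!("too-new glibc symbol: {}", line.trim());
                }
            }
        }
    }
}
```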
Fixes: https://github.com/rust-lang/rust/issues/134037
r? `@jieyouxu`
Update our range `assume`s to the format that LLVM prefers
I found out in https://github.com/llvm/llvm-project/issues/123278#issuecomment-2597440158 that the way I started emitting the `assume`s in #109993 was suboptimal. As seen in that LLVM issue, our current approach (sometimes with two `assume`s) can at times lead to CVP/SCCP not realizing what's happening, because one of the `assume`s turns into a `ne` instead of conveying a range.
So this updates how it's emitted from
```
assume( x >= LOW );
assume( x <= HIGH );
```
or
```
// (for ranges that wrap around)
assume( (x <= LOW) | (x >= HIGH) );
```
to
```
assume( (x - LOW) <= (HIGH - LOW) );
```
so that we don't need multiple `icmp`s or multiple `assume`s for a single value, and both wrapping and non-wrapping ranges emit the same shape.
(And we don't bother emitting the subtraction if `LOW` is zero, since that's trivial for us to check too.)
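As a sanity check of this equivalence (a brute-force demonstration of our own, not compiler code; `start`/`end` name the possibly-wrapping valid range):
```rust
// Two-comparison form: an ordinary range check, or the wrap-around variant.
fn in_range_two_cmps(x: u8, start: u8, end: u8) -> bool {
    if start <= end {
        start <= x && x <= end
    } else {
        // wrapping range: valid values are start..=255 and 0..=end
        x >= start || x <= end
    }
}

// Single-comparison form: one wrapping subtraction plus one compare, with the
// same shape for both wrapping and non-wrapping ranges.
fn in_range_single_cmp(x: u8, start: u8, end: u8) -> bool {
    x.wrapping_sub(start) <= end.wrapping_sub(start)
}

fn main() {
    for start in 0..=255u8 {
        for end in 0..=255u8 {
            for x in 0..=255u8 {
                assert_eq!(
                    in_range_two_cmps(x, start, end),
                    in_range_single_cmp(x, start, end)
                );
            }
        }
    }
    println!("single-comparison form matches the two-comparison form for all u8 triples");
}
```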
remove support for the (unstable) #[start] attribute
As explained by `@Noratrieb`:
`#[start]` should be deleted. It's nothing but an accidentally leaked implementation detail that's a not very useful mix between "portable" entrypoint logic and bad abstraction.
I think the way the stable user-facing entrypoint should work (and works today on stable) is pretty simple:
- `std`-using cross-platform programs should use `fn main()`. The compiler, together with `std`, will then ensure that code ends up at `main` (by having a platform-specific entrypoint that gets directed through `lang_start` in `std` to `main`, but that's just an implementation detail)
- `no_std` platform-specific programs should use `#![no_main]` and define their own platform-specific entrypoint symbol with `#[no_mangle]`, like `main`, `_start`, `WinMain` or `my_embedded_platform_wants_to_start_here`. Most of them only support a single platform anyway, and need `cfg` for the different platforms' ways of passing arguments or other things anyway (a minimal sketch of this pattern follows this list)
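A minimal sketch of that `no_std` pattern (our own example, not from the PR; the libc-based Linux-style target, the `-C panic=abort` build flag, and the entry symbol are assumptions):
```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Required for `no_std` binaries; just spin on panic here.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// On this hypothetical target the C runtime calls `main`; other platforms use
// `_start`, `WinMain`, or their own entry symbol instead.
#[no_mangle]
pub extern "C" fn main(_argc: i32, _argv: *const *const u8) -> i32 {
    0
}
```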
`#[start]` is in a super weird position of being neither of those two. It tries to pretend that it's cross-platform, but its signature is a total lie. Those arguments are just stubbed out to zero on ~~Windows~~ wasm, for example. It also only handles the platform-specific entrypoints for a few platforms that are supported by `std`, like Windows or Unix-likes. `my_embedded_platform_wants_to_start_here` can't use it, and neither could a libc-less Linux program.
So we have an attribute that only works in some cases anyways, that has a signature that's a total lie (and a signature that, as I might want to add, has changed recently, and that I definitely would not be comfortable giving *any* stability guarantees on), and where there's a pretty easy way to get things working without it in the first place.
Note that this feature has **not** been RFCed in the first place.
*This comment was posted [in May](https://github.com/rust-lang/rust/issues/29633#issuecomment-2088596042), and so far nobody has spoken up in that issue with a use case that would require keeping the attribute.*
Closes https://github.com/rust-lang/rust/issues/29633
try-job: x86_64-gnu-nopt
try-job: x86_64-msvc-1
try-job: x86_64-msvc-2
try-job: test-various
Rework dyn trait lowering to stop being so intertwined with trait alias expansion
This PR reworks the trait object lowering code to stop handling trait aliases in such a funky way, and removes the `TraitAliasExpander` in favor of a much simpler design. This refactoring is important for making the code that I'm writing in https://github.com/rust-lang/rust/pull/133397 understandable and easy to maintain, so the diagnostics regressions are IMO inevitable.
In the old trait object lowering code, we used to be a bit sloppy with the lists of traits in their unexpanded and expanded forms. This PR largely rewrites this logic to expand the trait aliases *once* and handle them more responsibly throughout afterwards.
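For context, here is a small nightly-only example (ours, not from the PR) of the kind of code involved: a trait alias used in `dyn` position, which has to be expanded into its component traits before the trait object can be lowered:
```rust
#![feature(trait_alias)]

use std::fmt::Debug;

// A trait alias that expands to one principal trait plus an auto trait.
trait PrintableSend = Debug + Send;

fn print_it(obj: &dyn PrintableSend) {
    // After expansion this parameter type is `&(dyn Debug + Send)`.
    println!("{obj:?}");
}

fn main() {
    print_it(&42);
}
```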
Please review this with whitespace disabled.
r? lcnr
Fix dev guide docs for error-pattern
I know it would have made more sense to open this PR against the dev guide repo, but I had already made the fix before I realized that.
r? `@jieyouxu`
Stable Hash: Ignore all HirIds that just identify the node itself
This should provide better incremental caching, but it seems there is more to it.
These IDs also serve no purpose in the stable hash of the item they refer to; only when referring to *another* item is it important that we hash the `HirId`. So we can at least avoid the cost during stable hashing, even if we don't yet benefit from it by preventing some queries' caches from being invalidated.
I'm unsure how to make sure we do this right by construction; it would be nice to do something type-based.
Add gpu-kernel calling convention
The amdgpu-kernel calling convention was reverted in commit f6b21e90d1ec01081bc2619efb68af6788a63d65 (#120495 and https://github.com/rust-lang/rust-analyzer/pull/16463) due to inactivity in the amdgpu target.
This PR introduces a `gpu-kernel` calling convention that translates to `ptx_kernel` or `amdgpu_kernel`, depending on the target Rust compiles for.
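A hedged sketch of how this might be used (the `abi_gpu_kernel` feature-gate name is taken from the tracking issue; the target, build invocation, and kernel body are illustrative assumptions, and this only builds for a GPU target such as `--target nvptx64-nvidia-cuda --crate-type cdylib`):
```rust
#![feature(abi_gpu_kernel)]
#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// Lowered to the `ptx_kernel` ABI on nvptx targets and to `amdgpu_kernel` on
// amdgpu targets.
#[no_mangle]
pub extern "gpu-kernel" fn fill(out: *mut u32, value: u32, len: usize) {
    // Trivially fill a buffer; a real kernel would index by thread id instead.
    for i in 0..len {
        unsafe { *out.add(i) = value };
    }
}
```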
Tracking issue: #135467
amdgpu target tracking issue: #135024
Use trait definition cycle detection for trait alias definitions, too
Fixes #133901
In general doing this for `All` is not right, but this code path is specifically for traits and trait aliases, and there we only ever use `All` for trait aliases.