Use common (American) spellings

Co-Authored-By: Yuki Okushi <huyuumi.dev@gmail.com>
Who? Me?! 2020-03-08 21:21:33 -05:00
parent 0dd7f6291c
commit 0e05dea45a
3 changed files with 11 additions and 11 deletions


@@ -48,8 +48,8 @@ There are a few benefits to using LLVM:
Once LLVM IR for all of the functions and statics, etc is built, it is time to
start running LLVM and its optimization passes. LLVM IR is grouped into
"modules". Multiple "modules" can be codegened at the same time to aid in
-multi-core utilisation. These "modules" are what we refer to as _codegen
-units_. These units were established way back during monomorphisation
+multi-core utilization. These "modules" are what we refer to as _codegen
+units_. These units were established way back during monomorphization
collection phase.
Once LLVM produces objects from these modules, these objects are passed to the
@@ -57,8 +57,8 @@ linker along with, optionally, the metadata object and an archive or an
executable is produced.
It is not necessarily the codegen phase described above that runs the
-optimisations. With certain kinds of LTO, the optimisation might happen at the
-linking time instead. It is also possible for some optimisations to happen
+optimizations. With certain kinds of LTO, the optimization might happen at the
+linking time instead. It is also possible for some optimizations to happen
before objects are passed on to the linker and some to happen during the
linking.
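As a practical aside (not part of this commit): the number of codegen units and the point at which LTO runs can both be tuned from a Cargo profile. A minimal sketch, using the real `codegen-units` and `lto` profile keys:

```toml
# Cargo.toml -- profile settings that map onto the behavior described above.
[profile.release]
# A single codegen unit gives LLVM the whole crate as one "module": less
# parallelism, but more cross-function optimization. Higher values trade
# optimization opportunities for faster, more parallel compiles.
codegen-units = 1
# "Fat" LTO defers cross-module optimization to link time, as the text notes.
lto = "fat"
```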


@@ -2,7 +2,7 @@
Now that we have a list of symbols to generate from the collector, we need to
generate some sort of codegen IR. In this chapter, we will assume LLVM IR,
-since that's what rustc usually uses. The actual monomorphisation is performed
+since that's what rustc usually uses. The actual monomorphization is performed
as we go, while we do the translation.
Recall that the backend is started by
@@ -34,13 +34,13 @@ Before a function is translated a number of simple and primitive analysis
passes will run to help us generate simpler and more efficient LLVM IR. An
example of such an analysis pass would be figuring out which variables are
SSA-like, so that we can translate them to SSA directly rather than relying on
-LLVM's `mem2reg` for those variables. The anayses can be found in
+LLVM's `mem2reg` for those variables. The analysis can be found in
[`rustc_codegen_ssa::mir::analyze`][mirana].
[mirana]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/analyze/index.html
Usually a single MIR basic block will map to a LLVM basic block, with very few
-exceptions: intrinsic or function calls and less basic MIR statemenets like
+exceptions: intrinsic or function calls and less basic MIR statements like
`assert` can result in multiple basic blocks. This is a perfect lede into the
non-portable LLVM-specific part of the code generation. Intrinsic generation is
fairly easy to understand as it involves very few abstraction levels in between


@@ -41,7 +41,7 @@ fn main() {
}
```
-The monomorphisation collector will give you a list of `[main, banana,
+The monomorphization collector will give you a list of `[main, banana,
peach::<u64>]`. These are the functions that will have machine code generated
for them. Collector will also add things like statics to that list.
@@ -49,9 +49,9 @@ See [the collector rustdocs][collect] for more info.
[collect]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/monomorphize/collector/index.html
-The monomorphisation collector is run just before MIR lowering and codegen.
+The monomorphization collector is run just before MIR lowering and codegen.
[`rustc_codegen_ssa::base::codegen_crate`][codegen1] calls the
-[`collect_and_partition_mono_items`][mono] query, which does monomorphisation
+[`collect_and_partition_mono_items`][mono] query, which does monomorphization
collection and then partitions them into [codegen
units](../appendix/glossary.md).
@@ -60,7 +60,7 @@ units](../appendix/glossary.md).
## Polymorphization
-As mentioned above, monomorphisation produces fast code, but it comes at the
+As mentioned above, monomorphization produces fast code, but it comes at the
cost of compile time and binary size. [MIR
optimizations](../mir/optimizations.md) can help a bit with this. Another
optimization currently under development is called _polymorphization_.