Use an automatic algorithm to generate strength reduction code.
You give it all the linear combination (a*x+b*y) instructions in your
architecture, and it figures out the rest.
Just amd64 and arm64 for now.
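For illustration, a constant multiply such as x*9 can be expressed with a
single linear-combination instruction on each target (a hand-written sketch,
not the generated rules; the lowerings in the comment are examples only):

    func times9(x int64) int64 {
        return x * 9 // e.g. amd64: LEAQ (AX)(AX*8), AX;  arm64: ADD R0<<3, R0, R0
    }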
Fixes #67575
Change-Id: I35c69382bebb1d2abf4bb4e7c43fd8548c6c59a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/626998
Reviewed-by: Jakub Ciolek <jakub@ciolek.dev>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
For riscv64/rva22u64 and above, we can intrinsify math/bits.OnesCount
using the CPOP/CPOPW machine instructions. Since the native Go
implementation of OnesCount is relatively expensive, it is also
worth emitting a check for Zbb support when compiled for rva20u64.
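A rough sketch of the shape of the emitted dispatch (hand-written here for
illustration; hasZbb, cpop and genericOnesCount64 are hypothetical names, and
the real check is generated by the compiler rather than written in Go):

    var hasZbb bool // assumption: populated by Zbb feature detection at startup

    func onesCount64(x uint64) int {
        if hasZbb {
            return cpop(x) // hypothetical intrinsic: a single CPOP instruction
        }
        return genericOnesCount64(x) // native Go fallback
    }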
On a Banana Pi F3, with GORISCV64=rva22u64:
│ oc.1 │ oc.2 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.930n ± 0% 4.389n ± 0% -74.08% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.016n ± 0% -11.10% (p=0.000 n=10)
OnesCount16-8 9.404n ± 0% 5.015n ± 0% -46.67% (p=0.000 n=10)
OnesCount32-8 13.165n ± 0% 4.388n ± 0% -66.67% (p=0.000 n=10)
OnesCount64-8 16.300n ± 0% 4.388n ± 0% -73.08% (p=0.000 n=10)
geomean 11.40n 4.629n -59.40%
On a Banana Pi F3, compiled with GORISCV64=rva20u64 and with Zbb
detection enabled:
│ oc.3 │ oc.4 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.930n ± 0% 5.643n ± 0% -66.67% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.642n ± 0% ~ (p=0.447 n=10)
OnesCount16-8 10.030n ± 0% 6.896n ± 0% -31.25% (p=0.000 n=10)
OnesCount32-8 13.170n ± 0% 5.642n ± 0% -57.16% (p=0.000 n=10)
OnesCount64-8 16.300n ± 0% 5.642n ± 0% -65.39% (p=0.000 n=10)
geomean 11.55n 5.873n -49.16%
On a Banana Pi F3, compiled with GORISCV64=rva20u64 but with Zbb
detection disabled:
│ oc.3 │ oc.5 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.93n ± 0% 29.47n ± 0% +74.07% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.643n ± 0% ~ (p=0.191 n=10)
OnesCount16-8 10.03n ± 0% 15.05n ± 0% +50.05% (p=0.000 n=10)
OnesCount32-8 13.17n ± 0% 18.18n ± 0% +38.04% (p=0.000 n=10)
OnesCount64-8 16.30n ± 0% 21.94n ± 0% +34.60% (p=0.000 n=10)
geomean 11.55n 15.84n +37.16%
For hardware without Zbb, this adds ~5ns overhead, while for hardware
with Zbb we achieve a performance gain of up to 11ns. It is worth
noting that OnesCount8 is cheap enough that it is preferable to stick
with the generic version in this case.
Change-Id: Id657e40e0dd1b1ab8cc0fe0f8a68df4c9f2d7da5
Reviewed-on: https://go-review.googlesource.com/c/go/+/660856
Reviewed-by: Carlos Amedee <carlos@golang.org>
Reviewed-by: Meng Zhuo <mengzhuo1203@gmail.com>
Reviewed-by: Mark Ryan <markdryan@rivosinc.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
That's where the unified IR writer expects it.
Fixes #73476
Change-Id: Ic22bd8dee5be5991e6d126ae3f6eccb2acdc0b19
Reviewed-on: https://go-review.googlesource.com/c/go/+/667415
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
When replacing a loop where the iteration variable has a named type,
we need to compute the last iteration value as i = T(len(a)-1), not
just i = len(a)-1.
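Illustrative shape of an affected loop (hypothetical; the index has a named
type, so the value used for the last iteration must carry that type):

    type Idx int16

    func sum(a [8]byte) (s int) {
        for i := Idx(0); int(i) < len(a); i++ {
            s += int(a[i]) // the last iteration value must be Idx(len(a)-1), not the untyped len(a)-1
        }
        return s
    }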
Fixes #73491
Change-Id: Ic1cc3bdf8571a40c10060f929a9db8a888de2b70
Reviewed-on: https://go-review.googlesource.com/c/go/+/667815
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
The full 64x64->128 multiply comes up when using bits.Mul64.
The 64x64->64+overflow multiply comes up in unsafe.Slice when using
a constant length.
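For example, illustrative use sites (not the compiler changes themselves):

    import (
        "math/bits"
        "unsafe"
    )

    func demo(x, y uint64, p *byte) (uint64, uint64, []byte) {
        hi, lo := bits.Mul64(x, y) // full 64x64->128 multiply
        s := unsafe.Slice(p, 1024) // constant length: 64x64->64 multiply plus overflow check
        return hi, lo, s
    }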
Change-Id: I298515162ca07d804b2d699d03bc957ca30a4ebc
Reviewed-on: https://go-review.googlesource.com/c/go/+/667175
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
For any len() which requires the evaluation of its arg (according to the spec).
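For example, per the spec:

    var ch chan [4]int

    func f() *[4]int { return new([4]int) }

    func demo() {
        var a [4]int
        _ = len(a)    // constant; a is not evaluated
        _ = len(*f()) // contains a function call, so f() is evaluated
        _ = len(<-ch) // contains a channel receive, so the receive is evaluated
    }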
Update #72844
Change-Id: Id2b0bcc78073a6d5051abd000131dafdf65e7f26
Reviewed-on: https://go-review.googlesource.com/c/go/+/658097
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
If the thing we're ranging over is an array or ptr to array, and
it doesn't have a function call or channel receive in it, then we
shouldn't evaluate it.
Typecheck the ranged-over value as a constant in that case.
That makes the unified exporter replace the range expression
with a constant int.
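For example, an illustrative case:

    func demo(p *[10]int) {
        for i := range p { // p is not evaluated; the range is just the constant 10
            _ = i
        }
    }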
Change-Id: I0d4ea081de70d20cf6d1fa8d25ef6cb021975554
Reviewed-on: https://go-review.googlesource.com/c/go/+/659317
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Robert Griesemer <gri@google.com>
Broadly speaking, escape analysis has two main phases. First, it
traverses the AST while building a data-flow graph of locations and
edges. Second, during "solve", it repeatedly walks the data-flow graph
while carefully propagating information about each location, including
whether a location's address reaches the heap.
Once escape analysis is in the solve phase and repeatedly walking the
data-flow graph, almost all the information it needs is within the
location graph, with a notable exception being the ir.Class of an
ir.Name, which currently must be checked by following a pointer from
the location to its ir.Node.
For typical graphs, that does not matter much, but if the graph becomes
large enough, cache misses in the inner solve loop start to matter more,
and the class is checked many times in the inner loop.
We therefore store the class information on the location in the graph
to reduce how much memory we need to load in the inner loop.
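A generic sketch of the technique (the field and type names here are
hypothetical, not the actual escape-analysis definitions):

    type name struct{ class uint8 } // stands in for ir.Name

    type location struct {
        n     *name // cold: only followed when the full node is needed
        class uint8 // hot: copied from n.class when the location is created
    }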
The package github.com/microsoft/typescript-go/internal/checker
has many locations, and compilation currently spends most of its time
in escape analysis.
This CL gives roughly a 30% speedup for wall clock compilation time
for the checker package:
go1.24.0: 91.79s
this CL: 64.98s
Linux perf shows a healthy reduction for example in l2_request.miss and
dTLB-load-misses on an amd64 test VM.
We could tweak things a bit more, though initial review feedback
has suggested it would be good to get this in as it stands.
Subsequent CLs in this stack give larger improvements.
Updates #72815
Change-Id: I3117430dff684c99e6da1e0d7763869873379238
Reviewed-on: https://go-review.googlesource.com/c/go/+/657295
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Jake Bailey <jacob.b.bailey@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
To simplify the code a bit.
Change-Id: Ia72f576de59ff161ec389a4992bb635f89783540
GitHub-Last-Rev: eaec8216be
GitHub-Pull-Request: golang/go#73411
Reviewed-on: https://go-review.googlesource.com/c/go/+/666117
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
CL 653856 enabled stack allocation of variable-sized makeslice results.
This CL adds debug hashing of that change, plus a debug flag
to control the byte threshold used.
The debug hashing machinery means we also now have a way to disable just
the CL 653856 optimization by doing -gcflags='all=-d=variablemakehash=n'
or similar, though the stderr output will then typically have many
lines of debug hash output.
Using this CL plus the bisect command, I was able to retroactively
find one of the lines of code responsible for #73199:
$ bisect -compile=variablemake go test -skip TestListWireGuardDrivers
[...]
bisect: FOUND failing change set
--- change set #1 (enabling changes causes failure)
./security_windows.go:1321:38 (variablemake)
./security_windows.go:1321:38 (variablemake)
---
Previously, I had tracked down those lines by diffing '-gcflags=-m=1'
output and brief code inspection, but seeing the bisect was very nice.
This CL also adds a compiler debug flag to control the threshold for
stack allocation of variably sized make results. This can help
us identify more code that is relying on certain stack allocations.
This might be a temporary flag that we delete prior to Go 1.25
(given we would not want people to rely on it), or maybe it
might make sense to keep it for some period of time beyond the release
of Go 1.25 to help the ecosystem shake out other bugs.
Using these two flags together (and picking a threshold of 64 rather
than the default of 32), it looks for example like this
x/sys/windows code might be relying on stack allocation of
a byte slice:
$ bisect -compile=variablemake go test -gcflags=-d=variablemakethreshold=64 -skip TestListWireGuardDrivers
[...]
bisect: FOUND failing change set
--- change set #1 (enabling changes causes failure)
./syscall_windows_test.go:1178:16 (variablemake)
Updates #73199
Fixes #73253
Change-Id: I160179a0e3c148c3ea86be5c9b6cea8a52c3e5b7
Reviewed-on: https://go-review.googlesource.com/c/go/+/663795
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
When the compiler builds a Go package with DWARF 5 generation enabled,
it emits relocations into various generated DWARF symbols (ex:
SDWARFFCN) that use the R_DWTXTADDR_* flavor of relocations. The
specific size of this relocation is selected based on the total number
of functions in the package -- if the package is tiny (just a couple
funcs) we can use R_DWTXTADDR_U1 relocs (which target just a byte); if
the package is larger we might need to use the 2-byte or 3-byte flavor
of this reloc.
Prior to this patch, the strategy used to pick the right relocation
size was flawed in that it didn't take into account packages with
assembly code. For example, if you have a package P with 200 funcs
written in Go source and 200 funcs written in assembly, you can't use
the R_DWTXTADDR_U1 reloc flavor for indirect text references since the
real function count for the package (asm + go) exceeds 255.
The new strategy (with this patch) is to have the compiler look at the
"symabis" file to determine the count of assembly functions. For the
assembler, rather than create additional plumbing to pass in the Go
source func count, we just use a dummy (artificially high) function
count so as to select a relocation that will be large enough.
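Illustratively, the selection keys off the total (Go + assembly) function
count; only R_DWTXTADDR_U1 is named above, so the wider names below are
hypothetical placeholders, not the actual compiler/linker code:

    switch {
    case totalFuncs <= 0xff:
        rt = R_DWTXTADDR_U1 // 1-byte function index
    case totalFuncs <= 0xffff:
        rt = R_DWTXTADDR_U2 // hypothetical 2-byte flavor
    default:
        rt = R_DWTXTADDR_U3 // hypothetical 3-byte flavor
    }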
Fixes #72810.
Updates #26379.
Change-Id: I98d04f3c6aacca1dafe1f1610c99c77db290d1d8
Reviewed-on: https://go-review.googlesource.com/c/go/+/663235
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Give people a way to turn this optimization off.
(Currently the constant-sized make() stack allocation is not disabled
with -N. Kinda inconsistent, but oh well, probably worse to change it now.)
Update #73253
Change-Id: Idb9ffde444f34e70673147fd6a962368904a7a55
Reviewed-on: https://go-review.googlesource.com/c/go/+/664655
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Jorropo <jorropo.pgm@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: thepudds <thepudds1460@gmail.com>
It's really only needed for stores and store-like instructions
(atomic exchange, compare-and-swap, ...).
Fixes #73180
Change-Id: I8ecd833a301355adf0fa4bff43250091640c6226
Reviewed-on: https://go-review.googlesource.com/c/go/+/663155
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
On 32-bit systems, these need to be aligned to 8 bytes, even though the
typechecker doesn't tell us that.
The 64-bit allocations might be the target of atomic operations
that require 64-bit alignment.
Fixes 386 longtest builder.
Fixes #73173
Change-Id: I68f6a4f40c7051d29c57ecd560c8d920876a56a6
Reviewed-on: https://go-review.googlesource.com/c/go/+/663015
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
This lets us get rid of lots of specialized opcodes for storing zero.
Instead, use regular store opcodes that just happen to use the zero
register as one of their inputs.
Change-Id: I2902a6f9b0831cb598df45189ca6bb57221bef72
Reviewed-on: https://go-review.googlesource.com/c/go/+/633075
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
For variable-sized allocations.
Turns out that we already implement the correct escape semantics
for this case. Even when the result of the "make" does not escape,
everything assigned into it does.
Change-Id: Ia123c538d39f2f1e1581c24e4135a65af3821c5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/657937
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Robert Griesemer <gri@google.com>
If multiple small scalars escape to the heap, allocate them together
with a single allocation. They are going to be aggregated together
in the tiny allocator anyway, so we might as well make just one runtime call.
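For example (illustrative), both escaping scalars below are small and
pointer-free, so they can now share a single allocation:

    func pair() (*int32, *int32) {
        x := int32(1) // escapes
        y := int32(2) // escapes
        return &x, &y // previously two allocation calls; now they can share one
    }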
Change-Id: I4317e29235af63de378a26436a18d7fb0c39e41f
Reviewed-on: https://go-review.googlesource.com/c/go/+/648536
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Instead of always allocating variable-sized "make" calls on the heap,
allocate a small, constant-sized array on the stack and use that array
as the backing store if it is big enough.
Requires that the result of the "make" doesn't escape.
    if cap <= K {
        var arr [K]E
        slice = arr[:len:cap]
    } else {
        slice = makeslice(E, len, cap)
    }
Pretty conservative for now: K = 32/sizeof(E). The slice header is
already 24 bytes, so wasting 32 bytes of stack if the requested size
is too big isn't that bad. Larger would waste more stack space but
maybe avoid more allocations.
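For example (illustrative), with element type byte, K is 32, so this buffer
can now be stack-backed whenever n <= 32 at run time:

    func checksum(data []byte, n int) (s int) {
        buf := make([]byte, n) // does not escape; backed by a [32]byte stack array when n <= 32
        copy(buf, data)
        for _, b := range buf {
            s += int(b)
        }
        return s
    }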
This CL also requires the element type be pointer-free. Maybe we
could relax that at some point, but it is hard. If the element type
has pointers we can get heap->stack pointers (in the case where the
requested size is too big and the slice is heap allocated).
Note that this only handles the case of makeslice called directly from
compiler-generated code. It does not handle slices built in the
runtime on behalf of the program (e.g. in growslice). Some of those
are currently handled by passing in a tmpBuf (e.g. concatstrings),
but we could probably do more.
Change-Id: I8378efad527cd00d25948a80b82a68d88fbd93a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/653856
Reviewed-by: Robert Griesemer <gri@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Improve the compiler's store-to-load forwarding optimization by relaxing the
type comparison condition. Instead of requiring exact type equality (CMPeq),
we now use copyCompatibleType, which allows forwarding between compatible
types where safe.
Fix several size comparison bugs in the nested store patterns. Previously,
we were comparing the size of the outer store with the size of the load,
rather than with the size of the actual store being forwarded from.
Skip OpConvert in dead store elimination to help get rid of dead stores such
as zeroing slices. OpConvert, like OpInlMark, doesn't really use the memory.
This optimization is particularly beneficial for code that creates slices with
computed pointers, such as the runtime's heapBitsSlice function, where
intermediate calculations were previously causing the compiler to miss
store-to-load forwarding opportunities.
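An illustrative shape of such code (hypothetical, loosely modeled on the
heapBitsSlice pattern mentioned above):

    import "unsafe"

    func firstByte(base unsafe.Pointer, off uintptr, n int) byte {
        p := unsafe.Add(base, off)
        s := unsafe.Slice((*byte)(p), n) // stores the computed pointer into the slice header
        return s[0]                      // this load of the data pointer can now be forwarded from that store
    }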
Local sweet run result on an x86_64 laptop:
│ Orig.res │ Hopt.res │
│ sec/op │ sec/op vs base │
BiogoIgor-8 5.303 ± 1% 5.322 ± 1% ~ (p=0.190 n=10)
BiogoKrishna-8 7.894 ± 1% 7.828 ± 2% ~ (p=0.190 n=10)
BleveIndexBatch100-8 2.257 ± 1% 2.248 ± 2% ~ (p=0.529 n=10)
EtcdPut-8 30.12m ± 1% 30.03m ± 1% ~ (p=0.796 n=10)
EtcdSTM-8 127.1m ± 1% 126.2m ± 0% -0.74% (p=0.023 n=10)
GoBuildKubelet-8 52.21 ± 0% 52.05 ± 1% ~ (p=0.063 n=10)
GoBuildKubeletLink-8 4.342 ± 1% 4.305 ± 0% -0.85% (p=0.000 n=10)
GoBuildIstioctl-8 43.33 ± 0% 43.24 ± 0% -0.22% (p=0.015 n=10)
GoBuildIstioctlLink-8 4.604 ± 1% 4.598 ± 0% ~ (p=0.063 n=10)
GoBuildFrontend-8 15.33 ± 0% 15.29 ± 0% ~ (p=0.143 n=10)
GoBuildFrontendLink-8 740.0m ± 1% 737.7m ± 1% ~ (p=0.912 n=10)
GopherLuaKNucleotide-8 9.590 ± 1% 9.656 ± 1% ~ (p=0.165 n=10)
MarkdownRenderXHTML-8 96.97m ± 1% 97.26m ± 2% ~ (p=0.105 n=10)
Tile38QueryLoad-8 335.9µ ± 1% 335.6µ ± 1% ~ (p=0.481 n=10)
geomean 1.336 1.333 -0.22%
Change-Id: I031552623e6d5a3b1b5be8325e6314706e45534f
Reviewed-on: https://go-review.googlesource.com/c/go/+/662075
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Carlos Amedee <carlos@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
This change borrows code from CL 631356 by Emmanuel Odeke (thanks!).
Fixes #70549.
Change-Id: Id6f794ea2a95b4297999456f22c6e02890fce13b
Reviewed-on: https://go-review.googlesource.com/c/go/+/662775
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
Reviewed-by: Mark Freeman <mark@golang.org>
When both a direct call and an interface call appear on the same line,
PGO devirtualization may make a suboptimal decision. In some cases,
the directly called function becomes a candidate for devirtualization
if no other relevant outgoing edges with non-zero weight exist for the
caller's IRNode in the WeightedCG. The edge to this candidate is
considered the hottest. Despite having zero weight, this edge still
causes the interface call to be devirtualized.
This CL prevents devirtualization when the weight of the hottest edge
is 0.
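An illustrative shape of the problematic pattern (hypothetical names):

    type Shape interface{ Area() float64 }

    func add(a, b float64) float64 { return a + b }

    func demo(s Shape) float64 {
        // A direct call (add) and an interface call (s.Area) on the same line.
        // A zero-weight edge to add could previously be treated as the hottest
        // candidate, causing s.Area to be devirtualized suboptimally.
        return add(1, 2) + s.Area()
    }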
Fixes #72092
Change-Id: I06c0c5e080398d86f832e09244aceaa4aeb98721
Reviewed-on: https://go-review.googlesource.com/c/go/+/655475
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Change-Id: I52e5de04d137238d6f6779edcc662f5c7433c61e
Reviewed-on: https://go-review.googlesource.com/c/go/+/660195
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Robert Griesemer <gri@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Remove the 'NoInline' field from the CallExpr structure, as it's no longer
used after enabling of tail call inlining.
Change-Id: Ief3ada9938589e7a2f181582ef2758ebc4d03aad
Reviewed-on: https://go-review.googlesource.com/c/go/+/655816
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Change-Id: I0a3ce2e823697eee5bb5e7d5ea0ef025132c0689
Reviewed-on: https://go-review.googlesource.com/c/go/+/661655
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Optimise more branches with zero on riscv64. In particular, BLTU with
zero occurs with IsInBounds checks for index zero. This currently results
in two instructions and requires an additional register:
    li   t2, 0
    bltu t2, t1, 0x174b4
This is equivalent to checking that the bound is not equal to zero. With
this change:
    bnez t1, 0x174c0
This removes more than 500 instructions from the Go binary on riscv64.
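An illustrative Go source pattern that produces such a check (hypothetical):

    func first(s []int) int {
        return s[0] // the IsInBounds check for index zero ("0 < len(s)") now compiles to a single bnez
    }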
Change-Id: I6cd861d853e3ef270bd46dacecdfaa205b1c4644
Reviewed-on: https://go-review.googlesource.com/c/go/+/606715
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Meng Zhuo <mengzhuo1203@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Also, rewrite some uses of LookupFieldOrMethod in terms of it.
+ doc, relnote
Fixes #70737
Change-Id: I58a6dd78ee78560d8b6ea2d821381960a72660ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/647196
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Robert Griesemer <gri@google.com>
ssa.Sym is only implemented by *ir.Name or *obj.LSym.
Change-Id: Ia171db618abd8b438fcc2cf402f40f3fe3ec6833
Reviewed-on: https://go-review.googlesource.com/c/go/+/660995
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Currently, if the user stops the timer in a B.Loop benchmark loop, the
benchmark will run until it hits the timeout and fails.
Fix this by detecting that the timer is stopped and failing the
benchmark right away. We avoid making the fast path more expensive for
this check by "poisoning" the B.Loop iteration counter when the timer
is stopped so that it falls back to the slow path, which can check the
timer.
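For example (illustrative), this benchmark now fails promptly instead of
running until the timeout:

    import "testing"

    func BenchmarkStopped(b *testing.B) {
        for b.Loop() {
            b.StopTimer() // detected via the poisoned iteration counter; the benchmark fails right away
        }
    }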
This causes b to escape from B.Loop, which is totally harmless because
it was already definitely heap-allocated. But it causes the
test/inline_testingbloop.go errorcheck test to fail. I don't think the
escape messages actually mattered to that test; they just had to be
matched. To fix this, we drop the debug level to -m=1, since -m=2
prints a lot of extra information for escaping parameters that we
don't want to deal with, and change one error check to allow b to
escape.
Fixes #72971.
Change-Id: I7d4abbb1ec1e096685514536f91ba0d581cca6b7
Reviewed-on: https://go-review.googlesource.com/c/go/+/659657
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Fixes the ComputePadding calculation to take into account the padding
added for the current offset. This fixes an issue where padding can be
added incorrectly for certain structs.
Related: https://github.com/go-delve/delve/issues/3923
Same as https://go-review.googlesource.com/c/go/+/656736, just without
the brittle test.
Fixes #72053
Change-Id: I67f157a42f5fc5d3a54d0e9be03488aa44752bcb
GitHub-Last-Rev: fabed69a31
GitHub-Pull-Request: golang/go#72997
Reviewed-on: https://go-review.googlesource.com/c/go/+/659698
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This reverts CL 656736.
Reason for revert: breaks many builders (all flavors of
linux-amd64 builders).
Change-Id: Ie7190d4078fada227391804c5cf10b9ce9cc9115
Reviewed-on: https://go-review.googlesource.com/c/go/+/659955
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Fixes the ComputePadding calculation to take into account
the padding added for the current offset. This fixes an issue
where padding can be added incorrectly for certain structs.
Related: https://github.com/go-delve/delve/issues/3923
Fixes #72053
Change-Id: I277629799168c6b44bc9ed03df4345e0318064ce
GitHub-Last-Rev: 9478b29a13
GitHub-Pull-Request: golang/go#72805
Reviewed-on: https://go-review.googlesource.com/c/go/+/656736
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
The use of predFn/succFn has not been needed since CL 22401.
Change-Id: Icc39190bb7b0e85541c75da2d564093d551751d3
Reviewed-on: https://go-review.googlesource.com/c/go/+/659555
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Currently, only amd64 has an intrinsic for math/bits.OnesCount, which
generates the same code as math/bits.OnesCount64. Replace this with
an alias that maps math/bits.OnesCount to math/bits.OnesCount64 on
64-bit platforms.
Change-Id: Ifa12a2173a201aacd52c3c22b9a948be6e314405
Reviewed-on: https://go-review.googlesource.com/c/go/+/659215
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>