On these platforms, we set up a frame pointer record below
the current stack pointer, so when we're in duffcopy or duffzero,
we get a reasonable traceback. See #73753.
But because this frame pointer record is below SP, it is vulnerable.
Anything that adds a new stack frame to the stack might clobber it.
This actually happens in #73748 on amd64. I have not yet come across
a repro on arm64, but we might as well be safe here.
The only real situation in which this could happen is when duffzero or
duffcopy is passed a nil pointer. So we can just avoid the problem by
doing the nil check outside duffzero/duffcopy. That way we never add a
frame
below duffzero/duffcopy. (Most other ways to get a new frame below the
current one, like async preempt or debugger-generated calls, don't
apply to duffzero/duffcopy because they are runtime functions; we're
not allowed to preempt there.)
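As a concrete illustration (hypothetical types, not taken from the
issue), the only shape that can fault inside duffzero/duffcopy is
zeroing or copying a large value through a possibly-nil pointer; the
nil check is now emitted before the duff call:

    type big struct{ data [512]byte }

    func zero(p *big) {
        *p = big{} // nil check emitted here first, then duffzero runs
    }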
Longer term, we should stop putting stuff below SP. #73753 will
include that as part of its remit. But that's not for 1.25, so we'll
do the simple thing for 1.25 for this issue.
Fixes #73748
Change-Id: I913c49ee46dcaee8fb439415a4531f7b59d0f612
Reviewed-on: https://go-review.googlesource.com/c/go/+/676916
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
When creating a new *ir.Name or *ir.LinksymOffsetExpr to represent
a composite literal stored in the read-only data section, we should
use the original type of the expression that was found via
ir.ReassignOracle.StaticValue. (This is needed because the StaticValue
method can traverse through OCONVNOP operations to find its final
result.)
Otherwise, the compilation may succeed, but the linker might erroneously
conclude that a type is not used and prune an itab when it should not,
leading to a call at execution-time to runtime.unreachableMethod, which
throws "fatal error: unreachable method called. linker bug?".
The tests exercise both the case of a zero-value struct literal that
can be represented by the read-only runtime.zeroVal, which was the
case in the simplified example from #73888, and a modified version of
that example that tests the non-zero-value struct literal case.
This CL makes two similar changes for those two cases. Either of the
tests we are adding can be made to fail independently by making only
the single corresponding change.
Fixes #73888
Updates #71359
Change-Id: Ifd91f445cc168ab895cc27f7964a6557d5cc32e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/676517
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
CL 585399 fixed an initialization loop during IR construction
involving alias types, by avoiding publishing alias declarations until
the RHS type expression has been constructed.
There's an assertion to ensure that the alias's type stays the same
during the initialization. However, that assertion is too strict,
since we may construct different instances of the same type if the
type is an instantiation of a generic type.
To fix this, use types.IdenticalStrict to ensure that these types
match exactly.
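A hedged sketch of the shape involved (hypothetical names): an alias
whose RHS is an instantiation of a generic type, where the compiler
may build two distinct instances of the same type:

    type Pair[T any] struct{ a, b T }

    // Two constructions of Pair[int] may be distinct *types.Type
    // values, so the assertion must use types.IdenticalStrict rather
    // than requiring the identical instance.
    type IntPair = Pair[int]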
Updates #66873.
Updates #73309.
Change-Id: I2559bed37e21615854333fb1057d7349406e6a1b
Reviewed-on: https://go-review.googlesource.com/c/go/+/668175
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
When transforming for loop variables, the compiler does roughly the
following steps:
(1) prebody = {z := z' for z in leaked}
...
(4) init' = (init : s/z/z' for z in leaked)
However, the definition of z is not updated to the `z := z'` statement,
causing ReassignOracle to incorrectly use the new init statement with
z' instead of z, triggering the ICE.
Fix this by updating z's definition to the correct/new statement
during the prebody initialization.
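A minimal sketch (hypothetical code) of the shape being transformed: z
leaks from the loop body via a closure, so the compiler builds the
prebody copy `z := z'` and substitutes z' into the init statement:

    var fns []func() int

    func collect() {
        for z := 0; z < 3; z++ {
            fns = append(fns, func() int { return z }) // z leaks here
        }
    }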
Fixes #73823
Change-Id: Ice2a6741be7478506c58f4000f591d5582029136
Reviewed-on: https://go-review.googlesource.com/c/go/+/675475
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
unique.Make always copies strings passed into it, so it's safe not to
copy byte slices converted to strings either. Handle this just like map
accesses with string(b) as keys.
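A hedged sketch of the pattern this CL optimizes:

    import "unique"

    func intern(b []byte) unique.Handle[string] {
        // The string(b) conversion can now avoid allocating a copy,
        // since unique.Make clones any string it retains.
        return unique.Make(string(b))
    }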
This CL only handles unique.Make(string(b)), not nested cases like
unique.Make([2]string{string(b1), string(b2)}); this could be done in a
followup CL, but the map lookup code in walk is sufficiently different
from the call handling code that I didn't attempt it. (SSA is much
easier.)
Fixes #71926
Change-Id: Ic2f82f2f91963d563b4ddb1282bd49fc40da8b85
Reviewed-on: https://go-review.googlesource.com/c/go/+/672135
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Currently, hash/maphash.Comparable escapes its parameter if it
contains non-string pointers, but does not escape strings or types
that contain strings but no other pointers. This is achieved by a
compiler intrinsic.
unique.Make does something similar: it stores its parameter to a
central map, with strings cloned. So from the escape analysis's
perspective, the non-string pointers are passed through, whereas
string pointers are not. We currently cannot model this type of
type-dependent data flow directly in Go. So we do this with a
compiler intrinsic. In fact, we can unify this and the intrinsic
above.
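A hedged sketch of the modeled behavior (illustrative only): a value
whose only pointers are strings need not escape into either function:

    import (
        "hash/maphash"
        "unique"
    )

    type key struct{ name string }

    func digest(seed maphash.Seed, k key) uint64 {
        return maphash.Comparable(seed, k) // k need not escape
    }

    func handle(k key) unique.Handle[key] {
        return unique.Make(k) // strings are cloned, so k need not escape
    }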
Tests are from Jake Bailey's CL 671955 (thanks!).
Fixes #73680.
Change-Id: Ia6a78e09dee39f8d9198a16758e4b5322ee2c56a
Reviewed-on: https://go-review.googlesource.com/c/go/+/675156
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Jake Bailey <jacob.b.bailey@gmail.com>
This is a small-ish adjustment to the change earlier in our
stack in CL 649555, which started creating read-only global storage
for a composite literal used in an interface conversion and setting
the interface data pointer to point to that global storage.
In some cases, there are execution-time performance benefits to
pointing to runtime.zeroVal in particular. In reflect, pointer checks
against the runtime.zeroVal memory address are used to side-step some
work, such as in reflect.Value.Set and reflect.Value.IsZero.
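A hedged sketch of a case that benefits (illustrative only):

    import "reflect"

    type S struct{ a, b [32]byte }

    func isZero() bool {
        var sink any = S{} // data pointer can now be &runtime.zeroVal
        return reflect.ValueOf(sink).IsZero() // pointer-check fast path
    }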
In this CL, we therefore dig up the zeroVal symbol, and we use the
machinery from earlier in our stack to use a pointer to zeroVal for
the interface data pointer if we see examples like:
    sink = S{}

or:

    s := S{}
    sink = s
CL 649076 (also earlier in our stack) added most of the tests
along with debug diagnostics in convert.go to make it easier
to test this change.
We add a benchmark in reflect to show examples of performance benefit.
The left column is our immediately prior CL 649555, and the right is
this CL. (The arrays of structs here do not seem to benefit, which
we attempt to address in our next CL.)
goos: linux
goarch: amd64
pkg: reflect
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ cl-649555 │ new │
│ sec/op │ sec/op vs base │
Zero/IsZero/ByteArray/size=16-4 4.176n ± 0% 4.171n ± 0% ~ (p=0.151 n=20)
Zero/IsZero/ByteArray/size=64-4 6.921n ± 0% 3.864n ± 0% -44.16% (p=0.000 n=20)
Zero/IsZero/ByteArray/size=1024-4 21.210n ± 0% 3.878n ± 0% -81.72% (p=0.000 n=20)
Zero/IsZero/BigStruct/size=1024-4 25.505n ± 0% 5.061n ± 0% -80.15% (p=0.000 n=20)
Zero/IsZero/SmallStruct/size=16-4 4.188n ± 0% 4.191n ± 0% ~ (p=0.106 n=20)
Zero/IsZero/SmallStructArray/size=64-4 8.639n ± 0% 8.636n ± 0% ~ (p=0.973 n=20)
Zero/IsZero/SmallStructArray/size=1024-4 79.99n ± 0% 80.06n ± 0% ~ (p=0.213 n=20)
Zero/IsZero/Time/size=24-4 7.232n ± 0% 3.865n ± 0% -46.56% (p=0.000 n=20)
Zero/SetZero/ByteArray/size=16-4 13.47n ± 0% 13.09n ± 0% -2.78% (p=0.000 n=20)
Zero/SetZero/ByteArray/size=64-4 14.14n ± 0% 13.70n ± 0% -3.15% (p=0.000 n=20)
Zero/SetZero/ByteArray/size=1024-4 24.22n ± 0% 20.18n ± 0% -16.68% (p=0.000 n=20)
Zero/SetZero/BigStruct/size=1024-4 24.24n ± 0% 20.18n ± 0% -16.73% (p=0.000 n=20)
Zero/SetZero/SmallStruct/size=16-4 13.45n ± 0% 13.10n ± 0% -2.60% (p=0.000 n=20)
Zero/SetZero/SmallStructArray/size=64-4 14.12n ± 0% 13.69n ± 0% -3.05% (p=0.000 n=20)
Zero/SetZero/SmallStructArray/size=1024-4 24.62n ± 0% 21.61n ± 0% -12.26% (p=0.000 n=20)
Zero/SetZero/Time/size=24-4 13.59n ± 0% 13.40n ± 0% -1.40% (p=0.000 n=20)
geomean 14.06n 10.19n -27.54%
Finally, here are results from the benchmark example from #71323.
Note however that almost all the benefit shown here is from our earlier
CL 649555, which is a more general purpose change and eliminates
the allocation using a different read-only global than this CL.
│ go1.24 │ new │
│ sec/op │ sec/op vs base │
InterfaceAny 112.6000n ± 5% 0.8078n ± 3% -99.28% (p=0.000 n=20)
ReflectValue 11.63n ± 2% 11.59n ± 0% ~ (p=0.330 n=20)
│ go1.24.out │ new.out │
│ B/op │ B/op vs base │
InterfaceAny 224.0 ± 0% 0.0 ± 0% -100.00% (p=0.000 n=20)
ReflectValue 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=20) ¹
│ go1.24.out │ new.out │
│ allocs/op │ allocs/op vs base │
InterfaceAny 1.000 ± 0% 0.000 ± 0% -100.00% (p=0.000 n=20)
ReflectValue 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=20) ¹
Updates #71359
Updates #71323
Change-Id: I64d8cf1a7900f011d2ec59b948388aeda1150676
Reviewed-on: https://go-review.googlesource.com/c/go/+/649078
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
Today, this interface conversion causes the struct literal
to be heap allocated:
    var sink any

    func example1() {
        sink = S{1, 1}
    }
For basic literals like integers that are directly used in
an interface conversion that would otherwise allocate, the compiler
is able to use read-only global storage (see #18704).
This CL extends that to struct and array literals as well by creating
read-only global storage that can represent, for example, S{1, 1},
and then using a pointer to that storage in the interface
when the interface conversion happens.
A more challenging example is:
    func example2() {
        v := S{1, 1}
        sink = v
    }
In this case, the struct literal is not directly part of the
interface conversion, but is instead assigned to a local variable.
To still avoid heap allocation in cases like this, in walk we
construct a cache that maps from expressions used in interface
conversions to earlier expressions that can be used to represent the
same value (via ir.ReassignOracle.StaticValue). This is somewhat
analogous to how we avoided heap allocation for basic literals in
CL 649077 earlier in our stack, though here we also need to do a
little more work to create the read-only global.
CL 649076 (also earlier in our stack) added most of the tests
along with debug diagnostics in convert.go to make it easier
to test this change.
See the writeup in #71359 for details.
Fixes #71359
Fixes #71323
Updates #62653
Updates #53465
Updates #8618
Change-Id: I8924f0c69ff738ea33439bd6af7b4066af493b90
Reviewed-on: https://go-review.googlesource.com/c/go/+/649555
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Currently, the integer value in the following interface conversion gets
heap allocated:
    v := 1000
    fmt.Println(v)
In contrast, this conversion does not currently cause the integer value
to be heap allocated:
    fmt.Println(1000)
The second example is able to avoid heap allocation because of an
optimization in walk (by Josh in #18704 and related issues) that
recognizes a literal is being used. In the first example, that
optimization is currently thwarted by the literal getting assigned
to a local variable prior to use in the interface conversion.
This CL propagates constants to interface conversions like
in the first example to avoid heap allocations, instead using
a read-only global. The net effect is roughly turning the first example
into the second.
One place this comes up in practice currently is with logging or
debug prints. For example, if we have something like:
    func conditionalDebugf(format string, args ...interface{}) {
        if debugEnabled {
            fmt.Fprintf(io.Discard, format, args...)
        }
    }
Prior to this CL, this integer is heap allocated, even when the
debugEnabled flag is false, and even when the compiler
inlines conditionalDebugf:
    v := 1000
    conditionalDebugf("hello %d", v)
With this CL, the integer here is no longer heap allocated, even when
the debugEnabled flag is enabled, because the compiler can now see that
it can use a read-only global.
See the writeup in #71359 for more details.
CL 649076 (earlier in our stack) added most of the tests
along with debug diagnostics in convert.go to make it easier
to test this change.
Updates #71359
Updates #62653
Updates #53465
Updates #8618
Change-Id: I19a51e74b36576ebb0b9cf599267cbd2bd847ce4
Reviewed-on: https://go-review.googlesource.com/c/go/+/649079
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
In the loong64 instruction set, there is no NORI instruction,
so the immediate value in NORconst needs to be stored in a register,
and then the three-register NOR instruction is used.
Change-Id: I5ef697450619317218cb3ef47fc07e238bdc2139
Reviewed-on: https://go-review.googlesource.com/c/go/+/673836
Reviewed-by: abner chenc <chenguoqi@loongson.cn>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This CL implements the TODO in combineStores to allow combining
stores of different sizes, as long as the total size is 2, 4, or 8
bytes.
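A hedged sketch of differently sized stores that can now combine
(hypothetical layout): a 2-byte store plus two adjacent 1-byte stores
totals 4 bytes, so they can merge into one 32-bit store:

    type hdr struct {
        a uint16
        b uint8
        c uint8
    }

    func set(h *hdr) {
        h.a = 1
        h.b = 2
        h.c = 3 // 2+1+1 bytes may combine into a single store
    }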
Fixes #72832.
Change-Id: I6d1d471335da90d851ad8f3b5a0cf10bdcfa17c4
Reviewed-on: https://go-review.googlesource.com/c/go/+/661855
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This adds additional logging for the work that walk does to reduce
how often an interface conversion results in an allocation.
Also, as part of #71359, we will be updating how escape analysis and
walk handle basic literals, composite literals, and zero values,
so add some tests that use this new logging.
By the end of our CL stack, we address all of these tests.
Updates #71359
Change-Id: I43fde8343d9aacaec1e05360417908014a86c8bd
Reviewed-on: https://go-review.googlesource.com/c/go/+/649076
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
When appending, if the backing store doesn't escape and a
constant-sized backing store is big enough, use a constant-sized
stack-allocated backing store instead of allocating it from the heap.
cmd/go is <0.1% bigger.
As an example of how this helps, if you edit strings/strings.go:FieldsFunc
to replace
    spans := make([]span, 0, 32)
with
    var spans []span
then this CL removes the first 2 allocations that are part of the growth sequence:
│ base │ exp │
│ allocs/op │ allocs/op vs base │
FieldsFunc/ASCII/16-24 3.000 ± ∞ ¹ 2.000 ± ∞ ¹ -33.33% (p=0.008 n=5)
FieldsFunc/ASCII/256-24 7.000 ± ∞ ¹ 5.000 ± ∞ ¹ -28.57% (p=0.008 n=5)
FieldsFunc/ASCII/4096-24 11.000 ± ∞ ¹ 9.000 ± ∞ ¹ -18.18% (p=0.008 n=5)
FieldsFunc/ASCII/65536-24 18.00 ± ∞ ¹ 16.00 ± ∞ ¹ -11.11% (p=0.008 n=5)
FieldsFunc/ASCII/1048576-24 30.00 ± ∞ ¹ 28.00 ± ∞ ¹ -6.67% (p=0.008 n=5)
FieldsFunc/Mixed/16-24 2.000 ± ∞ ¹ 2.000 ± ∞ ¹ ~ (p=1.000 n=5)
FieldsFunc/Mixed/256-24 7.000 ± ∞ ¹ 5.000 ± ∞ ¹ -28.57% (p=0.008 n=5)
FieldsFunc/Mixed/4096-24 11.000 ± ∞ ¹ 9.000 ± ∞ ¹ -18.18% (p=0.008 n=5)
FieldsFunc/Mixed/65536-24 18.00 ± ∞ ¹ 16.00 ± ∞ ¹ -11.11% (p=0.008 n=5)
FieldsFunc/Mixed/1048576-24 30.00 ± ∞ ¹ 28.00 ± ∞ ¹ -6.67% (p=0.008 n=5)
(Of course, people have spotted and fixed a bunch of allocation sites
like this, but now we're ~automatically doing it everywhere going forward.)
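A hedged sketch of the pattern that now avoids the first heap
allocations (hypothetical code): an append chain whose slice does not
escape can start from a constant-sized stack buffer:

    func countEvens(xs []int) int {
        var evens []int // early appends can use a stack backing store
        for _, x := range xs {
            if x%2 == 0 {
                evens = append(evens, x)
            }
        }
        return len(evens) // evens does not escape
    }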
No significant increases in frame sizes in cmd/go.
Change-Id: I301c4d9676667eacdae0058960321041d173751a
Reviewed-on: https://go-review.googlesource.com/c/go/+/664299
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
-N+1 <= x % N <= N-1
This is useful for cases like:
    func setBit(b []byte, i int) {
        b[i/8] |= 1 << (i % 8)
    }
The shift does not need protection against larger-than-7 cases.
(It does still need protection against <0 cases.)
Change-Id: Idf83101386af538548bfeb6e2928cea855610ce2
Reviewed-on: https://go-review.googlesource.com/c/go/+/672995
Reviewed-by: Jorropo <jorropo.pgm@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
    x += *p
We want to do this with a single load+add operation on amd64.
The tricky part is that we don't want to combine if there are
other uses of x after this instruction.
Implement a simple detector that seems to capture a common situation:
x += *p is in a loop, and the other use of x is after loop exit.
In that case, it does not hurt to do the load+add combo.
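A hedged sketch of the common shape the detector targets (hypothetical
code):

    func sum(xs []int64) int64 {
        var x int64
        for i := range xs {
            x += xs[i] // load+add can combine into one instruction
        }
        return x // the other use of x is after loop exit
    }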
Change-Id: I466174cce212e78bde83f908cc1f2752b560c49c
Reviewed-on: https://go-review.googlesource.com/c/go/+/672957
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
    for ..; ..; i++ {
        ...
    }
We want to schedule the i++ late in the block, so that all other
uses of i in the block are scheduled first. That way, i++ can
happen in place in a register instead of requiring a temporary register.
Change-Id: Id777407c7e67a5ddbd8e58251099b0488138c0df
Reviewed-on: https://go-review.googlesource.com/c/go/+/672998
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Follow-up to #54959 with another failing case.
The linker needs FuncInfo metadata for all inlined functions. CL 436240 explicitly creates LSym for direct closure calls to ensure we keep the FuncInfo metadata.
However, CL 436240 won't work if the direct closure call is wrapped by a no-effect type conversion, even if that closure could be inlined.
This commit fixes that case.
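A hedged sketch of the previously failing shape (hypothetical names):

    type thunk func() int

    func g() int {
        // Direct closure call wrapped in a no-effect type conversion;
        // the closure is still inlinable, so its FuncInfo must be kept.
        return thunk(func() int { return 1 })()
    }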
Fixes #73716
Change-Id: Icda6024da54c8d933f87300e691334c080344695
GitHub-Last-Rev: e9aed02eb6
GitHub-Pull-Request: golang/go#73718
Reviewed-on: https://go-review.googlesource.com/c/go/+/672855
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Windows prefers 64-bit binaries to be loaded at an address above 4GB.
Having a preferred base address below this boundary triggers a
compatibility mode in Address Space Layout Randomization (ASLR) on
recent versions of Windows that reduces the number of locations to which
ASLR may relocate the binary.
The Go internal linker was using a smaller base address due to an issue
with how dynamic cgo symbols were relocated, which has been fixed in
this CL.
Fixes #73561.
Cq-Include-Trybots: luci.golang.try:gotip-windows-amd64-longtest
Change-Id: Ia8cb35d57d921d9be706a8975fa085af7996f124
Reviewed-on: https://go-review.googlesource.com/c/go/+/671515
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
This change splits the finalizer and cleanup queues and implements a new
lock-free blocking queue for cleanups. The basic design is as follows:
The cleanup queue is organized in fixed-sized blocks. Individual cleanup
functions are queued, but only whole blocks are dequeued.
Enqueuing cleanups places them in P-local cleanup blocks. These are
flushed to the full list as they get full. Cleanups can only be enqueued
by an active sweeper.
Dequeuing cleanups always dequeues entire blocks from the full list.
Cleanup blocks can be dequeued and executed at any time.
The very last active sweeper in the sweep phase is responsible for
flushing all local cleanup blocks to the full list. It can do this
without any synchronization because the next GC can't start yet, so we
can be very certain that nobody else will be accessing the local blocks.
Cleanup blocks are stored off-heap because they need to be allocated by
the sweeper, which is called from heap allocation paths. As a result,
the GC treats cleanup blocks as roots, just like finalizer blocks.
Flushes to the full list signal to the scheduler that cleanup goroutines
should be awoken. Every time the scheduler goes to wake up a cleanup
goroutine and there are more signals than goroutines to wake, it
forwards this signal to runtime.AddCleanup, so that it creates another
goroutine the next time it is called, up to gomaxprocs goroutines.
The signals here are a little convoluted, but exist because the sweeper
and the scheduler cannot safely create new goroutines.
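A hedged usage sketch (illustrative only) of the API this serves:

    import "runtime"

    func watch(p *[16]byte) {
        // Each AddCleanup call enqueues onto the new lock-free queue;
        // the cleanup runs on a cleanup goroutine once p is unreachable.
        runtime.AddCleanup(p, func(tag int) { _ = tag }, 1)
    }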
For #71772.
For #71825.
Change-Id: Ie839fde2b67e1b79ac1426be0ea29a8d923a62cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/650697
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Use an automatic algorithm to generate strength reduction code.
You give it all the linear-combination (a*x+b*y) instructions in your
architecture, and it figures out the rest.
Just amd64 and arm64 for now.
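A hedged illustration of the kind of lowering this produces:

    func mul9(x int64) int64 {
        return x * 9 // x + 8*x: a single shift-add (e.g. LEAQ) on amd64
    }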
Fixes #67575
Change-Id: I35c69382bebb1d2abf4bb4e7c43fd8548c6c59a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/626998
Reviewed-by: Jakub Ciolek <jakub@ciolek.dev>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
For riscv64/rva22u64 and above, we can intrinsify math/bits.OnesCount
using the CPOP/CPOPW machine instructions. Since the native Go
implementation of OnesCount is relatively expensive, it is also
worth emitting a check for Zbb support when compiled for rva20u64.
On a Banana Pi F3, with GORISCV64=rva22u64:
│ oc.1 │ oc.2 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.930n ± 0% 4.389n ± 0% -74.08% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.016n ± 0% -11.10% (p=0.000 n=10)
OnesCount16-8 9.404n ± 0% 5.015n ± 0% -46.67% (p=0.000 n=10)
OnesCount32-8 13.165n ± 0% 4.388n ± 0% -66.67% (p=0.000 n=10)
OnesCount64-8 16.300n ± 0% 4.388n ± 0% -73.08% (p=0.000 n=10)
geomean 11.40n 4.629n -59.40%
On a Banana Pi F3, compiled with GORISCV64=rva20u64 and with Zbb
detection enabled:
│ oc.3 │ oc.4 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.930n ± 0% 5.643n ± 0% -66.67% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.642n ± 0% ~ (p=0.447 n=10)
OnesCount16-8 10.030n ± 0% 6.896n ± 0% -31.25% (p=0.000 n=10)
OnesCount32-8 13.170n ± 0% 5.642n ± 0% -57.16% (p=0.000 n=10)
OnesCount64-8 16.300n ± 0% 5.642n ± 0% -65.39% (p=0.000 n=10)
geomean 11.55n 5.873n -49.16%
On a Banana Pi F3, compiled with GORISCV64=rva20u64 but with Zbb
detection disabled:
│ oc.3 │ oc.5 │
│ sec/op │ sec/op vs base │
OnesCount-8 16.93n ± 0% 29.47n ± 0% +74.07% (p=0.000 n=10)
OnesCount8-8 5.642n ± 0% 5.643n ± 0% ~ (p=0.191 n=10)
OnesCount16-8 10.03n ± 0% 15.05n ± 0% +50.05% (p=0.000 n=10)
OnesCount32-8 13.17n ± 0% 18.18n ± 0% +38.04% (p=0.000 n=10)
OnesCount64-8 16.30n ± 0% 21.94n ± 0% +34.60% (p=0.000 n=10)
geomean 11.55n 15.84n +37.16%
For hardware without Zbb, this adds ~5ns overhead, while for hardware
with Zbb we achieve a performance gain of up to 11ns. It is worth
noting that OnesCount8 is cheap enough that it is preferable to stick
with the generic version in this case.
Change-Id: Id657e40e0dd1b1ab8cc0fe0f8a68df4c9f2d7da5
Reviewed-on: https://go-review.googlesource.com/c/go/+/660856
Reviewed-by: Carlos Amedee <carlos@golang.org>
Reviewed-by: Meng Zhuo <mengzhuo1203@gmail.com>
Reviewed-by: Mark Ryan <markdryan@rivosinc.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
That's where the unified IR writer expects it.
Fixes #73476
Change-Id: Ic22bd8dee5be5991e6d126ae3f6eccb2acdc0b19
Reviewed-on: https://go-review.googlesource.com/c/go/+/667415
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
When replacing a loop where the iteration variable has a named type,
we need to compute the last iteration value as i = T(len(a)-1), not
just i = len(a)-1.
Fixes #73491
Change-Id: Ic1cc3bdf8571a40c10060f929a9db8a888de2b70
Reviewed-on: https://go-review.googlesource.com/c/go/+/667815
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
We only want to call into the race detector for Go global variables.
By rounding up the region bounds, we can include some C globals.
Even worse, we can include only *part* of a C global, leading to
race{read,write}range calls which straddle the end of shadow memory.
That causes the race detector to barf.
Fix some off-by-one errors in the assembly comparisons. We want to
skip calling the race detector when addr == racedataend.
Fixes #73483
Change-Id: I436b0f588d6165b61f30cb7653016ba9b7cbf585
Reviewed-on: https://go-review.googlesource.com/c/go/+/667655
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The full 64x64->128 multiply comes up when using bits.Mul64.
The 64x64->64+overflow multiply comes up in unsafe.Slice when using
a constant length.
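A hedged sketch of the two shapes mentioned above:

    import (
        "math/bits"
        "unsafe"
    )

    func full(x, y uint64) (hi, lo uint64) {
        return bits.Mul64(x, y) // full 64x64->128 multiply
    }

    func view(p *byte) []byte {
        return unsafe.Slice(p, 16) // 64x64->64+overflow, constant length
    }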
Change-Id: I298515162ca07d804b2d699d03bc957ca30a4ebc
Reviewed-on: https://go-review.googlesource.com/c/go/+/667175
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Update the use of atomic.Uint32 to atomic.Bool in the sync package.
Change-Id: Ib8da66fea86ef06e1427ac5118016b96fbcda6b1
GitHub-Last-Rev: d36e0f431f
GitHub-Pull-Request: golang/go#73447
Reviewed-on: https://go-review.googlesource.com/c/go/+/666895
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Jorropo <jorropo.pgm@gmail.com>
For any len() call that requires the evaluation of its argument (according to the spec).
Updates #72844
Change-Id: Id2b0bcc78073a6d5051abd000131dafdf65e7f26
Reviewed-on: https://go-review.googlesource.com/c/go/+/658097
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
If the thing we're ranging over is an array or ptr to array, and
it doesn't have a function call or channel receive in it, then we
shouldn't evaluate it.
Typecheck the ranged-over value as a constant in that case.
That makes the unified exporter replace the range expression
with a constant int.
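A hedged sketch (hypothetical code): the range expression below has
array type and contains no calls or channel receives, so it need not
be evaluated at all:

    type S struct{ a [10]int }

    func count(s *S) (n int) {
        for range s.a { // rewritten to range over the constant 10
            n++
        }
        return
    }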
Change-Id: I0d4ea081de70d20cf6d1fa8d25ef6cb021975554
Reviewed-on: https://go-review.googlesource.com/c/go/+/659317
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Robert Griesemer <gri@google.com>
It's really only needed for stores and store-like instructions
(atomic exchange, compare-and-swap, ...).
Fixes #73180
Change-Id: I8ecd833a301355adf0fa4bff43250091640c6226
Reviewed-on: https://go-review.googlesource.com/c/go/+/663155
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
For variable-sized allocations.
Turns out that we already implement the correct escape semantics
for this case. Even when the result of the "make" does not escape,
everything assigned into it does.
Change-Id: Ia123c538d39f2f1e1581c24e4135a65af3821c5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/657937
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Robert Griesemer <gri@google.com>
Instead of always allocating variable-sized "make" calls on the heap,
allocate a small, constant-sized array on the stack and use that array
as the backing store if it is big enough.
Requires the result of the "make" doesn't escape.
    if cap <= K {
        var arr [K]E
        slice = arr[:len:cap]
    } else {
        slice = makeslice(E, len, cap)
    }
Pretty conservative for now: K = 32/sizeof(E). The slice header is
already 24 bytes, so wasting 32 bytes of stack if the requested size
is too big isn't that bad. Larger would waste more stack space but
maybe avoid more allocations.
This CL also requires the element type be pointer-free. Maybe we
could relax that at some point, but it is hard. If the element type
has pointers we can get heap->stack pointers (in the case where the
requested size is too big and the slice is heap allocated).
Note that this only handles the case of makeslice called directly from
compiler-generated code. It does not handle slices built in the
runtime on behalf of the program (e.g. in growslice). Some of those
are currently handled by passing in a tmpBuf (e.g. concatstrings),
but we could probably do more.
Change-Id: I8378efad527cd00d25948a80b82a68d88fbd93a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/653856
Reviewed-by: Robert Griesemer <gri@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Improve the compiler's store-to-load forwarding optimization by relaxing the
type comparison condition. Instead of requiring exact type equality (CMPeq),
we now use copyCompatibleType, which allows forwarding between compatible
types where safe.
Fix several size comparison bugs in the nested store patterns. Previously,
we were comparing the size of the outer store with the load type,
rather than comparing with the size of the actual store being forwarded
from.
Skip OpConvert in dead store elimination to help get rid of dead stores such
as zeroing slices. OpConvert, like OpInlMark, doesn't really use the memory.
This optimization is particularly beneficial for code that creates slices with
computed pointers, such as the runtime's heapBitsSlice function, where
intermediate calculations were previously causing the compiler to miss
store-to-load forwarding opportunities.
Local sweet run result on an x86_64 laptop:
│ Orig.res │ Hopt.res │
│ sec/op │ sec/op vs base │
BiogoIgor-8 5.303 ± 1% 5.322 ± 1% ~ (p=0.190 n=10)
BiogoKrishna-8 7.894 ± 1% 7.828 ± 2% ~ (p=0.190 n=10)
BleveIndexBatch100-8 2.257 ± 1% 2.248 ± 2% ~ (p=0.529 n=10)
EtcdPut-8 30.12m ± 1% 30.03m ± 1% ~ (p=0.796 n=10)
EtcdSTM-8 127.1m ± 1% 126.2m ± 0% -0.74% (p=0.023 n=10)
GoBuildKubelet-8 52.21 ± 0% 52.05 ± 1% ~ (p=0.063 n=10)
GoBuildKubeletLink-8 4.342 ± 1% 4.305 ± 0% -0.85% (p=0.000 n=10)
GoBuildIstioctl-8 43.33 ± 0% 43.24 ± 0% -0.22% (p=0.015 n=10)
GoBuildIstioctlLink-8 4.604 ± 1% 4.598 ± 0% ~ (p=0.063 n=10)
GoBuildFrontend-8 15.33 ± 0% 15.29 ± 0% ~ (p=0.143 n=10)
GoBuildFrontendLink-8 740.0m ± 1% 737.7m ± 1% ~ (p=0.912 n=10)
GopherLuaKNucleotide-8 9.590 ± 1% 9.656 ± 1% ~ (p=0.165 n=10)
MarkdownRenderXHTML-8 96.97m ± 1% 97.26m ± 2% ~ (p=0.105 n=10)
Tile38QueryLoad-8 335.9µ ± 1% 335.6µ ± 1% ~ (p=0.481 n=10)
geomean 1.336 1.333 -0.22%
Change-Id: I031552623e6d5a3b1b5be8325e6314706e45534f
Reviewed-on: https://go-review.googlesource.com/c/go/+/662075
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Carlos Amedee <carlos@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Optimise more branches with zero on riscv64. In particular, BLTU with
zero occurs with IsInBounds checks for index zero. This currently results
in two instructions and requires an additional register:

    li t2, 0
    bltu t2, t1, 0x174b4

This is equivalent to checking that the bound is not equal to zero.
With this change:

    bnez t1, 0x174c0
This removes more than 500 instructions from the Go binary on riscv64.
Change-Id: I6cd861d853e3ef270bd46dacecdfaa205b1c4644
Reviewed-on: https://go-review.googlesource.com/c/go/+/606715
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Meng Zhuo <mengzhuo1203@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
All other files here use the codegen package.
Change-Id: I714162941b9fa9051dacc29643e905fe60b9304b
Reviewed-on: https://go-review.googlesource.com/c/go/+/661135
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
This adds tests for type conversion and shifts, detailing various
cases of poor code generation that currently exist for riscv64. These
will be addressed in future CLs.
Change-Id: Ie1d366dfe878832df691600f8500ef383da92848
Reviewed-on: https://go-review.googlesource.com/c/go/+/615678
Reviewed-by: Meng Zhuo <mengzhuo1203@gmail.com>
Reviewed-by: Mark Ryan <markdryan@rivosinc.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Currently, if the user stops the timer in a B.Loop benchmark loop, the
benchmark will run until it hits the timeout and fails.
Fix this by detecting that the timer is stopped and failing the
benchmark right away. We avoid making the fast path more expensive for
this check by "poisoning" the B.Loop iteration counter when the timer
is stopped so that it falls back to the slow path, which can check the
timer.
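A hedged sketch of the misuse this detects (hypothetical benchmark):

    import "testing"

    func BenchmarkMisuse(b *testing.B) {
        for b.Loop() {
            b.StopTimer() // poisons the fast path; benchmark now fails fast
        }
    }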
This causes b to escape from B.Loop, which is totally harmless because
it was already definitely heap-allocated. But it causes the
test/inline_testingbloop.go errorcheck test to fail. I don't think the
escape messages actually mattered to that test; they just had to be
matched. To fix this, we drop the debug level to -m=1, since -m=2
prints a lot of extra information for escaping parameters that we
don't want to deal with, and change one error check to allow b to
escape.
Fixes #72971.
Change-Id: I7d4abbb1ec1e096685514536f91ba0d581cca6b7
Reviewed-on: https://go-review.googlesource.com/c/go/+/659657
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>