Now the registration phase looks like:
var cases [4]runtime.scase
var order [8]uint16
selectsend(&cases[0], c1, &v1)
selectrecv(&cases[1], c2, &v2, nil)
selectrecv(&cases[2], c3, &v3, &ok)
selectdefault(&cases[3])
chosen := selectgo(&cases[0], &order[0], 4)
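For reference, a select statement of roughly this shape lowers to the
registration calls above (a hedged sketch; the channel and variable names
are illustrative, not taken from real code):

func example(c1, c2, c3 chan int) {
	var v1 = 1
	var v2, v3 int
	var ok bool
	select {
	case c1 <- v1: // selectsend(&cases[0], c1, &v1)
	case v2 = <-c2: // selectrecv(&cases[1], c2, &v2, nil)
	case v3, ok = <-c3: // selectrecv(&cases[2], c3, &v3, &ok)
	default: // selectdefault(&cases[3])
	}
	_, _, _, _ = v1, v2, v3, ok
}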
Primarily, this is just preparation for having the compiler open-code
selectsend, selectrecv, and selectdefault.
As a minor benefit, order can now be laid out separately on the stack
in the pointer-free segment, so it won't take up space in the
function's stack pointer maps.
Change-Id: I5552ba594201efd31fcb40084da20b42ea569a45
Reviewed-on: https://go-review.googlesource.com/37933
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Some of the comments' relative paths do not exist, and
reflect does not define its own hmap structure.
Correct the paths and consistently reference paths starting from the
Go src directory.
Change-Id: I5204a3a98f77d65f17dcde98b847378cea05ad8a
Reviewed-on: https://go-review.googlesource.com/94758
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Change the help doc of
go tool compile -d=ssa/help
from this:
compile: GcFlag -d=ssa/<phase>/<flag>[=<value>|<function_name>]
<phase> is one of:
check, all, build, intrinsics, early_phielim, early_copyelim
early_deadcode, short_circuit, decompose_user, opt, zero_arg_cse
opt_deadcode, generic_cse, phiopt, nilcheckelim, prove, loopbce
decompose_builtin, softfloat, late_opt, generic_deadcode, check_bce
fuse, dse, writebarrier, insert_resched_checks, tighten, lower
lowered_cse, elim_unread_autos, lowered_deadcode, checkLower
late_phielim, late_copyelim, phi_tighten, late_deadcode, critical
likelyadjust, layout, schedule, late_nilcheck, flagalloc, regalloc
loop_rotate, stackframe, trim
<flag> is one of on, off, debug, mem, time, test, stats, dump
<value> defaults to 1
<function_name> is required for "dump", specifies name of function to dump after <phase>
Except for dump, output is directed to standard out; dump appears in a file.
Phase "all" supports flags "time", "mem", and "dump".
Phases "intrinsics" supports flags "on", "off", and "debug".
Interpretation of the "debug" value depends on the phase.
Dump files are named <phase>__<function_name>_<seq>.dump.
To this:
compile: PhaseOptions usage:
go tool compile -d=ssa/<phase>/<flag>[=<value>|<function_name>]
where:
- <phase> is one of:
check, all, build, intrinsics, early_phielim, early_copyelim
early_deadcode, short_circuit, decompose_user, opt, zero_arg_cse
opt_deadcode, generic_cse, phiopt, nilcheckelim, prove
decompose_builtin, softfloat, late_opt, generic_deadcode, check_bce
branchelim, fuse, dse, writebarrier, insert_resched_checks, lower
lowered_cse, elim_unread_autos, lowered_deadcode, checkLower
late_phielim, late_copyelim, tighten, phi_tighten, late_deadcode
critical, likelyadjust, layout, schedule, late_nilcheck, flagalloc
regalloc, loop_rotate, stackframe, trim
- <flag> is one of:
on, off, debug, mem, time, test, stats, dump
- <value> defaults to 1
- <function_name> is required for the "dump" flag, and specifies the
name of function to dump after <phase>
Phase "all" supports flags "time", "mem", and "dump".
Phase "intrinsics" supports flags "on", "off", and "debug".
If the "dump" flag is specified, the output is written on a file named
<phase>__<function_name>_<seq>.dump; otherwise it is directed to stdout.
Also add a few examples at the bottom.
Fixes #20349
Change-Id: I334799e951e7b27855b3ace5d2d966c4d6ec4cff
Reviewed-on: https://go-review.googlesource.com/110062
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The prove pass sometimes has bounds information
that later rewrite passes do not.
Use this information to mark shifts as bounded,
and then use that information to generate better code on amd64.
It may prove to be helpful on other architectures, too.
While here, coalesce the existing shift lowering rules.
This triggers 35 times building std+cmd. The full list is below.
Here's an example from runtime.heapBitsSetType:
if nb < 8 {
	b |= uintptr(*p) << nb
	p = add1(p)
} else {
	nb -= 8
}
We now generate better code on amd64 for that left shift.
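For illustration, here is a minimal standalone case of the same idea
(a sketch; whether prove catches this exact shape is an assumption, but
the guard makes the shift amount provably smaller than the operand width):

func mask(x uint64, n uint) uint64 {
	if n < 64 {
		return x << n // shift amount proved < 64, so no extra bounding code is needed on amd64
	}
	return 0
}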
Updates #25087
vendor/golang_org/x/crypto/curve25519/mont25519_amd64.go:48:20: Proved Rsh8Ux64 bounded
runtime/mbitmap.go:1252:22: Proved Lsh64x64 bounded
runtime/mbitmap.go:1265:16: Proved Lsh64x64 bounded
runtime/mbitmap.go:1275:28: Proved Lsh64x64 bounded
runtime/mbitmap.go:1645:25: Proved Lsh64x64 bounded
runtime/mbitmap.go:1663:25: Proved Lsh64x64 bounded
runtime/mbitmap.go:1808:41: Proved Lsh64x64 bounded
runtime/mbitmap.go:1831:49: Proved Lsh64x64 bounded
syscall/route_bsd.go:227:23: Proved Lsh32x64 bounded
syscall/route_bsd.go:295:23: Proved Lsh32x64 bounded
syscall/route_darwin.go:40:23: Proved Lsh32x64 bounded
compress/bzip2/bzip2.go:384:26: Proved Lsh64x16 bounded
vendor/golang_org/x/net/route/address.go:370:14: Proved Lsh64x64 bounded
compress/flate/inflate.go:201:54: Proved Lsh64x64 bounded
math/big/prime.go:50:25: Proved Lsh64x64 bounded
vendor/golang_org/x/crypto/cryptobyte/asn1.go:464:43: Proved Lsh8x8 bounded
net/ip.go:87:21: Proved Rsh8Ux64 bounded
cmd/internal/goobj/read.go:267:23: Proved Lsh64x64 bounded
cmd/vendor/golang.org/x/arch/arm64/arm64asm/decode.go:534:27: Proved Lsh32x32 bounded
cmd/vendor/golang.org/x/arch/arm64/arm64asm/decode.go:544:27: Proved Lsh32x32 bounded
cmd/internal/obj/arm/asm5.go:1044:16: Proved Lsh32x64 bounded
cmd/internal/obj/arm/asm5.go:1065:10: Proved Lsh32x32 bounded
cmd/internal/obj/mips/obj0.go:1311:21: Proved Lsh32x64 bounded
cmd/compile/internal/syntax/scanner.go:352:23: Proved Lsh64x64 bounded
go/types/expr.go:222:36: Proved Lsh64x64 bounded
crypto/x509/x509.go:1626:9: Proved Rsh8Ux64 bounded
cmd/link/internal/loadelf/ldelf.go:823:22: Proved Lsh8x64 bounded
net/http/h2_bundle.go:1470:17: Proved Lsh8x8 bounded
net/http/h2_bundle.go:1477:46: Proved Lsh8x8 bounded
net/http/h2_bundle.go:1481:31: Proved Lsh64x8 bounded
cmd/compile/internal/ssa/rewriteARM64.go:18759:17: Proved Lsh64x64 bounded
cmd/compile/internal/ssa/sparsemap.go:70:23: Proved Lsh32x64 bounded
cmd/compile/internal/ssa/sparsemap.go:73:45: Proved Lsh32x64 bounded
Change-Id: I58bb72f3e6f12f6ac69be633ea7222c245438142
Reviewed-on: https://go-review.googlesource.com/109776
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
When a loop has bound len(s)-delta, findIndVar detected it and
returned len(s) as (conservative) upper bound. This little lie
allowed loopbce to drop bound checks.
It is obviously more generic to teach prove about relations like
x+d<w for non-constant "w"; we already handled the case for
constant "w", so we just want to learn that if d<0, then x+d<w
proves that x<w.
To be able to remove the code from findIndVar, we also need
to teach prove that len() and cap() are always non-negative.
This CL allows proving 633 more checks in cmd+std. Most
of them are cases where the code already tested a length before
accessing a slice, but the compiler didn't know it. For instance,
take strings.HasSuffix:
func HasSuffix(s, suffix string) bool {
	return len(s) >= len(suffix) && s[len(s)-len(suffix):] == suffix
}
When suffix is a literal string, the compiler now understands
that the explicit check is enough to not emit a slice check.
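Another instance of the same pattern (a hedged example; whether this exact
snippet is among the 633 newly proved checks is an assumption): the explicit
length test is now enough for prove to drop the check on the constant-offset
slice expression.

func trimTxt(name string) string {
	if len(name) >= 4 {
		return name[:len(name)-4] // e.g. strip a ".txt" suffix; the slice check is provable
	}
	return name
}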
I also found a loopbce test that was incorrectly
written to detect an overflow but had an off-by-one (on the
conservative side), so it unexpectedly passed with this CL; I
changed it to really trigger the overflow as intended.
Change-Id: Ib5abade337db46b8811425afebad4719b6e46c4a
Reviewed-on: https://go-review.googlesource.com/105635
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
To be effective, this also requires being able to relax constraints
on min/max bound inclusiveness; they are now exposed through flags,
and prove has been updated to handle them correctly.
Change-Id: I3490e54461b7b9de8bc4ae40d3b5e2fa2d9f0556
Reviewed-on: https://go-review.googlesource.com/104041
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Test both the minimum and maximum bounds, and prepare
formatting for more advanced tests (inclusive/exclusive bounds).
Change-Id: Ibe432916d9c938343bc07943798bc9709ad71845
Reviewed-on: https://go-review.googlesource.com/104040
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reuse findIndVar to discover induction variables, and then
register the facts we know about them into the facts table
when entering the loop block.
Moreover, handle "x+delta > w" while updating the facts table,
to be able to prove accesses to slices with constant offsets
such as slice[i-10].
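An illustrative sketch of the kind of access this enables proving (the
function is made up): the induction variable gives i >= 10, so the
constant-offset index is in bounds.

func sum(s []int) int {
	t := 0
	for i := 10; i < len(s); i++ {
		t += s[i-10] // bound check provable: 0 <= i-10 < len(s)
	}
	return t
}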
Change-Id: I2a63d050ed58258136d54712ac7015b25c893d71
Reviewed-on: https://go-review.googlesource.com/104038
Run-TryBot: Giovanni Bajo <rasky@develer.com>
Reviewed-by: David Chase <drchase@google.com>
When a branch is followed, we apply the relation as described
in the domain relation table. In case the relation is in the
positive domain, we can also infer an unsigned relation if,
by that point, we know that both operands are non-negative.
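A hedged sketch of the pattern this helps (whether this exact snippet needed
the change is an assumption): the branches establish signed facts with both
operands known non-negative, and the bounds check needs the corresponding
unsigned relation.

func get(s []int, i int) int {
	if i >= 0 && i < len(s) {
		return s[i] // unsigned i < len(s) follows from the signed facts
	}
	return -1
}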
Fixes #20393
Change-Id: Ieaf0c81558b36d96616abae3eb834c788dd278d5
Reviewed-on: https://go-review.googlesource.com/100278
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
Reviewed-by: David Chase <drchase@google.com>
There are several things combined in this change.
First, eliminate the gobitvector type in favor
of adding a ptrbit method to bitvector.
In non-performance-critical code, use that method.
In performance critical code, though, load the bitvector data
one byte at a time and iterate only over set bits.
To support that, add and use sys.Ctz8.
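A standalone sketch of the set-bit iteration described above; here
math/bits.TrailingZeros8 stands in for the runtime-internal sys.Ctz8, and
the bit vector layout is only illustrative.

package main

import (
	"fmt"
	"math/bits"
)

func main() {
	data := []uint8{0xa1, 0x04} // pretend pointer-bitmap bytes
	for i, b := range data {
		// Visit only the set bits of each byte instead of testing all eight.
		for ; b != 0; b &= b - 1 {
			bit := bits.TrailingZeros8(b)
			fmt.Println("pointer bit at index", i*8+bit)
		}
	}
}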
name old time/op new time/op delta
StackCopyPtr-8 81.8ms ± 5% 78.9ms ± 3% -3.58% (p=0.000 n=97+96)
StackCopy-8 65.9ms ± 3% 62.8ms ± 3% -4.67% (p=0.000 n=96+92)
StackCopyNoCache-8 105ms ± 3% 102ms ± 3% -3.38% (p=0.000 n=96+95)
Change-Id: I00b80f45612708bd440b1a411a57fa6dfa24aa74
Reviewed-on: https://go-review.googlesource.com/109716
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
getArgInfo is called a lot during stack copying.
In the common case it doesn't do much work,
but it cannot be inlined.
This change works around that.
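A hedged sketch of the workaround pattern (not the runtime's actual code;
the names and types are made up): a wrapper small enough for the inliner
handles the common case, and only the rare case pays for the non-inlinable call.

type frameInfo struct{ size int32 }

const sizeUnknown = -1

// fastSize is trivially inlinable; ok is false only in the uncommon case.
func fastSize(f frameInfo) (size int32, ok bool) {
	return f.size, f.size != sizeUnknown
}

//go:noinline
func slowSize(f frameInfo) int32 {
	// stands in for the expensive, non-inlinable computation
	return 0
}

func frameSize(f frameInfo) int32 {
	if size, ok := fastSize(f); ok {
		return size
	}
	return slowSize(f)
}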
name old time/op new time/op delta
StackCopyPtr-8 108ms ± 5% 96ms ± 4% -10.40% (p=0.000 n=20+20)
StackCopy-8 82.6ms ± 3% 78.4ms ± 6% -5.15% (p=0.000 n=19+20)
StackCopyNoCache-8 130ms ± 3% 122ms ± 3% -6.44% (p=0.000 n=20+20)
Change-Id: If7d8a08c50a4e2e76e4331b399396c5dbe88c2ce
Reviewed-on: https://go-review.googlesource.com/108945
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
mips64 softfloat support is based on the mips implementation and introduces
a new environment variable, GOMIPS64.
GOMIPS64 is a GOARCH=mips64{,le} specific option, for a choice between
hard-float and soft-float. Valid values are 'hardfloat' (default) and
'softfloat'. It is passed to the assembler as
'GOMIPS64_{hardfloat,softfloat}'.
Change-Id: I7f73078627f7cb37c588a38fb5c997fe09c56134
Reviewed-on: https://go-review.googlesource.com/108475
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
As a follow-up to the first README for cmd/compile/internal/ssa.
Since this is the parent package for all the compiler packages, this
README serves as an overview of the compiler and its packages. As more
READMEs are added for specific parts with more detail, such as ssa's,
they can be linked from this one.
Thanks to Iskander Sharipov, Josh Bleecher Snyder, Matthew Dempsky,
Alberto Donizetti, and Robert Griesemer for helping with all the details
in this document.
Change-Id: I820a535e25dce86ccc667ce1c6e92b75fc32f3af
Reviewed-on: https://go-review.googlesource.com/103935
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Using oldname+resolve is how typecheck handles this anyway.
Passes toolstash -cmp, with both -iexport enabled and disabled.
Change-Id: I12b0f0333d6b86ce6bfc4d416c461b5f15c1110d
Reviewed-on: https://go-review.googlesource.com/109715
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
On amd64, Ctz must include special handling of zeros.
But the prove pass has enough information to detect whether the input
is non-zero, allowing a more efficient lowering.
Introduce new CtzNonZero ops to capture and use this information.
Benchmark code:
func BenchmarkVisitBits(b *testing.B) {
	b.Run("8", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			x := uint8(0xff)
			for x != 0 {
				sink = bits.TrailingZeros8(x)
				x &= x - 1
			}
		}
	})
	// and similarly so for 16, 32, 64
}
name old time/op new time/op delta
VisitBits/8-8 7.27ns ± 4% 5.58ns ± 4% -23.35% (p=0.000 n=28+26)
VisitBits/16-8 14.7ns ± 7% 10.5ns ± 4% -28.43% (p=0.000 n=30+28)
VisitBits/32-8 27.6ns ± 8% 19.3ns ± 3% -30.14% (p=0.000 n=30+26)
VisitBits/64-8 44.0ns ±11% 38.0ns ± 5% -13.48% (p=0.000 n=30+30)
Fixes #25077
Change-Id: Ie6e5bd86baf39ee8a4ca7cadcf56d934e047f957
Reviewed-on: https://go-review.googlesource.com/109358
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Running 'go run *.go' in the gen directory resulted in this diff.
Change-Id: Iee398a720f54d3f2c3c122fc6fc45a708a39e45e
Reviewed-on: https://go-review.googlesource.com/109575
Run-TryBot: Michael Munday <mike.munday@ibm.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change implements math.Round as an intrinsic on ppc64x so it can be
done using a single instruction.
benchmark old ns/op new ns/op delta
BenchmarkRound-16 2.60 0.69 -73.46%
Change-Id: I9408363e96201abdfc73ced7bcd5f0c29db006a8
Reviewed-on: https://go-review.googlesource.com/109395
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
When we moved racewalk to buildssa, we disabled SSA of structs with
2-4 fields while instrumenting, because it caused false dependencies:
for "x.f" we would emit
(StructSelect (Load (Addr x)) "f")
Even though we later simplify this to
(Load (OffPtr (Addr x) "f"))
the instrumentation saw a load of x in its entirety and would issue
appropriate race/msan calls.
The fix taken here is to directly emit the OffPtr form when x.f is
addressable and can't be represented in SSA form.
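An illustrative shape of the code affected (the type and names are made up):
under -race or -msan, the read of x.b used to be reported as a read of all of
x, and is now reported as a read of just the field.

type pair struct{ a, b int64 }

var x pair
var sink int64

func read() {
	sink = x.b // instrumented as a load of x.b only, not of the whole struct
}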
Change-Id: I0caf37bced52e9c16937466b0ac8cab6d356e525
Reviewed-on: https://go-review.googlesource.com/109360
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The previous change sped up the pure computation form of LeadingZeros8.
This places it somewhat close to the table lookup form.
Depending on something that varies from toolchain to toolchain
(alignment, perhaps?), the slowdown from ditching the table lookup
is either 20% or 5%.
This benchmark is the best case scenario for the table lookup:
It is in the L1 cache already.
I think we're close enough that we can switch to the computational version,
and trust that the memory effects and binary size savings will be worth it.
Code:
func f8(x uint8) { z = bits.LeadingZeros8(x) }
Before:
"".f8 STEXT nosplit size=34 args=0x8 locals=0x0
0x0000 00000 (x.go:7) TEXT "".f8(SB), NOSPLIT, $0-8
0x0000 00000 (x.go:7) FUNCDATA $0, gclocals·2a5305abe05176240e61b8620e19a815(SB)
0x0000 00000 (x.go:7) FUNCDATA $1, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
0x0000 00000 (x.go:7) MOVBLZX "".x+8(SP), AX
0x0005 00005 (x.go:7) MOVBLZX AL, AX
0x0008 00008 (x.go:7) LEAQ math/bits.len8tab(SB), CX
0x000f 00015 (x.go:7) MOVBLZX (CX)(AX*1), AX
0x0013 00019 (x.go:7) ADDQ $-8, AX
0x0017 00023 (x.go:7) NEGQ AX
0x001a 00026 (x.go:7) MOVQ AX, "".z(SB)
0x0021 00033 (x.go:7) RET
After:
"".f8 STEXT nosplit size=30 args=0x8 locals=0x0
0x0000 00000 (x.go:7) TEXT "".f8(SB), NOSPLIT, $0-8
0x0000 00000 (x.go:7) FUNCDATA $0, gclocals·2a5305abe05176240e61b8620e19a815(SB)
0x0000 00000 (x.go:7) FUNCDATA $1, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
0x0000 00000 (x.go:7) MOVBLZX "".x+8(SP), AX
0x0005 00005 (x.go:7) MOVBLZX AL, AX
0x0008 00008 (x.go:7) LEAL 1(AX)(AX*1), AX
0x000c 00012 (x.go:7) BSRL AX, AX
0x000f 00015 (x.go:7) ADDQ $-8, AX
0x0013 00019 (x.go:7) NEGQ AX
0x0016 00022 (x.go:7) MOVQ AX, "".z(SB)
0x001d 00029 (x.go:7) RET
Change-Id: Icc7db50a7820fb9a3da8a816d6b6940d7f8e193e
Reviewed-on: https://go-review.googlesource.com/108942
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Get rid of a bunch of stuff we've already done.
Change-Id: Ibae4be7535ddb58590a072a2390c5f3e948c2fd7
Reviewed-on: https://go-review.googlesource.com/109136
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This reduces the API surface of Type slightly (for #25056), but also
makes it more consistent with the reflect and go/types APIs.
Passes toolstash-check.
Change-Id: Ief9a8eb461ae6e88895f347e2a1b7b8a62423222
Reviewed-on: https://go-review.googlesource.com/109138
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This was an artifact from when we had a separate ssa.Type interface to
break circular dependency between packages ssa and gc. It's no longer
needed now that package ssa directly uses package types.
Change-Id: I6a93e5d79082815f7f0eb89507381969cc6cb403
Reviewed-on: https://go-review.googlesource.com/109137
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
We currently rewrite
(TESTQ (MOVQconst [c]) x) into (TESTQconst [c] x)
and (TESTQconst [-1] x) into (TESTQ x x).
If x is a (MOVQconst [-1]), we will be stuck in an endless rewrite loop.
Don't perform the rewrite in such cases.
Fixes #25006
Change-Id: I77f561ba2605fc104f1e5d5c57f32e9d67a2c000
Reviewed-on: https://go-review.googlesource.com/108879
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
gc/ssa.go initializes SP and SB values with the TUINTPTR type.
Assign the same type in SSA tests and modify check.go to catch
mismatching types for those ops.
This makes SSA tests more consistent.
Change-Id: I798440d57d00fb949d1a0cd796759c9b82a934bd
Reviewed-on: https://go-review.googlesource.com/106658
Run-TryBot: Iskander Sharipov <iskander.sharipov@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The go/types API exposes what package objects were declared in, which
includes struct fields, interface methods, and function parameters.
The compiler implicitly tracks these for non-exported identifiers
(through the Sym's associated Pkg), but exported identifiers always
use localpkg. To simplify identifying this, add an explicit package
field to struct, interface, and function types.
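For comparison, a small go/types example of the behavior being matched (a
hedged sketch using the public API; the package and field here are constructed
by hand purely for illustration):

package main

import (
	"fmt"
	"go/token"
	"go/types"
)

func main() {
	pkg := types.NewPackage("example.com/p", "p")
	// Even an exported struct field records the package it was declared in.
	field := types.NewField(token.NoPos, pkg, "Name", types.Typ[types.String], false)
	st := types.NewStruct([]*types.Var{field}, nil)
	fmt.Println(st.Field(0).Name(), st.Field(0).Pkg().Path()) // Name example.com/p
}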
Change-Id: I6adc5dc653e78f058714259845fb3077066eec82
Reviewed-on: https://go-review.googlesource.com/107622
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Rewrite x<<1+c into x+x+c, which can be expressed as a single LEAQ/LEAL.
Bit of a special case, but the single-instruction
LEA is both shorter and faster than SHL then ADD.
Triggers 293 times during make.bash.
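A tiny example of the pattern (hedged; whether this exact function is among
the 293 hits is an assumption):

func index(x, c int64) int64 {
	return x<<1 + c // now compiled as x+x+c, a single LEAQ on amd64
}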
Change-Id: I3f09c8e9a8f3859d1eeed336f095fc3ada79c2c1
Reviewed-on: https://go-review.googlesource.com/108938
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
For struct fields and methods, Field.Nname was only used to store
position information, which means we're allocating an entire ONAME
Node+Name+Param structure just for one field. We can optimize away
these ONAME allocations by instead adding a Field.Pos field.
Unfortunately, we can't get rid of Field.Nname, because it's needed
for function parameters, so Field grows a little bit and now has more
redundant information in those cases. However, that was already the
case (e.g., Field.Sym and Field.Nname.Sym), and it's still a net win
for allocations as demonstrated by the benchmarks below.
Additionally, by moving the ONAME allocation for function parameters
to funcargs, we can avoid allocating them for function parameters that
aren't used in corresponding function bodies (e.g., interface methods,
function-typed variables, and imported functions/methods without
inline bodies).
name old time/op new time/op delta
Template 254ms ± 6% 251ms ± 6% -1.04% (p=0.000 n=487+488)
Unicode 128ms ± 7% 128ms ± 7% ~ (p=0.294 n=482+467)
GoTypes 862ms ± 5% 860ms ± 4% ~ (p=0.075 n=488+471)
Compiler 3.91s ± 4% 3.90s ± 4% -0.39% (p=0.000 n=468+473)
name old user-time/op new user-time/op delta
Template 339ms ±14% 336ms ±14% -1.02% (p=0.001 n=498+494)
Unicode 176ms ±18% 176ms ±25% ~ (p=0.940 n=491+499)
GoTypes 1.13s ± 8% 1.13s ± 9% ~ (p=0.157 n=496+493)
Compiler 5.24s ± 6% 5.21s ± 6% -0.57% (p=0.000 n=485+489)
name old alloc/op new alloc/op delta
Template 38.3MB ± 0% 37.3MB ± 0% -2.58% (p=0.000 n=499+497)
Unicode 29.1MB ± 0% 29.1MB ± 0% -0.03% (p=0.000 n=500+493)
GoTypes 116MB ± 0% 115MB ± 0% -0.65% (p=0.000 n=498+499)
Compiler 492MB ± 0% 487MB ± 0% -1.00% (p=0.000 n=497+498)
name old allocs/op new allocs/op delta
Template 364k ± 0% 360k ± 0% -1.15% (p=0.000 n=499+499)
Unicode 336k ± 0% 336k ± 0% -0.01% (p=0.000 n=500+493)
GoTypes 1.16M ± 0% 1.16M ± 0% -0.30% (p=0.000 n=499+499)
Compiler 4.54M ± 0% 4.51M ± 0% -0.58% (p=0.000 n=494+495)
Passes toolstash-check -gcflags=-dwarf=false. Changes DWARF output
because position information is now tracked more precisely for
function parameters.
Change-Id: Ib8077d70d564cc448c5e4290baceab3a4396d712
Reviewed-on: https://go-review.googlesource.com/108217
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Currently Liveness.compact rewrites the Liveness.livevars slice in
place. However, we're about to add register maps, which we'll want to
track in livevars, but compact independently from the stack maps.
Hence, this CL modifies Liveness.compact to consume Liveness.livevars
and produce a new slice of deduplicated stack maps. This is somewhat
clearer anyway because it avoids potential confusion over how
Liveness.livevars is indexed.
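A hedged sketch of the consume-and-dedup shape described above; bit vectors
are represented as plain byte slices here, which is purely illustrative, and
the real compiler code differs.

func compact(livevars [][]byte) (stackMaps [][]byte, remap []int) {
	index := map[string]int{}
	remap = make([]int, len(livevars))
	for i, lv := range livevars {
		key := string(lv)
		idx, ok := index[key]
		if !ok {
			idx = len(stackMaps)
			index[key] = idx
			stackMaps = append(stackMaps, lv)
		}
		remap[i] = idx
	}
	return stackMaps, remap
}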
Passes toolstash -cmp.
For #24543.
Change-Id: I7093fbc71143f8a29e677aa30c96e501f953ca2b
Reviewed-on: https://go-review.googlesource.com/108498
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This triggers three times while building std,
once in image/png and twice in go/internal/gccgoimporter.
There are no instances in std in which a more aggressive
optimization would have triggered.
This doesn't necessarily avoid an allocation,
because escape analysis is already able in many cases
to use a temporary backing for the string,
but it does at a minimum avoid the runtime call and copy.
Fixes #24937
Change-Id: I7019e85638ba8cd7e2f03890e672558b858579bc
Reviewed-on: https://go-review.googlesource.com/108035
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
This CL moves all of the logic for wiring up imported declarations
into export.go, so that it can be reused by the indexed importer
code. While here, increase symmetry across routines.
Passes toolstash-check.
Change-Id: I1ccec5c3999522b010e4d04ed56b632fd4d712d9
Reviewed-on: https://go-review.googlesource.com/107621
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
These will appear when tracking live pointers in registers, so we need
to know whether they have pointers.
For #24543.
Change-Id: I2edccee39ca989473db4b3e7875ff166808ac141
Reviewed-on: https://go-review.googlesource.com/108497
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>