Commit Graph

40 Commits

Author SHA1 Message Date
Michael Anthony Knyszek 38ac7c41aa runtime: implement experiment to replace heap bitmap with alloc headers
This change replaces the 1-bit-per-word heap bitmap for most size
classes with allocation headers for objects that contain pointers. The
header consists of a single pointer to a type. All allocations with
headers are treated as implicitly containing one or more instances of
the type in the header.

As the name implies, headers are usually stored as the first word of an
object. There are two exceptions to where headers are stored
and how they're used.

Objects smaller than 512 bytes do not have headers. Instead, a heap
bitmap is reserved at the end of spans for objects of this size. A full
word of overhead is too much for these small objects. The bitmap has
the same format as the old bitmap, minus the noMorePtrs bits, which
are unnecessary. All objects <512 bytes have a bitmap less than a
pointer-word in size, and that was the granularity at which noMorePtrs
could stop scanning early anyway.

Objects that are larger than 32 KiB (which have their own span) have
their headers stored directly in the span, to allow power-of-two-sized
allocations to not spill over into an extra page.
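
Conceptually, the common case can be modeled like this (a hedged
sketch with hypothetical names; the real header is maintained by the
allocator, not declared in Go, and this model uses unsafe):

    // rtype stands in for the runtime's type metadata (*_type).
    type rtype struct {
        size   uintptr
        gcdata *byte // compact ptr/scalar layout for one instance
    }

    // typeOf models recovering an allocation's element type: for most
    // size classes it is simply the allocation's first word.
    func typeOf(obj unsafe.Pointer) *rtype {
        return *(**rtype)(obj)
    }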

The full implementation is behind GOEXPERIMENT=allocheaders.

The purpose of this change is performance. First and foremost, with
headers we no longer have to unroll pointer/scalar data at allocation
time for most size classes. Small size classes still need some
unrolling, but their bitmaps are small so we can optimize that case
fairly well. Larger objects effectively have their pointer/scalar data
unrolled on-demand from type data, which is much more compactly
represented and results in less TLB pressure. Furthermore, since the
headers are usually right next to the object and where we're about to
start scanning, we get an additional temporal locality benefit in the
data cache when looking up type metadata. The pointer/scalar data is
now effectively unrolled on-demand, but it's also simpler to unroll than
before; that unrolled data is never written anywhere, and for arrays we
get the benefit of retreading the same data per element, as opposed to
looking it up from scratch for each pointer-word of bitmap. Lastly,
because we no longer have a heap bitmap that spans the entire heap,
there's a flat 1.5% memory use reduction. This is balanced slightly by
some objects possibly being bumped up a size class, but most objects are
not tightly optimized to size class sizes so there's some memory to
spare, making the header basically free in those cases.

See the follow-up CL which turns on this experiment by default for
benchmark results. (CL 538217.)

Change-Id: I4c9034ee200650d06d8bdecd579d5f7c1bbf1fc5
Reviewed-on: https://go-review.googlesource.com/c/go/+/437955
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-11-09 19:58:08 +00:00
Keith Randall 39263f34a3 cmd/compile: add a cache to interface type switches
That way we don't need to call into the runtime when the type being
switched on has been seen many times before.

The cache is just a hash table of a sample of all the concrete types
that have been switched on at that source location.  We record the
matching case number and the resulting itab for each concrete input
type.
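
Conceptually, each cache entry records something like the following
(a hedged sketch with hypothetical names; the real cache is a
runtime-internal hash table kept per switch site):

    type switchCacheEntry struct {
        typ  uintptr // concrete type seen at this switch (the key)
        c    int     // index of the case that matched
        itab uintptr // itab for converting to that case's interface
    }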

The caches seldom get large. The only two in a run of all.bash that
get more than 100 entries, even with the sampling rate set to 1, are

test/fixedbugs/issue29264.go, with 101
test/fixedbugs/issue29312.go, with 254

Both happen at the type switch in fmt.(*pp).handleMethods, perhaps
unsurprisingly.

name                                 old time/op  new time/op  delta
SwitchInterfaceTypePredictable-24    25.8ns ± 2%   2.5ns ± 3%  -90.43%  (p=0.000 n=10+10)
SwitchInterfaceTypeUnpredictable-24  37.5ns ± 2%  11.2ns ± 1%  -70.02%  (p=0.000 n=10+10)

Change-Id: I4961ac9547b7f15b03be6f55cdcb972d176955eb
Reviewed-on: https://go-review.googlesource.com/c/go/+/526658
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
2023-10-06 15:44:08 +00:00
Egon Elbre 1d84b02b22 runtime: introduce nextslicecap
This allows reusing the slice cap computation across the
specialized growslice funcs.
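
For reference, the shared computation is roughly the following (a
sketch paraphrasing the growth policy: double small slices, then grow
by about 1.25x; details may differ from the final code):

    func nextslicecap(newLen, oldCap int) int {
        newcap := oldCap
        doublecap := newcap + newcap
        if newLen > doublecap {
            return newLen // jump straight to the requested length
        }
        const threshold = 256
        if oldCap < threshold {
            return doublecap // small slices: double
        }
        for {
            // Grow by ~1.25x, transitioning smoothly from doubling.
            newcap += (newcap + 3*threshold) >> 2
            if uint(newcap) >= uint(newLen) {
                break
            }
        }
        if newcap <= 0 {
            return newLen // the computation overflowed
        }
        return newcap
    }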

Updates #49480

Change-Id: Ie075d9c3075659ea14c11d51a9cd4ed46aa0e961
Reviewed-on: https://go-review.googlesource.com/c/go/+/495876
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Egon Elbre <egonelbre@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Ian Lance Taylor <iant@golang.org>
2023-09-04 17:50:50 +00:00
Than McIntosh 445e520d49 cmd/compile: allow more inlining of functions that construct closures
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis, then later fixed
first with an attempt in CL 482355, then again in CL 484859, and then
one more time with CL 492135.]

Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:

- If the closure contains a disallowed operation (e.g., a defer), then
  the outer function can't be inlined. It makes sense that the
  *closure* can't be inlined in this case, but it doesn't make sense
  to punish the function that constructs the closure.

- The hairiness of the closure counts against the inlining budget of
  the outer function. Since we currently copy the closure body when
  inlining the outer function, this makes sense from the perspective
  of export data size and binary size, but ultimately doesn't make
  much sense from the perspective of what should be inlineable.

- Since the inliner walks into every closure created by an outer
  function in addition to starting a walk at every closure, this adds
  an n^2 factor to inlinability analysis.

This CL simply drops this behavior.
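
For illustration (hypothetical code), an outer function like this was
previously not inlinable solely because of the defer in its closure:

    func mkCleanup(f, g func()) func() {
        return func() {
            defer g() // disallowed in an inlinable body
            f()
        }
    }
    // After this CL, mkCleanup itself may be inlined; only the
    // closure body remains ineligible.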

In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).

This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.

The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:

	       │    before    │           after            │
	       │    bytes     │    bytes      vs base      │
Go/binary        15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text          5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary   22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text     8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)

Change-Id: I5f75fcceb177f05853996b75184a486528eafe96
Reviewed-on: https://go-review.googlesource.com/c/go/+/492017
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-05-05 21:04:48 +00:00
Michael Knyszek ce10e9d845 Revert "cmd/compile: allow more inlining of functions that construct closures"
This reverts commit f8162a0e72.

Reason for revert: https://github.com/golang/go/issues/59680

Change-Id: I91821c691a2d019ff0ad5b69509e32f3d56b8f67
Reviewed-on: https://go-review.googlesource.com/c/go/+/485498
Reviewed-by: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
2023-04-17 21:45:00 +00:00
Than McIntosh f8162a0e72 cmd/compile: allow more inlining of functions that construct closures
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis, then later fixed
first with an attempt in CL 482355, then again in CL 484859.]

Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:

- If the closure contains a disallowed operation (e.g., a defer), then
  the outer function can't be inlined. It makes sense that the
  *closure* can't be inlined in this case, but it doesn't make sense
  to punish the function that constructs the closure.

- The hairiness of the closure counts against the inlining budget of
  the outer function. Since we currently copy the closure body when
  inlining the outer function, this makes sense from the perspective
  of export data size and binary size, but ultimately doesn't make
  much sense from the perspective of what should be inlineable.

- Since the inliner walks into every closure created by an outer
  function in addition to starting a walk at every closure, this adds
  an n^2 factor to inlinability analysis.

This CL simply drops this behavior.

In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).

This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.

The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:

	       │    before    │           after            │
	       │    bytes     │    bytes      vs base      │
Go/binary        15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text          5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary   22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text     8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)

Updates #56102.

Change-Id: I6e938d596992ffb473cf51e7e598f372ce08deb0
Reviewed-on: https://go-review.googlesource.com/c/go/+/484860
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-04-17 14:52:41 +00:00
Than McIntosh 8854be4180 Revert "cmd/compile: allow more inlining of functions that construct closures"
This reverts commit http://go.dev/cl/c/482356.

Reason for revert: Reverting this change again, since it is causing additional failures in google-internal testing.

Change-Id: I9234946f62e5bb18c2f873a65e8b298d04af0809
Reviewed-on: https://go-review.googlesource.com/c/go/+/484735
Reviewed-by: Florian Zenker <floriank@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Auto-Submit: Than McIntosh <thanm@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-04-14 14:45:59 +00:00
Than McIntosh 39986d28e4 cmd/compile: allow more inlining of functions that construct closures
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis since fixed in CL 482355.]

Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:

- If the closure contains a disallowed operation (e.g., a defer), then
  the outer function can't be inlined. It makes sense that the
  *closure* can't be inlined in this case, but it doesn't make sense
  to punish the function that constructs the closure.

- The hairiness of the closure counts against the inlining budget of
  the outer function. Since we currently copy the closure body when
  inlining the outer function, this makes sense from the perspective
  of export data size and binary size, but ultimately doesn't make
  much sense from the perspective of what should be inlineable.

- Since the inliner walks into every closure created by an outer
  function in addition to starting a walk at every closure, this adds
  an n^2 factor to inlinability analysis.

This CL simply drops this behavior.

In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).

This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.

The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:

	       │    before    │           after            │
	       │    bytes     │    bytes      vs base      │
Go/binary        15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text          5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary   22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text     8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)

Updates #56102.

Change-Id: I1f4fc96c71609c8feb59fecdb92b69ba7e3b5b41
Reviewed-on: https://go-review.googlesource.com/c/go/+/482356
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-04-07 15:12:08 +00:00
Than McIntosh f46320849d cmd/compile/internal/test: skip testpoint due to revert of CL 479095
Skip one of the testpoints that verifies inlining, since it
no longer passes as a result of reverting CL 479095. Once we
roll forward with a new version of CL 479095 we can re-enable
this testpoint.

Change-Id: I41f6fb3fce78f31e60c5f0ed2856be0e66865149
Reviewed-on: https://go-review.googlesource.com/c/go/+/481755
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
2023-04-03 18:54:15 +00:00
Austin Clements bb44c2b54e sync: implement OnceFunc, OnceValue, and OnceValues
This adds the three functions from #56102 to the sync package. These
provide a convenient API for the most common uses of sync.Once.

The performance of these is comparable to direct use of sync.Once:

$ go test -run ^$ -bench OnceFunc\|OnceVal -count 20 | benchstat -row .name -col /v
goos: linux
goarch: amd64
pkg: sync
cpu: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
          │     Once     │                Global                 │                Local                 │
          │    sec/op    │    sec/op     vs base                 │    sec/op     vs base                │
OnceFunc    1.3500n ± 6%   2.7030n ± 1%  +100.22% (p=0.000 n=20)   0.3935n ± 0%  -70.86% (p=0.000 n=20)
OnceValue   1.3155n ± 0%   2.7460n ± 1%  +108.74% (p=0.000 n=20)   0.5478n ± 1%  -58.35% (p=0.000 n=20)

The "Once" column represents the baseline of how code would typically
express these patterns using sync.Once. "Global" binds the closure
returned by OnceFunc/OnceValue to global, which is how I expect these
to be used most of the time. Currently, this defeats some inlining
opportunities, which roughly doubles the cost over sync.Once; however,
it's still *extremely* fast. Finally, "Local" binds the returned
closure to a local variable. This unlocks several levels of inlining
and represents pretty much the best possible case for these APIs, but
is also unlikely to happen in practice. In principle the compiler
could recognize that the global in the "Global" case is initialized in
place and never mutated and do the same optimizations it does in the
"Local" case, but it currently does not.

Fixes #56102

Change-Id: If7355eccd7c8de7288d89a4282ff15ab1469e420
Reviewed-on: https://go-review.googlesource.com/c/go/+/451356
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Caleb Spare <cespare@gmail.com>
Auto-Submit: Austin Clements <austin@google.com>
2023-03-31 20:01:17 +00:00
Keith Randall d3daeb5267 runtime: remove the restriction that write barrier ptrs come in pairs
Future CLs will remove the invariant that pointers are always put in
the write barrier in pairs.

The behavior of the assembly code changes a bit: instead of writing
the pointers unconditionally and then checking for overflow, it now
checks for overflow first and then writes the pointers.

Also changed the write barrier flush function to not take the src/dst
as arguments.

Change-Id: I2ef708038367b7b82ea67cbaf505a1d5904c775c
Reviewed-on: https://go-review.googlesource.com/c/go/+/447779
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Bypass: Keith Randall <khr@golang.org>
2023-02-17 22:19:26 +00:00
Cuong Manh Le 3e1478ef0d cmd/compile: cleanup atomic.Pointer[T] inline test
Updates #57410

Change-Id: I9be38e20c6b83d14f7785049a66de77ac7ecdf15
Reviewed-on: https://go-review.googlesource.com/c/go/+/463997
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2023-01-31 20:36:13 +00:00
Matthew Dempsky 4f467f1082 cmd: remove GOEXPERIMENT=nounified knob
This CL removes the GOEXPERIMENT=nounified knob, and any conditional
statements that depend on that knob. Further CLs to remove unreachable
code follow this one.

Updates #57410.

Change-Id: I39c147e1a83601c73f8316a001705778fee64a91
Reviewed-on: https://go-review.googlesource.com/c/go/+/458615
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-01-25 21:16:32 +00:00
qmuntal 2423370136 utf16: reduce utf16.Decode allocations
This CL avoids allocating in utf16.Decode for code point sequences
with fewer than 64 elements. It does so by splitting the function in
two: one part that can be inlined and preallocates a buffer, and
another that does the heavy lifting.

The mid-stack inliner will allocate the buffer in the caller stack,
and in many cases this will be enough to avoid the allocation.
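
Schematically, the split looks like this (a hedged sketch: the
64-element threshold follows the description above, and the decode
body is elided):

    // Decode stays under the inlining budget, so the mid-stack
    // inliner can place buf's backing array in the caller's frame
    // when the result does not escape.
    func Decode(s []uint16) []rune {
        buf := make([]rune, 0, 64)
        return decode(s, buf)
    }

    // decode does the heavy lifting and is too large to inline.
    func decode(s []uint16, buf []rune) []rune {
        for i := 0; i < len(s); i++ {
            // ... full surrogate-pair handling elided ...
            buf = append(buf, rune(s[i]))
        }
        return buf
    }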

unicode/utf16 benchmarks:

name                         old time/op    new time/op    delta
DecodeValidASCII-12            60.1ns ± 3%    16.0ns ±20%   -73.40%  (p=0.000 n=8+10)
DecodeValidJapaneseChars-12    61.3ns ±10%    14.9ns ±39%   -75.71%  (p=0.000 n=10+10)

name                         old alloc/op   new alloc/op   delta
DecodeValidASCII-12             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)
DecodeValidJapaneseChars-12     48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)

name                         old allocs/op  new allocs/op  delta
DecodeValidASCII-12              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
DecodeValidJapaneseChars-12      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)

I've also benchmarked os.File.ReadDir with this change applied
to demonstrate that it does make a difference at the call site, in this
case via syscall.UTF16ToString:

name        old time/op    new time/op    delta
ReadDir-12     592µs ± 8%     620µs ±16%     ~     (p=0.280 n=10+10)

name        old alloc/op   new alloc/op   delta
ReadDir-12    30.4kB ± 0%    22.4kB ± 0%  -26.10%  (p=0.000 n=8+10)

name        old allocs/op  new allocs/op  delta
ReadDir-12       402 ± 0%       272 ± 0%  -32.34%  (p=0.000 n=10+10)

Change-Id: I65cf5caa3fd3b3a466c0ed837a50a96e975bbe6b
Reviewed-on: https://go-review.googlesource.com/c/go/+/453415
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Quim Muntal <quimmuntal@gmail.com>
2023-01-23 20:59:01 +00:00
Bryan C. Mills 2cb103ff58 cmd/compile: use testenv.Command instead of exec.Command in tests
testenv.Command sets a default timeout based on the test's deadline
and sends SIGQUIT (where supported) in case of a hang.
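
A sketch of the resulting test pattern (hypothetical test; assumes
the internal testenv API):

    func TestGoVersion(t *testing.T) {
        cmd := testenv.Command(t, testenv.GoToolPath(t), "version")
        out, err := cmd.CombinedOutput()
        if err != nil {
            t.Fatalf("go version failed: %v\n%s", err, out)
        }
    }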

Change-Id: I084b324a20d5ecf733b2cb95f160947a7410a805
Reviewed-on: https://go-review.googlesource.com/c/go/+/450696
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
2022-11-15 20:19:15 +00:00
Youlin Feng 7ae652b7c0 runtime: replace all uses of CtzXX with TrailingZerosXX
Replace all uses of Ctz64/32/8 with TrailingZeros64/32/8, because
they are the same and keeping both is needless duplication. Also
rename the CtzXX functions in 386 assembly code.

Change-Id: I19290204858083750f4be589bb0923393950ae6d
Reviewed-on: https://go-review.googlesource.com/c/go/+/438935
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
2022-10-18 18:06:27 +00:00
Than McIntosh 4a459cbbad cmd/compile: tweak inliner's handling of coverage counter updates
This patch fixes up a bug in the inliner's special case code for
coverage counter updates, which was not properly working for
-covermode=atomic compilations.

Updates #56044.

Change-Id: I9e309312b123121c3df02862623bdbab1f6c6a4b
Reviewed-on: https://go-review.googlesource.com/c/go/+/441858
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2022-10-10 19:56:43 +00:00
Cuong Manh Le ef69718dd7 all: make sure *Pointer[T]'s methods are inlined as intended
Updates #50860

Change-Id: I65bced707e50364b16edf4b087c541cf19bb1778
Reviewed-on: https://go-review.googlesource.com/c/go/+/428362
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-09-06 18:16:03 +00:00
Austin Clements dbf442b1b2 runtime: replace stkframe.arglen/argmap with methods
Currently, stkframe.arglen and stkframe.argmap are populated by
gentraceback under a particular set of circumstances. But because they
can be constructed from other fields in stkframe, they don't need to
be computed eagerly at all. They're also rather misleading, as they're
only part of computing the actual argument map and most callers should
be using getStackMap, which does the rest of the work.

This CL drops these fields from stkframe. It shifts the functions that
used to compute them, getArgInfoFast and getArgInfo, into
corresponding methods stkframe.argBytes and stkframe.argMapInternal.
argBytes is expected to be used by callers that need to know only the
argument frame size, while argMapInternal is used only by argBytes and
getStackMap.

We also move some of the logic from getStackMap into argMapInternal
because the previous split of responsibilities didn't make much sense.
This lets us return just a bitvector from argMapInternal, rather than
both a bitvector, which carries a size, and an "actually use this
size".

The getArgInfoFast function was inlined before (and inl_test checked
this). We drop that requirement from stkframe.argBytes because the
uses of this have shifted and now it's only called from heap dumping
(which never happens) and conservative stack frame scanning (which
very, very rarely happens).

There will be a few follow-up clean-up CLs.

For #54466. This is a nice clean-up on its own, but it also serves to
remove pointers from the traceback state that would eventually become
troublesome write barriers once we stack-rip gentraceback.

Change-Id: I107f98ed8e7b00185c081de425bbf24af02a4163
Reviewed-on: https://go-review.googlesource.com/c/go/+/424514
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-09-02 19:08:53 +00:00
Keith Randall 6a9c674a09 runtime: redo heap bitmap
[this is a retry of CL 407035 + its revert CL 422395. The content is unchanged]

Use just 1 bit per word to record the ptr/nonptr bitmap.
Use word-sized operations to manipulate the bitmap, so we can operate
on up to 64 ptr/nonptr bits at a time.

Use a separate bitmap, one bit per word of the ptr/nonptr bitmap,
to encode a no-more-pointers signal. Since we can check 64 ptr/nonptr
bits at once, knowing the exact last pointer location is not necessary.
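
This enables the kind of word-at-a-time scan sketched below
(hypothetical helper; assumes a 64-bit platform and math/bits):

    // visitPtrWords calls visit for each word marked as a pointer,
    // consuming 64 ptr/nonptr bits per load.
    func visitPtrWords(base uintptr, bitmap []uint64, visit func(uintptr)) {
        for i, w := range bitmap {
            for w != 0 {
                j := bits.TrailingZeros64(w)    // next pointer bit
                visit(base + uintptr(i*64+j)*8) // 8 = pointer size
                w &= w - 1                      // clear the lowest set bit
            }
        }
    }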

As a followon CL, we should make the gcdata bitmap an array of
uintptr instead of an array of byte, so we can load 64 bits of it at once.
Similarly for the processing of gc programs.

Change-Id: Ica5eb622f5b87e647be64f471d67b02732ef8be6
Reviewed-on: https://go-review.googlesource.com/c/go/+/422634
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
2022-08-16 20:39:36 +00:00
Keith Randall ad0287f496 Revert "runtime: redo heap bitmap"
This reverts commit b589208c8c.

Reason for revert: Bug somewhere in this code, causing wasm and maybe linux/386 to fail.

Change-Id: I5e1e501d839584e0219271bb937e94348f83c11f
Reviewed-on: https://go-review.googlesource.com/c/go/+/422395
Reviewed-by: Than McIntosh <thanm@google.com>
Run-TryBot: Keith Randall <khr@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-08-09 16:10:10 +00:00
Keith Randall b589208c8c runtime: redo heap bitmap
Use just 1 bit per word to record the ptr/nonptr bitmap.
Use word-sized operations to manipulate the bitmap, so we can operate
on up to 64 ptr/nonptr bits at a time.

Use a separate bitmap, one bit per word of the ptr/nonptr bitmap,
to encode a no-more-pointers signal. Since we can check 64 ptr/nonptr
bits at once, knowing the exact last pointer location is not necessary.

This cleans up the bitmap implementation significantly, which will
hopefully make it faster. TODO: measure

As a followon CL, we should make the gcdata bitmap an array of
uintptr instead of an array of byte, so we can load 64 bits of it at once.
Similarly for the processing of gc programs.

Change-Id: I18151b1876d9543599800dec51e2a1b19df97d49
Reviewed-on: https://go-review.googlesource.com/c/go/+/407035
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@google.com>
2022-08-08 16:57:33 +00:00
Cherry Mui 2c3cb19a98 cmd/compile/internal/test: make TestIntendedInlining faster
There is no need to build with -a: the go command should do the
right thing and pass the flags through. Also, we only care about
packages mentioned on the command line, so there is no need to add
-gcflags=all=....

May fix #52081.

Change-Id: Idabcfe285c90ed5d25ea6d42abd7617078d3283a
Reviewed-on: https://go-review.googlesource.com/c/go/+/407015
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2022-05-18 18:06:05 +00:00
Xiaodong Liu 845a95b1ae cmd/compile/internal: fix test error on loong64
For the TestLogOpt test case, add loong64 support when testing the
host architecture and OS.

For TestIntendedInlining, note that Ctz64 is not intrinsified on loong64.

Contributors to the loong64 port are:
  Weining Lu <luweining@loongson.cn>
  Lei Wang <wanglei@loongson.cn>
  Lingqin Gong <gonglingqin@loongson.cn>
  Xiaolin Zhao <zhaoxiaolin@loongson.cn>
  Meidan Li <limeidan@loongson.cn>
  Xiaojuan Zhai <zhaixiaojuan@loongson.cn>
  Qiyuan Pu <puqiyuan@loongson.cn>
  Guoqi Chen <chenguoqi@loongson.cn>

This port has been updated to Go 1.15.6:
  https://github.com/loongson/go

Updates #46229

Change-Id: I42280e89a337dbfde55a01a134820f8ae94f6b47
Reviewed-on: https://go-review.googlesource.com/c/go/+/400237
Reviewed-by: David Chase <drchase@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
2022-05-13 22:18:38 +00:00
Russ Cox ffe48e00ad sync/atomic: add typed atomic values
These implementations will inline to the lower-level primitives,
but they hide the underlying values so that all accesses are
forced to use the atomic APIs. They also allow the use of shorter
names (methods instead of functions) at call sites, making code
more readable.

Pointer[T] also avoids conversions using unsafe.Pointer at call sites.

Discussed on #47141.
See also https://research.swtch.com/gomm for background.
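
Typical usage (Config is a hypothetical type):

    var cfg atomic.Pointer[Config]
    var ready atomic.Bool

    func publish(c *Config) {
        cfg.Store(c)
        ready.Store(true)
    }

    func current() *Config {
        return cfg.Load() // nil until the first Store
    }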

Fixes #50860.

Change-Id: I0b178ee0c7747fa8985f8e48cd7b01063feb7dcc
Reviewed-on: https://go-review.googlesource.com/c/go/+/381317
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-04 18:05:18 +00:00
Joe Tsai 67d6be139c reflect: make more Value methods inlineable
The following Value methods are now inlineable:

    Bool  for ~bool
    String for ~string (but not other kinds)
    Bytes for []byte (but not ~[]byte or ~[N]byte)
    Len   for ~[]T (but not ~[N]T, ~chan T, ~map[K]V, or ~string)
    Cap   for ~[]T (but not ~[N]T or ~chan T)

For Bytes, we only have enough inline budget to inline one type,
so we optimize for unnamed []byte, which is far more common than
named []byte or [N]byte.

For Len and Cap, we only have enough inline budget to inline one kind,
so we optimize for ~[]T, which is more common than the others.
The exception is string, but the size of a string can be obtained
through len(v.String()).
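
Illustrative calls hitting the fast paths listed above:

    func example() {
        v := reflect.ValueOf([]byte("hi"))
        _ = v.Bytes() // inlineable: unnamed []byte
        _ = v.Len()   // inlineable: slice kind
        _ = len(reflect.ValueOf("s").String()) // string length
    }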

Performance:

	Bool        1.65ns ± 0%  0.51ns ± 3%  -68.81%  (p=0.008 n=5+5)
	String      1.97ns ± 1%  0.70ns ± 1%  -64.25%  (p=0.008 n=5+5)
	Bytes       8.90ns ± 2%  0.89ns ± 1%  -89.95%  (p=0.008 n=5+5)
	NamedBytes  8.89ns ± 1%  8.88ns ± 1%     ~     (p=0.548 n=5+5)
	BytesArray  10.0ns ± 2%  10.2ns ± 1%   +1.58%  (p=0.048 n=5+5)
	SliceLen    1.97ns ± 1%  0.45ns ± 1%  -77.22%  (p=0.008 n=5+5)
	MapLen      2.62ns ± 1%  3.07ns ± 1%  +17.24%  (p=0.008 n=5+5)
	StringLen   1.96ns ± 1%  1.98ns ± 2%     ~     (p=0.151 n=5+5)
	ArrayLen    1.96ns ± 1%  2.19ns ± 1%  +11.46%  (p=0.008 n=5+5)
	SliceCap    1.76ns ± 1%  0.45ns ± 2%  -74.28%  (p=0.008 n=5+5)

There's a slight slowdown (~10-20%) for obtaining the length
of a string or map, but a substantial improvement for slices.

Performance according to encoding/json:

	CodeMarshal          555µs ± 2%   562µs ± 4%     ~     (p=0.421 n=5+5)
	MarshalBytes/32      163ns ± 1%   157ns ± 1%   -3.82%  (p=0.008 n=5+5)
	MarshalBytes/256     453ns ± 1%   447ns ± 1%     ~     (p=0.056 n=5+5)
	MarshalBytes/4096   4.10µs ± 1%  4.09µs ± 0%     ~     (p=1.000 n=5+4)
	CodeUnmarshal       3.16ms ± 2%  3.02ms ± 1%   -4.18%  (p=0.008 n=5+5)
	CodeUnmarshalReuse  2.64ms ± 3%  2.51ms ± 2%   -4.81%  (p=0.016 n=5+5)
	UnmarshalString     65.4ns ± 4%  64.1ns ± 0%     ~     (p=0.190 n=5+4)
	UnmarshalFloat64    59.8ns ± 5%  58.9ns ± 2%     ~     (p=0.222 n=5+5)
	UnmarshalInt64      51.7ns ± 1%  50.0ns ± 2%   -3.26%  (p=0.008 n=5+5)
	EncodeMarshaler     23.6ns ±11%  20.8ns ± 1%  -12.10%  (p=0.016 n=5+4)

Add all inlineable methods of Value to cmd/compile/internal/test/inl_test.go.

Change-Id: Ifc192491918af6b62f7fe3a094a5a5256bfb326d
Reviewed-on: https://go-review.googlesource.com/c/go/+/400676
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2022-04-21 23:45:36 +00:00
Joe Tsai c5edd5f616 reflect: make Value.MapRange inlineable
This allows the caller to decide whether MapIter should be
stack allocated or heap allocated based on whether it escapes.
In most cases, it does not escape and thus removes the utility
of MapIter.Reset (#46293). In fact, use of sync.Pool with MapIter
and calling MapIter.Reset is likely to be slower.
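
Typical usage, where the iterator can now stay on the stack if it
does not escape:

    it := reflect.ValueOf(map[string]int{"a": 1}).MapRange()
    for it.Next() {
        fmt.Println(it.Key(), it.Value())
    }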

Change-Id: Ic93e7d39e5dd4c83e7fca9e0bdfbbcd70777f0e1
Reviewed-on: https://go-review.googlesource.com/c/go/+/400675
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-04-18 22:14:50 +00:00
Bryan Mills d4dbad53ca Revert "cmd/compile/internal: fix test error on loong64"
This reverts CL 367043.

Reason for revert: auto-submitted prematurely, breaking tests on most builders.

Change-Id: I6da319fb042b629bcd6f549be638497a357e7d28
Reviewed-on: https://go-review.googlesource.com/c/go/+/399795
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-04-12 03:07:42 +00:00
Xiaodong Liu ace7672526 cmd/compile/internal: fix test error on loong64
For the TestLogOpt test case, add loong64 support when testing the
host architecture and OS.

For TestIntendedInlining, note that Ctz64 is not intrinsified on loong64.

Contributors to the loong64 port are:
  Weining Lu <luweining@loongson.cn>
  Lei Wang <wanglei@loongson.cn>
  Lingqin Gong <gonglingqin@loongson.cn>
  Xiaolin Zhao <zhaoxiaolin@loongson.cn>
  Meidan Li <limeidan@loongson.cn>
  Xiaojuan Zhai <zhaixiaojuan@loongson.cn>
  Qiyuan Pu <puqiyuan@loongson.cn>
  Guoqi Chen <chenguoqi@loongson.cn>

This port has been updated to Go 1.15.6:
  https://github.com/loongson/go

Updates #46229

Change-Id: I4ca290bf725425a9a6ac2c6767a5bf4ff2339d0e
Reviewed-on: https://go-review.googlesource.com/c/go/+/367043
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Auto-Submit: Russ Cox <rsc@golang.org>
2022-04-12 02:20:36 +00:00
Josh Bleecher Snyder 4a37a1d49f cmd/compile: add runtime.funcspdelta to intended inlining test
Follow-up to CL 354133.

Suggested-by: Daniel Martí <mvdan@mvdan.cc>
Change-Id: I0d0895dfa8c2deae0dbda6e683fbe41469849145
Reviewed-on: https://go-review.googlesource.com/c/go/+/354392
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Trust: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
2021-10-06 20:30:12 +00:00
Fabio Falzoi 04f7521b0a reflect: add Value.{CanInt, CanUint, CanFloat, CanComplex}
As discussed in #47658, Value already has CanAddr and CanInterface to
test whether a call to Addr or Interface, respectively, would panic.
Therefore we add CanInt, CanUint, CanFloat and CanComplex to make it
just as easy to test whether calling Int, Uint, Float or Complex would
panic.
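
For example:

    v := reflect.ValueOf(3.14)
    if v.CanFloat() {
        _ = v.Float() // guaranteed not to panic
    }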

Fixes #47658

Change-Id: I58b77d77e6eec9f34234e985f631eab72b5b935e
Reviewed-on: https://go-review.googlesource.com/c/go/+/352131
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: David Chase <drchase@google.com>
Trust: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2021-09-27 21:31:14 +00:00
Josh Bleecher Snyder f0c79caa13 runtime: move entry method from _func to funcInfo
This will be required when we change from storing entry PCs in _func
to entry PC offsets, which are relative to the containing module.

Notably, almost all uses of the entry method were already called
on a funcInfo. Only Func.Entry incurs the additional module
lookup cost.

This makes Entry considerably slower, but it is probably
still fast enough in absolute terms that it is OK.

name             old time/op  new time/op  delta
Func/Name-8      8.86ns ± 0%  8.33ns ± 2%    -5.92%  (p=0.000 n=12+13)
Func/Entry-8     0.64ns ± 0%  2.62ns ±36%  +310.07%  (p=0.000 n=14+15)
Func/FileLine-8  24.5ns ± 0%  25.0ns ± 4%    +2.21%  (p=0.015 n=14+13)

Change-Id: Ia2d5de5f2f83fab334f1875452b9e8e87651d340
Reviewed-on: https://go-review.googlesource.com/c/go/+/351461
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-09-27 20:59:03 +00:00
Josh Bleecher Snyder 61a0a70113 runtime: convert _func.entry to a method
A subsequent change will alter the semantics of _func.entry.
To make that change obvious and clear, change _func.entry to a method,
and rename the field to _func.entryPC.

Change-Id: I05d66b54d06c5956d4537b0729ddf4290c3e2635
Reviewed-on: https://go-review.googlesource.com/c/go/+/351460
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-09-27 20:58:49 +00:00
Joe Tsai f371b30f32 unicode/utf8: add AppendRune
AppendRune appends the UTF-8 encoding of a rune to a []byte.
It is generally more user-friendly than EncodeRune.

    EncodeASCIIRune-4     2.35ns ± 2%
    EncodeJapaneseRune-4  4.60ns ± 2%
    AppendASCIIRune-4     0.30ns ± 3%
    AppendJapaneseRune-4  4.70ns ± 2%

The ASCII case is written to be inlineable.
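
Typical usage:

    b := []byte("abc")
    b = utf8.AppendRune(b, '世') // appends the rune's UTF-8 encoding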

Fixes #47609

Change-Id: If4f71eedffd2bd4ef0d7f960cb55b41c637eec54
Reviewed-on: https://go-review.googlesource.com/c/go/+/345571
Trust: Joe Tsai <joetsai@digital-static.net>
Reviewed-by: Rob Pike <r@golang.org>
Run-TryBot: Rob Pike <r@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
2021-08-28 01:49:50 +00:00
Cherry Mui e0e9fb8aff [dev.typeparams] runtime: simplify defer record allocation
Now that deferred functions are always argumentless and defer
records no longer carry arguments, defer records can be fixed size
(just the _defer struct). This allows us to simplify the allocation
of defer records; specifically, we can remove the defer classes and
the pools of different-sized defers.

Change-Id: Icc4b16afc23b38262ca9dd1f7369ad40874cf701
Reviewed-on: https://go-review.googlesource.com/c/go/+/326062
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
2021-06-11 18:33:07 +00:00
Cherry Mui 12b37b713f [dev.typeparams] runtime: remove variadic defer/go calls
Now that defer/go wrapping is used, deferred/go'd functions are
always argumentless. Remove the code handling arguments.

This CL is mostly removing the fallback code path. There are more
cleanups to be done, in later CLs.

Change-Id: I87bfd3fb2d759fbeb6487b8125c0f6992863d6e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/325915
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2021-06-08 19:46:10 +00:00
Cherry Mui 626e89c261 [dev.typeparams] runtime: replace funcPC with internal/abi.FuncPCABIInternal
At this point all funcPC references are ABIInternal functions.
Replace with the intrinsics.

Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2021-05-21 22:40:36 +00:00
Meng Zhuo 7beb988a3b runtime: use wyhash for memhashFallback on 64-bit platforms
wyhash is a general hash function that:

1. Is about 8-70% faster than the internal maphash
2. Passes the Smhasher, BigCrush and PractRand tests

name                  old time/op    new time/op    delta
Hash5                   28.9ns ± 0%    30.0ns ± 0%   +3.77%  (p=0.000 n=9+10)
Hash16                  32.4ns ± 0%    30.2ns ± 0%   -6.74%  (p=0.000 n=10+8)
Hash64                  52.4ns ± 0%    43.4ns ± 0%  -17.20%  (p=0.000 n=9+10)
Hash1024                 415ns ± 0%     258ns ± 2%  -37.89%  (p=0.000 n=10+10)
Hash65536               24.9µs ± 0%    14.6µs ± 0%  -41.22%  (p=0.000 n=9+9)
HashStringSpeed         50.2ns ± 4%    47.8ns ± 4%   -4.88%  (p=0.000 n=10+10)
HashBytesSpeed          90.1ns ± 7%    78.3ns ± 4%  -13.06%  (p=0.000 n=10+10)
HashInt32Speed          33.3ns ± 6%    33.6ns ± 4%     ~     (p=0.071 n=10+10)
HashInt64Speed          32.7ns ± 3%    34.0ns ± 3%   +4.05%  (p=0.000 n=9+10)
HashStringArraySpeed     131ns ± 2%     117ns ± 5%  -10.32%  (p=0.000 n=9+10)
FastrandHashiter        72.2ns ± 1%    75.7ns ±10%   +4.87%  (p=0.019 n=8+10)

name                  old speed      new speed      delta
Hash5                  173MB/s ± 0%   167MB/s ± 0%   -3.63%  (p=0.000 n=9+10)
Hash16                 494MB/s ± 0%   530MB/s ± 0%   +7.23%  (p=0.000 n=10+8)
Hash64                1.22GB/s ± 0%  1.48GB/s ± 0%  +20.77%  (p=0.000 n=9+10)
Hash1024              2.47GB/s ± 0%  3.97GB/s ± 2%  +61.01%  (p=0.000 n=8+10)
Hash65536             2.64GB/s ± 0%  4.48GB/s ± 0%  +70.13%  (p=0.000 n=9+9)

Change-Id: I76af4e2bc1995a18149d11983ea8a149c132865e
Reviewed-on: https://go-review.googlesource.com/c/go/+/279612
Trust: Meng Zhuo <mzh@golangcn.org>
Run-TryBot: Meng Zhuo <mzh@golangcn.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2021-04-12 02:29:32 +00:00
Josh Bleecher Snyder cc4e6160a7 net: use mid-stack inlining with ReadFromUDP to avoid an allocation
This commit rewrites ReadFromUDP to be mid-stack inlined
and pass a UDPAddr for lower layers to fill in.

This lets performance-sensitive clients avoid an allocation.
It requires some care on their part to prevent the UDPAddr
from escaping, but it is now possible.
The UDPAddr trivially does not escape in the benchmark,
as it is immediately discarded.
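
A caller-side sketch (conn setup and the handle callback are
hypothetical):

    func serve(conn *net.UDPConn, handle func([]byte, *net.UDPAddr)) error {
        buf := make([]byte, 1500)
        for {
            n, addr, err := conn.ReadFromUDP(buf)
            if err != nil {
                return err
            }
            // If addr does not escape via handle, it can now be
            // stack allocated.
            handle(buf[:n], addr)
        }
    }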

name                  old time/op    new time/op    delta
WriteToReadFromUDP-8    17.2µs ± 6%    17.1µs ± 5%     ~     (p=0.387 n=9+9)

name                  old alloc/op   new alloc/op   delta
WriteToReadFromUDP-8      112B ± 0%       64B ± 0%  -42.86%  (p=0.000 n=10+10)

name                  old allocs/op  new allocs/op  delta
WriteToReadFromUDP-8      3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=10+10)

Updates #43451

Co-authored-by: Filippo Valsorda <filippo@golang.org>
Change-Id: I1f9d2ab66bd7e4eff07fe39000cfa0b45717bd13
Reviewed-on: https://go-review.googlesource.com/c/go/+/291509
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Trust: Filippo Valsorda <filippo@golang.org>
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Trust: Jason A. Donenfeld <Jason@zx2c4.com>
2021-03-15 19:46:51 +00:00
Russ Cox 37f138df6b [dev.regabi] cmd/compile: split out package test [generated]
[git-generate]
cd src/cmd/compile/internal/gc
rf '
	mv bench_test.go constFold_test.go dep_test.go \
		fixedbugs_test.go iface_test.go float_test.go global_test.go \
		inl_test.go lang_test.go logic_test.go \
		reproduciblebuilds_test.go shift_test.go ssa_test.go \
		truncconst_test.go zerorange_test.go \
		cmd/compile/internal/test
'
mv testdata ../test

Change-Id: I041971b7e9766673f7a331679bfe1c8110dcda66
Reviewed-on: https://go-review.googlesource.com/c/go/+/279480
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2020-12-23 06:40:04 +00:00