This commit does not change the semantics of the Call method. Its
purpose is to avoid duplication of code by making PrepareCall available
for separate use by the wasm backend.
Updates #18892
Change-Id: I04a3098f56ebf0d995791c5375dd4c03b6a202a3
Reviewed-on: https://go-review.googlesource.com/103275
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Stores to auto tmp variables can be hoisted to places
where the line numbers make debugging look "jumpy".
Turning those instructions into ones with is_stmt = 0 in
the DWARF (accomplished by marking ssa nodes with NotStmt)
makes debugging look better while still attributing the
instructions with the correct line number.
The same is true for certain register allocator spills and
reloads.
Change-Id: I97a394eb522d4911cc40b4bf5bf76d3d7221f6c0
Reviewed-on: https://go-review.googlesource.com/98415
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Add IsStmt information to src.lico so that suitable lines
for breakpoints (or not) can be noted, eventually for
communication to the debugger via the linker and DWARF.
The expectation is that the front end will apply statement
boundary marks because it has best information about the
input, and the optimizer will attempt to preserve these.
The exact method for placing these marks is still TBD;
ideally stopping "at" line N in unoptimized code will occur
at a point where none of the side effects of N have occurred
and all of the inputs for line N can still be observed.
The optimizer will work with the same markings supplied
for unoptimized code.
It is a goal that non-optimizing compilation should conserve
statement marks.
The optimizer will also use the not-a-statement annotation
to indicate instructions that have a line number (for
profiling purposes) but should not be the target of
debugger step, next, or breakpoints. Because instructions
marked as statements are sometimes removed, a third value
indicates that a position (instruction) can serve as a
statement if the optimizer removes the current instruction
marked as a statement for the same line. The optimizer
should attempt to conserve statement marks, but it is not
a bug if some are lost.
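For concreteness, a minimal sketch of the three-valued marking described
above (the names are illustrative, not the actual cmd/internal/src API):

type stmtMark uint8

const (
	isStmt      stmtMark = iota // a statement boundary; a good place to stop
	notStmt                     // has a line number for profiling, but is not a step/breakpoint target
	defaultStmt                 // may serve as the statement if the marked instruction for this line is removed
)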
Includes changes to html output for GOSSAFUNC to indicate
not-default is-a-statement with bold and not-a-statement
with strikethrough.
Change-Id: Ia22c9a682f276e2ca2a4ef7a85d4b6ebf9c62b7f
Reviewed-on: https://go-review.googlesource.com/93663
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The ssa.Func has a Type field that is described as
the function signature type.
It never gets assigned a value and remains nil.
This leads to the "<T>" printed representation of the signature.
Given this function declaration:
func foo(x int, f func() string) (int, error)
GOSSAFUNC printed it as below:
compiling foo
foo <T>
After this change:
compiling foo
foo func(int, func() string) (int, error)
Change-Id: Iec5eec8aac5c76ff184659e30f41b2f5fe86d329
Reviewed-on: https://go-review.googlesource.com/102375
Run-TryBot: Iskander Sharipov <iskander.sharipov@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Instead of creating a new &nodfp expression for every recover() call,
or a new nodpc variable for every function instrumented by the race
detector, this CL introduces two new uintptr-typed pseudo-variables
callerSP and callerPC. These pseudo-variables act just like calls to
the runtime's getcallersp() and getcallerpc() functions.
For consistency, change runtime.gorecover's builtin stub's parameter
type from "*int32" to "uintptr".
Passes toolstash-check, but toolstash-check -race fails because of
register allocator changes.
Change-Id: I985d644653de2dac8b7b03a28829ad04dfd4f358
Reviewed-on: https://go-review.googlesource.com/99416
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The first word of an interface is a pointer, but for the purposes
of GC we don't need to treat it as such.
1. If it is a non-empty interface, the pointer points to an itab
which is always in persistentalloc space.
2. If it is an empty interface, the pointer points to a _type.
a. If it is a compile-time-allocated type, it points into
the read-only data section.
b. If it is a reflect-allocated type, it points into the Go heap.
Reflect is responsible for keeping a reference to
the underlying type so it won't be GCd.
If we ever have a moving GC, we need to change this for 2b (as
well as scan itabs to update their itab._type fields).
Write barriers on the first word of interfaces have already been removed.
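For reference, a sketch of the two interface layouts being discussed,
mirroring the shape of the runtime's iface/eface structs (field names
here are illustrative):

type iface struct {
	tab  unsafe.Pointer // *itab: always persistentalloc'd, never in the Go heap
	data unsafe.Pointer
}

type eface struct {
	typ  unsafe.Pointer // *_type: read-only data (2a) or a reflect-allocated type (2b)
	data unsafe.Pointer
}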
Change-Id: I643e91d7ac4de980ac2717436eff94097c65d959
Reviewed-on: https://go-review.googlesource.com/97518
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Not a big improvement, but does help edge cases like the SSA package.
Change-Id: I40e531110b97efd5f45955be477fd0f4faa8d545
Reviewed-on: https://go-review.googlesource.com/92396
Run-TryBot: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Completely redesign and reimplement location list generation to be more
efficient, and hopefully not too hard to understand.
RegKills are gone. Instead of using the regalloc's liveness
calculations, redo them using the Ops' clobber information. Besides
saving a lot of Values, this avoids adding RegKills to blocks that would
be empty otherwise, which was messing up optimizations. This does mean
that it's much harder to tell whether the generation process is buggy
(there's nothing to cross-check it with), and there may be disagreements
with GC liveness. But the performance gain is significant, and it's nice
not to be messing with earlier compiler phases.
The intermediate representations are gone. Instead of producing
ssa.BlockDebugs, then dwarf.LocationLists, and then finally real
location lists, go directly from the SSA to a (mostly) real location
list. Because the SSA analysis happens before assembly, it stores
encoded block/value IDs where PCs would normally go. It would be easier
to do the SSA analysis after assembly, but I didn't want to retain the
SSA just for that.
Generation proceeds in two phases: first, it traverses the function in
CFG order, storing the state of the block at the beginning and end. End
states are used to produce the start states of the successor blocks. In
the second phase, it traverses in program text order and produces the
location lists. The processing in the second phase is redundant, but
much cheaper than storing the intermediate representation. It might be
possible to combine the two phases somewhat to take advantage of cases
where the CFG matches the block layout, but I haven't tried.
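A highly simplified sketch of the two-phase scheme, using stand-in types
and helpers (not the real ssa/debug.go code):

type block struct {
	id    int
	preds []*block
}

// state stands in for the per-variable location information carried at a
// block boundary.
type state map[string]string

// startState derives a block's start state from its predecessors' end
// states (the real code reconciles disagreements; this sketch just copies).
func startState(b *block, end map[int]state) state {
	s := state{}
	for _, p := range b.preds {
		for v, loc := range end[p.id] {
			s[v] = loc
		}
	}
	return s
}

func buildLocationLists(cfgOrder, textOrder []*block, process func(*block, state) state, emit func(*block, state)) {
	// Phase 1: traverse in CFG order, recording each block's end state.
	end := map[int]state{}
	for _, b := range cfgOrder {
		end[b.id] = process(b, startState(b, end))
	}
	// Phase 2: traverse in program-text order, redoing the per-block work
	// from the recorded start states and emitting location-list entries
	// keyed by encoded block/value IDs (translated to PCs only later).
	for _, b := range textOrder {
		emit(b, process(b, startState(b, end)))
	}
}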
Location lists are finalized by adding a base address selection entry,
translating each encoded block/value ID to a real PC, and adding the
terminating zero entry. This probably won't work on OSX, where dsymutil
will choke on the base address selection. I tried emitting CU-relative
relocations for each address, and it was *very* bad for performance --
it uses more memory storing all the relocations than it does for the
actual location list bytes. I think I'm going to end up synthesizing the
relocations in the linker only on OSX, but TBD.
TestNexting needs updating: with more optimizations working, the
debugger doesn't stop on the continue (line 88) any more, and the test's
duplicate suppression kicks in. Also, dx and dy live a little longer
now, but they have the correct values.
Change-Id: Ie772dfe23a4e389ca573624fac4d05401ae32307
Reviewed-on: https://go-review.googlesource.com/89356
Run-TryBot: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Now that the buffered write barrier is implemented for all
architectures, we can remove the old eager write barrier
implementation. This CL removes the implementation from the runtime,
support in the compiler for calling it, and updates some compiler
tests that relied on the old eager barrier support. It also makes sure
that all of the useful comments from the old write barrier
implementation still have a place to live.
Fixes #22460.
Updates #21640 since this fixes the layering concerns of the write
barrier (but not the other things in that issue).
Change-Id: I580f93c152e89607e0a72fe43370237ba97bae74
Reviewed-on: https://go-review.googlesource.com/92705
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Turn off the append-to-itself optimization when optimizations are turned off.
This optimization triggered a bug when doing
s = append(s, s)
where we write to the leftmost s before reading the rightmost s.
Update #17039
Change-Id: I21996532d20a75db6ec8d49db50cb157a1360b80
Reviewed-on: https://go-review.googlesource.com/81816
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The ssa backend is aggressive about placing constants and
certain other values in the Entry block. It's implausible
that the original line numbers for these constants make
any sort of sense when they appear to a user stepping in a
debugger, and they're also not that useful in dumps since
entry-block instructions tend to be constants (i.e.,
unlikely to be the cause of a crash).
Therefore, use src.NoXPos for any values that are explicitly
inserted into a function's entry block.
Passes all tests, including ssa/debug_test.go with both
gdb and a fairly recent dlv. Hand-verified that it solves
the reported problem; constructed a test that reproduced
a problem, and fixed it.
Modified test harness to allow injection of slightly more
interesting inputs.
Fixes #22558.
Change-Id: I4476927067846bc4366da7793d2375c111694c55
Reviewed-on: https://go-review.googlesource.com/81215
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This changes the assembly language output to use the
innermost (instead of outermost) position for line
number and file.
The file is printed separately, only when it changes,
to remove redundant and space-consuming noise from the
output.
Unknown positions have line number "?".
The output format was changed slightly to make it
easier to read.
One source of gratuitous variation in debugging output was
removed.
Change-Id: I1fd9c8b0ddd82766488582fb684dce4b04f35723
Reviewed-on: https://go-review.googlesource.com/78895
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Fixes #22703
The fix was already done by Cherry for defer/go of an interface call (CL 23820).
We just need to do it everywhere.
Change-Id: I0115d22e443931fe1bcce44c93c4d0770b5fd268
Reviewed-on: https://go-review.googlesource.com/77450
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Since the slice of names is almost exactly the same as what stringer is
already generating for us.
Passes toolstash -cmp on std cmd.
Change-Id: I3f1e95efc690c0108236689e721627f00f79a461
Reviewed-on: https://go-review.googlesource.com/77190
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Now possible, since stringer just got the -trimprefix flag added.
While at it, simplify a few Op stringifications since we can now use %v,
and no longer have to worry about o<len(opnames).
Passes toolstash -cmp on std cmd.
Fixes #15462.
Change-Id: Icdcde0b0a5eb165d18488918175024da274f782b
Reviewed-on: https://go-review.googlesource.com/76790
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Dave Cheney <dave@cheney.net>
The linker will refuse to work on objects larger than
2e9 bytes (see issue #9862 for why).
With this change, the compiler gives a useful error
message explaining this, instead of leaving it to the
linker to give a cryptic message later.
Fixes #1700.
Change-Id: I3933ce08ef846721ece7405bdba81dff644cb004
Reviewed-on: https://go-review.googlesource.com/74330
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Plain blocks that contain only uninteresting instructions
(that do not have reliable Pos information themselves)
need to have their Pos left unset so that they can
inherit it from their successors. The "uninteresting"
test was not properly applied and not properly defined.
OpFwdRef values do not appear in the ssa.html debugging output,
but they are still present at the time this test runs,
so they need to be part of the "uninteresting" test.
Fixes #22365.
Change-Id: I99e5b271acd8f6bcfe0f72395f905c7744ea9a02
Reviewed-on: https://go-review.googlesource.com/74252
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Currently, assigning a []T where T is a go:notinheap type generates an
unnecessary write barrier for storing the slice pointer.
Fix this by teaching HasHeapPointer that this type does not
have a heap pointer, and tweaking the lowering of slice assignments so
the pointer store retains the correct type rather than simply lowering
it to a *uint8 store.
Change-Id: I8bf7c66e64a7fefdd14f2bd0de8a5a3596340bab
Reviewed-on: https://go-review.googlesource.com/76027
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
A statement like
foo = bar + qux
might compile to
AX := AX + BX
resulting in a regkill for AX before this instruction.
The buggy behavior is to kill AX "at" this instruction,
before it has executed. (Code generation of no-instruction
values like RegKills applies their effects at the
next actual instruction emitted).
However, bar is still associated with AX until after the
instruction executes, so the effect of the regkill must
occur at the boundary between this instruction and the
next. Similarly, the new value bound to AX is not visible
until this instruction executes (and in the case of values
that require multiple instructions in code generation, until
all of them have executed).
The ranges are adjusted so that a value's start occurs
at the next following instruction after its evaluation,
and the end occurs after (execution of) the first
instruction following the end of the lifetime as a value.
(Notice the asymmetry; the entire value must be finished
before it is visible, but execution of a single instruction
invalidates it. However, the value *is* visible before that
next instruction executes.)
The test was adjusted to make it insensitive to the result
numbering for variables printed by gdb, since that is not
relevant to the test and makes the differences introduced
by small changes larger than necessary/useful.
The test was also improved to present variable probes
more intuitively, and to allow explicit indication
of "this variable was optimized out".
Change-Id: I39453eead8399e6bb05ebd957289b112d1100c0e
Reviewed-on: https://go-review.googlesource.com/74090
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
We already do this for floor/ceil, but RoundToEven was added later.
Intrinsify it also.
name old time/op new time/op delta
RoundToEven-8 3.00ns ± 1% 0.68ns ± 2% -77.34% (p=0.000 n=10+10)
Change-Id: Ib158cbceb436c6725b2d9353a526c5c4be19bcad
Reviewed-on: https://go-review.googlesource.com/74852
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
Reviewed-by: Keith Randall <khr@golang.org>
Handle make(map[any]any) and make(map[any]any, hint) where
hint <= BUCKETSIZE specially, to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.
If the hint is smaller than or equal to BUCKETSIZE,
overLoadFactor(hint, 0) is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
makehmap function will allocate hmap and initialize hmap's
hash0 field.
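As an illustration, both of the maps below fall under this fast path
(BUCKETSIZE is 8), so neither call needs makemap when the hmap can be
stack allocated:

func smallMaps() int {
	m1 := make(map[string]int)    // no hint
	m2 := make(map[string]int, 8) // hint <= BUCKETSIZE
	m1["a"] = 1
	m2["b"] = 2
	return len(m1) + len(m2)
}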
Reduces the size of the godoc binary by ~36kb.
AMD64
name old time/op new time/op delta
NewEmptyMap 16.6ns ± 2% 5.5ns ± 2% -66.72% (p=0.000 n=10+10)
NewSmallMap 64.8ns ± 1% 56.5ns ± 1% -12.75% (p=0.000 n=9+10)
Updates #6853
Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The new RoundToEven function can be implemented as a single FIDBR
instruction on s390x.
name old time/op new time/op delta
RoundToEven 5.32ns ± 1% 0.86ns ± 1% -83.86% (p=0.000 n=10+10)
Change-Id: Iaf597e57a0d1085961701e3c75ff4f6f6dcebb5f
Reviewed-on: https://go-review.googlesource.com/74350
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This adds support for math.Abs and math.Copysign as intrinsics on ppc64x.
New instruction FCPSGN is added to generate fcpsgn. Some new
rules are added to improve the int<->float conversions that are
generated mainly due to the Float64bits and Float64frombits in
the math package. PPC64.rules is also modified as suggested
in the review for CL 63290.
Improvements:
benchmark old ns/op new ns/op delta
BenchmarkAbs-16 1.12 0.69 -38.39%
BenchmarkCopysign-16 1.30 0.93 -28.46%
BenchmarkNextafter32-16 9.34 8.05 -13.81%
BenchmarkFrexp-16 8.81 7.60 -13.73%
Others that used Copysign also saw smaller improvements.
I attempted to make this work using rules since that
seems to be preferred, but due to the use of Float64bits and
Float64frombits in these functions, several rules had to be added and
even then not all cases were matched. Using rules became too
complicated and seemed too fragile for these.
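For reference, these functions are defined in the math package in terms of
bit manipulation, which is where the int<->float conversions come from; a
sketch equivalent to the standard-library definitions (not the ppc64x
lowering itself):

func abs(x float64) float64 {
	return math.Float64frombits(math.Float64bits(x) &^ (1 << 63))
}

func copysign(x, y float64) float64 {
	const sign = 1 << 63
	return math.Float64frombits(math.Float64bits(x)&^sign | math.Float64bits(y)&sign)
}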
Updates #21390
Change-Id: Ia265da9a18355e08000818a4fba1a40e9e031995
Reviewed-on: https://go-review.googlesource.com/67130
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Keith Randall <khr@golang.org>
Most write barrier calls are inserted by SSA, but copy and append are
lowered to runtime.typedslicecopy during walk. Fix these to set
Func.WBPos and emit the "write barrier" warning, as done for the write
barriers inserted by SSA. As part of this, we refactor setting WBPos
and emitting this warning into the frontend so it can be shared by
both walk and SSA.
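For example, a copy like the one below, whose element type contains
pointers, is lowered to runtime.typedslicecopy during walk; it now sets
Func.WBPos and emits the "write barrier" warning just like the barriers
inserted by SSA:

func copyPtrs(dst, src []*int) int {
	return copy(dst, src) // lowered to runtime.typedslicecopy
}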
Change-Id: I5fe9997d9bdb55e03e01dd58aee28908c35f606b
Reviewed-on: https://go-review.googlesource.com/73411
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The current go:nowritebarrierrec checker has two problems that limit
its coverage:
1. It doesn't understand that systemstack calls its argument, which
means there are several cases where we fail to detect prohibited write
barriers.
2. It only observes calls in the AST, so calls constructed during
lowering by SSA aren't followed.
This CL completely rewrites this checker to address these issues.
The current checker runs entirely after walk and uses visitBottomUp,
which introduces several problems for checking across systemstack.
First, visitBottomUp itself doesn't understand systemstack calls, so
the callee may be ordered after the caller, causing the checker to
fail to propagate constraints. Second, many systemstack calls are
passed a closure, which is quite difficult to resolve back to the
function definition after transformclosure and walk have run. Third,
visitBottomUp works exclusively on the AST, so it can't observe calls
created by SSA.
To address these problems, this commit splits the check into two
phases and rewrites it to use a call graph generated during SSA
lowering. The first phase runs before transformclosure/walk and simply
records systemstack arguments when they're easy to get. Then, it
modifies genssa to record static call edges at the point where we're
lowering to Progs (which is the latest point at which position
information is conveniently available). Finally, the second phase runs
after all functions have been lowered and uses a direct BFS walk of
the call graph (combining systemstack calls with static calls) to find
prohibited write barriers and construct nice error messages.
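A minimal sketch of that final walk, assuming a simple adjacency-list call
graph (the checker's real data structures differ):

type funcNode struct {
	name    string
	callees []*funcNode // static call edges recorded in genssa, plus systemstack targets
	hasWB   bool        // a write barrier was emitted while lowering this function
}

// findProhibitedWBs walks the call graph breadth-first from the
// go:nowritebarrierrec roots and reports every reachable function that
// still contains a write barrier.
func findProhibitedWBs(roots []*funcNode) []*funcNode {
	var bad []*funcNode
	seen := map[*funcNode]bool{}
	queue := append([]*funcNode(nil), roots...)
	for len(queue) > 0 {
		fn := queue[0]
		queue = queue[1:]
		if seen[fn] {
			continue
		}
		seen[fn] = true
		if fn.hasWB {
			bad = append(bad, fn)
		}
		queue = append(queue, fn.callees...)
	}
	return bad
}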
Fixes #22384.
For #22460.
Change-Id: I39668f7f2366ab3c1ab1a71eaf25484d25349540
Reviewed-on: https://go-review.googlesource.com/72773
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
* replace a copy of IsMethod with a call of it.
* a few more switches where they simplify the code.
* prefer composite literals over "n := new(...); n.x = y; ...".
* use defers to get rid of three goto labels.
* rewrite updateHasCall into two funcs to remove gotos.
Passes toolstash-check on std cmd.
Change-Id: Icb5442a89a87319ef4b640bbc5faebf41b193ef1
Reviewed-on: https://go-review.googlesource.com/72070
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Use them to replace if/else chains with at least three comparisons,
where the code becomes clearly simpler.
Passes toolstash -cmp on std cmd.
Change-Id: Ic98aa3905944ddcab5aef5f9d9ba376853263d94
Reviewed-on: https://go-review.googlesource.com/70934
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
These functions are identical to math/bits.TrailingZeros{32,64}, which
are already intrinsified on ppc64.
Change-Id: If7ee57e7afe53154874f4b66bacdb6237806128a
Reviewed-on: https://go-review.googlesource.com/70350
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
In order to improve the line numbering for debuggers,
it's necessary to trace lines through compilation.
This makes it (much) easier to follow.
The format of the last column of the ssa.html output was
also changed to reduce the spamminess of the file name,
which is usually the same and makes it far harder to read
instructions and line numbers. The column was also made wider
and able to break words when wrapping (long path names can still
push off the end otherwise; side-to-side scrolling was
tried but was more annoying than the occasional wrapped
line).
Sample output now, where [...] is elision for sake of making
the CL character-counter happy -- and the (##) line numbers
are rendered in italics and a smaller font (11 point) under
control of a CSS class "line-number".
genssa
# /Users/drchase/[...]/ssa/testdata/hist.go
00000 (35) TEXT "".main(SB)
00001 (35) FUNCDATA $0, gclocals·7be4bb[...]1e8b(SB)
00002 (35) FUNCDATA $1, gclocals·9ab98a[...]4568(SB)
v920 00003 (36) LEAQ ""..autotmp_31-640(SP), DI
v858 00004 (36) XORPS X0, X0
v6 00005 (36) LEAQ -48(DI), DI
v6 00006 (36) DUFFZERO $277
v576 00007 (36) LEAQ ""..autotmp_31-640(SP), AX
v10 00008 (36) TESTB AX, (AX)
b1 00009 (36) JMP 10
and from an earlier phase:
b18: ← b17
v242 (47) = Copy <mem> v238
v243 (47) = VarKill <mem> {.autotmp_16} v242
v244 (48) = Addr <**bufio.Scanner> {scanner} v2
v245 (48) = Load <*bufio.Scanner> v244 v243
[...]
v279 (49) = Store <mem> {int64} v277 v276 v278
v280 (49) = Addr <*error> {.autotmp_18} v2
v281 (49) = Load <error> v280 v279
v282 (49) = Addr <*error> {err} v2
v283 (49) = VarDef <mem> {err} v279
v284 (49) = Store <mem> {error} v282 v281 v283
v285 (47) = VarKill <mem> {.autotmp_18} v284
v286 (47) = VarKill <mem> {.autotmp_17} v285
v287 (50) = Addr <*error> {err} v2
v288 (50) = Load <error> v287 v286
v289 (50) = NeqInter <bool> v288 v51
If v289 → b21 b22 (line 50)
Change-Id: I3f46310918f965761f59e6f03ea53067237c28a8
Reviewed-on: https://go-review.googlesource.com/69591
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
If the stack frame is too large, abort immediately.
We used to generate code first, then abort.
In issue 22200, generating code raised a panic
so we got an ICE instead of an error message.
Change the max frame size to 1GB (from 2GB).
Stack frames between 1.1GB and 2GB didn't work anyway;
the pcln table generation would have failed and generated an ICE.
Fixes #22200
Change-Id: I1d918ab27ba6ebf5c87ec65d1bccf973f8c8541e
Reviewed-on: https://go-review.googlesource.com/69810
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Add a compiler intrinsic for getcallersp so we are able to get
rid of the argument (not done in this CL).
Change-Id: Ic38fda1c694f918328659ab44654198fb116668d
Reviewed-on: https://go-review.googlesource.com/69350
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
This attempts to choose better values for values that are
rematerialized (uses the XPos of the consumer, not the
original) and for unconditional branches (uses the last
assigned XPos in the block).
The JMP branches seem to sometimes end up with a PC in the
destination block, I think because of register movement
or rematerialization that gets placed in predecessor blocks.
This may be acceptable because (eyeball-empirically) that is
often the line number of the target block, so the line number
flow is correct.
Added proper test, that checks both -N -l and regular compilation.
The test is also capable (for gdb, delve soon) of tracking
variable printing based on comments in the source code.
There's substantial room for improvement in debugger behavior.
Updates #21098.
Change-Id: I13abd48a39141583b85576a015f561065819afd0
Reviewed-on: https://go-review.googlesource.com/50610
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
ORETJMP doesn't need an ONAME if we just set the target method on Sym
instead of Left. Conveniently, this is where fmt.go was looking for it
anyway.
Change the iface parameter and global compiling_wrappers to bool.
Passes toolstash-check.
Change-Id: I5333f8bcb4e06bf8161808041125eb95c439aafe
Reviewed-on: https://go-review.googlesource.com/68252
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Robert Griesemer <gri@golang.org>
Use entire inlining call stack to decide whether two panic calls
can be merged. We used to merge panic calls when only the leaf
line numbers matched, but that leads to places higher up the call
stack being merged incorrectly.
Fixes #22083
Change-Id: Ia41400a80de4b6ecf3e5089abce0c42b65e9b38a
Reviewed-on: https://go-review.googlesource.com/67632
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
The information that's used to generate DWARF location lists is very
ssa.Value centric; it uses Values as start and end coordinates to define
ranges. That mostly works fine, but control flow instructions don't come
from Values, so the ranges couldn't cover them.
Control flow instructions are generated when the SSA representation is
converted to assembly, so that's the best place to extend the ranges
to cover them. (Before that, there's nothing to refer to, and afterward
the boundaries between blocks have been lost.) That requires block
information in the debugInfo type, which then flows down to make
everything else awkward. On the plus side, there's a little less copying
slices around than there used to be, so it should be a little faster.
Previously, the ranges for empty blocks were not very meaningful. That
was fine, because they had no Values to cover, so no debug information
was generated for them. But they do have control flow instructions
(that's why they exist) and so now it's important that the information
be correct. Introduce two sentinel values, BlockStart and BlockEnd, that
denote the boundary of a block, even if the block is empty. BlockEnd
replaces the previous SurvivedBlock flag.
There's one more problem: the last instruction in the function will be a
control flow instruction, so any live ranges need to be extended past
it. But there's no instruction after it to use as the end of the range.
Instead, leave the EndProg field of those ranges as nil and fix it up to
point to past the end of the assembled text at the very last moment.
Change-Id: I81f884020ff36fd6fe8d7888fc57c99412c4245b
Reviewed-on: https://go-review.googlesource.com/63010
Reviewed-by: Alessandro Arzilli <alessandro.arzilli@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
First step towards removing the mandatory argument for
getcallerpc, which solves certain problems for the runtime.
This might also slightly improve performance.
The intrinsic is enabled on 386, amd64, and amd64p32;
the runtime asm implementation is removed on those architectures.
The now-superfluous argument remains in getcallerpc's signature
(removing it is left for a future CL; non-386/amd64 asm funcs ignore it).
Added getcallerpc to the "not a real function" test
in dcl.go; that story is a little odd with respect to
unexported functions, but that is not this CL.
Fixes #17327.
Change-Id: I5df1ad91f27ee9ac1f0dd88fa48f1329d6306c3e
Reviewed-on: https://go-review.googlesource.com/31851
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Ceil, Floor and Trunc are pre-existing intrinsics. Round is a new
function and has been added as an intrinsic in this CL. All of the
functions can be implemented as a single 'LOAD FP INTEGER'
instruction, FIDBR, on s390x.
name old time/op new time/op delta
Ceil 2.34ns ± 0% 0.85ns ± 0% -63.74% (p=0.000 n=5+4)
Floor 2.33ns ± 0% 0.85ns ± 1% -63.35% (p=0.008 n=5+5)
Round 4.23ns ± 0% 0.85ns ± 0% -79.89% (p=0.000 n=5+4)
Trunc 2.35ns ± 0% 0.85ns ± 0% -63.83% (p=0.029 n=4+4)
Change-Id: Idee7ba24a2899d12bf9afee4eedd6b4aaad3c510
Reviewed-on: https://go-review.googlesource.com/63890
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
We used to have {Arg,Auto,Extern}Symbol structs with which we wrapped
a *gc.Node or *obj.LSym before storing them in the Aux field
of an ssa.Value. This let the SSA part of the compiler distinguish
between autos and args, for example. We no longer need the wrappers
as we can query the underlying objects directly.
There was also some sloppy usage, where VarDef had a *gc.Node
directly in its Aux field, whereas the use of that variable had
that *gc.Node wrapped in an AutoSymbol. Thus the Aux fields didn't
match (using ==) when they probably should have.
This sloppy usage cleanup is the only thing in the CL that changes the
generated code - we can get rid of some more unused auto variables if
the matching happens reliably.
Removing this wrapper also lets us get rid of the varsyms cache
(which was used to prevent wrapping the same *gc.Node twice).
Change-Id: I0dedf8f82f84bfee413d310342b777316bd1d478
Reviewed-on: https://go-review.googlesource.com/64452
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
As per golint's suggestions.
Change-Id: Ie0c6ad9aa5dc69966a279562a341c7b095c47ede
Reviewed-on: https://go-review.googlesource.com/64192
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This adds rules to match the code in math/bits RotateLeft,
RotateLeft32, and RotateLeft64 to allow them to be inlined.
The rules are complicated because the code in these functions
uses different types, and the non-const versions of these
shifts generate Mask and Carry instructions that become
subexpressions during the match process.
Also adds a testcase to asm_test.go.
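The shape of code the rules must recognize is roughly the inlined body of
math/bits.RotateLeft64 (a simplified sketch):

func rotl64(x uint64, k int) uint64 {
	s := uint(k) & 63
	return x<<s | x>>(64-s)
}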
Improvement in math/bits:
BenchmarkRotateLeft-16 1.57 1.32 -15.92%
BenchmarkRotateLeft32-16 1.60 1.37 -14.37%
BenchmarkRotateLeft64-16 1.57 1.32 -15.92%
Updates #21390
Change-Id: Ib6f17669ecc9cab54f18d690be27e2225ca654a4
Reviewed-on: https://go-review.googlesource.com/59932
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Went mainly for the ones that make no sense, such as the ones
mid-sentence or after commas.
Change-Id: Ie245d2c19cc7428a06295635cf6a9482ade25ff0
Reviewed-on: https://go-review.googlesource.com/57293
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is a crude compiler pass to eliminate stores to auto variables
that are only ever written to.
Eliminates an unnecessary store to x from the following code:
func f() int {
	x := 1
	return *(&x)
}
Fixes #19765.
Change-Id: If2c63a8ae67b8c590b6e0cc98a9610939a3eeffa
Reviewed-on: https://go-review.googlesource.com/38746
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Just to get rid of lots of .Name() stutter in printf calls.
Change-Id: I86cf00b3f7b2172387a1c6a7f189c1897fab6300
Reviewed-on: https://go-review.googlesource.com/56630
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Prioritized the chunks of code with 8 or more levels of indentation.
Basically early breaks/returns and joining nested ifs.
Change-Id: I6817df1303226acf2eb904a29f2db720e4f7427a
Reviewed-on: https://go-review.googlesource.com/55630
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
We don't use it any more; remove it.
Change-Id: I76ce1a4c2e7048fdd13a37d3718b5abf39ed9d26
Reviewed-on: https://go-review.googlesource.com/44474
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
This implements trunc, floor, and ceil in the math package
as intrinsics on ppc64x. Significant improvement mainly due
to avoiding call overhead of args and return value.
BenchmarkCeil-16 5.95 0.69 -88.40%
BenchmarkFloor-16 5.95 0.69 -88.40%
BenchmarkTrunc-16 5.82 0.69 -88.14%
Updates #21390
Change-Id: I951e182694f6e0c431da79c577272b81fb0ebad0
Reviewed-on: https://go-review.googlesource.com/54654
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Carlos Eduardo Seo <cseo@linux.vnet.ibm.com>
Reviewed-by: David Chase <drchase@google.com>
gc.Sysfunc must not be called concurrently.
We set up runtime routines used by the backend
prior to doing any backend compilation.
I missed the 387 ones; fix that.
Sysfunc should have been unexported during 1.9.
I will rectify that in a subsequent CL.
Fixes #21352
Change-Id: I8386eaa1e05879c25c672b9c9fc693c938e9aeb6
Reviewed-on: https://go-review.googlesource.com/54090
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Avelino <t@avelino.xxx>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Debuggers use DWARF information to find local variables on the
stack and in registers. Prior to this CL, the DWARF information for
functions claimed that all variables were on the stack at all times.
That's incorrect when optimizations are enabled, and results in
debuggers showing data that is out of date or complete gibberish.
After this CL, the compiler is capable of representing variable
locations more accurately, and attempts to do so. Due to limitations of
the SSA backend, it's not possible to be completely correct.
There are a number of problems in the current design. One of the easier
to understand is that variable names currently must be attached to an
SSA value, but not all assignments in the source code actually result
in machine code. For example:
type myint int
var a int
b := myint(a)
and
b := (*uint64)(unsafe.Pointer(&a))
don't generate machine code because the underlying representation is the
same, so the correct value of b will not be set when the user would
expect.
Generating the more precise debug information is behind a flag,
dwarflocationlists. Because of the issues described above, setting the
flag may not make the debugging experience much better, and may actually
make it worse in cases where the variable actually is on the stack and
the more complicated analysis doesn't realize it.
A number of changes are included:
- Add a new pseudo-instruction, RegKill, which indicates that the value
in the register has been clobbered.
- Adjust regalloc to emit RegKills in the right places. Significantly,
this means that phis are mixed with StoreReg and RegKills after
regalloc.
- Track variable decomposition in ssa.LocalSlots.
- After the SSA backend is done, analyze the result and build location
lists for each LocalSlot.
- After assembly is done, update the location lists with the assembled
PC offsets, recompose variables, and build DWARF location lists. Emit the
list as a new linker symbol, one per function.
- In the linker, aggregate the location lists into a .debug_loc section.
TODO:
- currently disabled for non-X86/AMD64 because there are no data tables.
go build -toolexec 'toolstash -cmp' -a std succeeds.
With -dwarflocationlists false:
before: f02812195637909ff675782c0b46836a8ff01976
after: 06f61e8112a42ac34fb80e0c818b3cdb84a5e7ec
benchstat -geomean /tmp/220352263 /tmp/621364410
completed 15 of 15, estimated time remaining 0s (eta 3:52PM)
name old time/op new time/op delta
Template 199ms ± 3% 198ms ± 2% ~ (p=0.400 n=15+14)
Unicode 96.6ms ± 5% 96.4ms ± 5% ~ (p=0.838 n=15+15)
GoTypes 653ms ± 2% 647ms ± 2% ~ (p=0.102 n=15+14)
Flate 133ms ± 6% 129ms ± 3% -2.62% (p=0.041 n=15+15)
GoParser 164ms ± 5% 159ms ± 3% -3.05% (p=0.000 n=15+15)
Reflect 428ms ± 4% 422ms ± 3% ~ (p=0.156 n=15+13)
Tar 123ms ±10% 124ms ± 8% ~ (p=0.461 n=15+15)
XML 228ms ± 3% 224ms ± 3% -1.57% (p=0.045 n=15+15)
[Geo mean] 206ms 377ms +82.86%
name old user-time/op new user-time/op delta
Template 292ms ±10% 301ms ±12% ~ (p=0.189 n=15+15)
Unicode 166ms ±37% 158ms ±14% ~ (p=0.418 n=15+14)
GoTypes 962ms ± 6% 963ms ± 7% ~ (p=0.976 n=15+15)
Flate 207ms ±19% 200ms ±14% ~ (p=0.345 n=14+15)
GoParser 246ms ±22% 240ms ±15% ~ (p=0.587 n=15+15)
Reflect 611ms ±13% 587ms ±14% ~ (p=0.085 n=15+13)
Tar 211ms ±12% 217ms ±14% ~ (p=0.355 n=14+15)
XML 335ms ±15% 320ms ±18% ~ (p=0.169 n=15+15)
[Geo mean] 317ms 583ms +83.72%
name old alloc/op new alloc/op delta
Template 40.2MB ± 0% 40.2MB ± 0% -0.15% (p=0.000 n=14+15)
Unicode 29.2MB ± 0% 29.3MB ± 0% ~ (p=0.624 n=15+15)
GoTypes 114MB ± 0% 114MB ± 0% -0.15% (p=0.000 n=15+14)
Flate 25.7MB ± 0% 25.6MB ± 0% -0.18% (p=0.000 n=13+15)
GoParser 32.2MB ± 0% 32.2MB ± 0% -0.14% (p=0.003 n=15+15)
Reflect 77.8MB ± 0% 77.9MB ± 0% ~ (p=0.061 n=15+15)
Tar 27.1MB ± 0% 27.0MB ± 0% -0.11% (p=0.029 n=15+15)
XML 42.7MB ± 0% 42.5MB ± 0% -0.29% (p=0.000 n=15+15)
[Geo mean] 42.1MB 75.0MB +78.05%
name old allocs/op new allocs/op delta
Template 402k ± 1% 398k ± 0% -0.91% (p=0.000 n=15+15)
Unicode 344k ± 1% 344k ± 0% ~ (p=0.715 n=15+14)
GoTypes 1.18M ± 0% 1.17M ± 0% -0.91% (p=0.000 n=15+14)
Flate 243k ± 0% 240k ± 1% -1.05% (p=0.000 n=13+15)
GoParser 327k ± 1% 324k ± 1% -0.96% (p=0.000 n=15+15)
Reflect 984k ± 1% 982k ± 0% ~ (p=0.050 n=15+15)
Tar 261k ± 1% 259k ± 1% -0.77% (p=0.000 n=15+15)
XML 411k ± 0% 404k ± 1% -1.55% (p=0.000 n=15+15)
[Geo mean] 439k 755k +72.01%
name old text-bytes new text-bytes delta
HelloSize 694kB ± 0% 694kB ± 0% -0.00% (p=0.000 n=15+15)
name old data-bytes new data-bytes delta
HelloSize 5.55kB ± 0% 5.55kB ± 0% ~ (all equal)
name old bss-bytes new bss-bytes delta
HelloSize 133kB ± 0% 133kB ± 0% ~ (all equal)
name old exe-bytes new exe-bytes delta
HelloSize 1.04MB ± 0% 1.04MB ± 0% ~ (all equal)
Change-Id: I991fc553ef175db46bb23b2128317bbd48de70d8
Reviewed-on: https://go-review.googlesource.com/41770
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
When the compiler decomposes a user variable, track its origin so that
it can be recomposed during DWARF generation.
Change-Id: Ia71c7f8e7f4d65f0652f1c97b0dda5d9cad41936
Reviewed-on: https://go-review.googlesource.com/50878
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
When we start tracking the mapping from Value to Prog, valueProgs will
be confusing. Disambiguate.
Change-Id: Ib3b302fedb7eb0ff1bde789d70a11656d82f0897
Reviewed-on: https://go-review.googlesource.com/50876
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Attaching positions to SB, SP, initial mem can result in
less-good line-numbering when compiled for debugging.
This "fix" also removes source position from a zero-valued
struct (but not from its fields) and from a zero-length
array constant.
This may be a general problem for constants in entry blocks.
Fixes#20367.
Change-Id: I7e9df3341be2e2f60f127d35bb31e43cdcfce9a1
Reviewed-on: https://go-review.googlesource.com/43531
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reuse the block head or preceding instruction's line number for
the register allocator's spill, fill, copy, and rematerialization
instructions; and also for phis and for no-src-pos
instructions. The assembler creates the same line number tables
for copy-predecessor-line and for no-src-pos,
but copy-predecessor produces better-looking assembly
language output with -S and with GOSSAFUNC, and does not
require changes to tests of existing assembly language.
Split "copyInto" into two cases, one for register allocation,
one for otherwise. This caused the test score line change
count to increase by one, which may reflect legitimately
useful information preserved. Without any special treatment
for copyInto, the change count increases by 21 more, from
51 to 72 (i.e., quite a lot).
There is a test; using two naive "scores" for line number
churn, the old numbering is 2x or 4x worse.
Fixes #18902.
Change-Id: I0a0a69659d30ee4e5d10116a0dd2b8c5df8457b1
Reviewed-on: https://go-review.googlesource.com/36207
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This adds math/bits intrinsics for OnesCount, Len, TrailingZeros on
ppc64x.
benchmark old ns/op new ns/op delta
BenchmarkLeadingZeros-16 4.26 1.71 -59.86%
BenchmarkLeadingZeros16-16 3.04 1.83 -39.80%
BenchmarkLeadingZeros32-16 3.31 1.82 -45.02%
BenchmarkLeadingZeros64-16 3.69 1.71 -53.66%
BenchmarkTrailingZeros-16 2.55 1.62 -36.47%
BenchmarkTrailingZeros32-16 2.55 1.77 -30.59%
BenchmarkTrailingZeros64-16 2.78 1.62 -41.73%
BenchmarkOnesCount-16 3.19 0.93 -70.85%
BenchmarkOnesCount32-16 2.55 1.18 -53.73%
BenchmarkOnesCount64-16 3.22 0.93 -71.12%
Update #18616
I also made a change to bits_test.go because when debugging some failures
the output was not quite providing the right argument information.
Change-Id: Ia58d31d1777cf4582a4505f85b11a1202ca07d3e
Reviewed-on: https://go-review.googlesource.com/41630
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Carlos Eduardo Seo <cseo@linux.vnet.ibm.com>
Reviewed-by: Keith Randall <khr@golang.org>
When package ssa was created, Type was in package gc.
To avoid circular dependencies, we used an interface (ssa.Type)
to represent type information in SSA.
In the Go 1.9 cycle, gri extricated the Type type from package gc.
As a result, we can now use it in package ssa.
Now, instead of package types depending on package ssa,
it is the other way.
This is a more sensible dependency tree,
and helps compiler performance a bit.
Though this is a big CL, most of the changes are
mechanical and uninteresting.
Interesting bits:
* Add new singleton globals to package types for the special
SSA types Memory, Void, Invalid, Flags, and Int128.
* Add two new Types, TSSA for the special types,
and TTUPLE, for SSA tuple types.
ssa.MakeTuple is now types.NewTuple.
* Move type comparison result constants CMPlt, CMPeq, and CMPgt
to package types.
* We had picked the name "types" in our rules for the handy
list of types provided by ssa.Config. That conflicted with
the types package name, so change it to "typ".
* Update the type comparison routine to handle tuples and special
types inline.
* Teach gc/fmt.go how to print special types.
* We can now eliminate ElemTypes in favor of just Elem,
and probably also some other duplicated Type methods
designed to return ssa.Type instead of *types.Type.
* The ssa tests were using their own dummy types,
and they were not particularly careful about types in general.
Of necessity, this CL switches them to use *types.Type;
it does not make them more type-accurate.
Unfortunately, using types.Type means initializing a bit
of the types universe.
This is prime for refactoring and improvement.
This shrinks ssa.Value; it now fits in a smaller size class
on 64 bit systems. This doesn't have a giant impact,
though, since most Values are preallocated in a chunk.
name old alloc/op new alloc/op delta
Template 37.9MB ± 0% 37.7MB ± 0% -0.57% (p=0.000 n=10+8)
Unicode 28.9MB ± 0% 28.7MB ± 0% -0.52% (p=0.000 n=10+10)
GoTypes 110MB ± 0% 109MB ± 0% -0.88% (p=0.000 n=10+10)
Flate 24.7MB ± 0% 24.6MB ± 0% -0.66% (p=0.000 n=10+10)
GoParser 31.1MB ± 0% 30.9MB ± 0% -0.61% (p=0.000 n=10+9)
Reflect 73.9MB ± 0% 73.4MB ± 0% -0.62% (p=0.000 n=10+8)
Tar 25.8MB ± 0% 25.6MB ± 0% -0.77% (p=0.000 n=9+10)
XML 41.2MB ± 0% 40.9MB ± 0% -0.80% (p=0.000 n=10+10)
[Geo mean] 40.5MB 40.3MB -0.68%
name old allocs/op new allocs/op delta
Template 385k ± 0% 386k ± 0% ~ (p=0.356 n=10+9)
Unicode 343k ± 1% 344k ± 0% ~ (p=0.481 n=10+10)
GoTypes 1.16M ± 0% 1.16M ± 0% -0.16% (p=0.004 n=10+10)
Flate 238k ± 1% 238k ± 1% ~ (p=0.853 n=10+10)
GoParser 320k ± 0% 320k ± 0% ~ (p=0.720 n=10+9)
Reflect 957k ± 0% 957k ± 0% ~ (p=0.460 n=10+8)
Tar 252k ± 0% 252k ± 0% ~ (p=0.133 n=9+10)
XML 400k ± 0% 400k ± 0% ~ (p=0.796 n=10+10)
[Geo mean] 428k 428k -0.01%
Removing all the interface calls helps non-trivially with CPU, though.
name old time/op new time/op delta
Template 178ms ± 4% 173ms ± 3% -2.90% (p=0.000 n=94+96)
Unicode 85.0ms ± 4% 83.9ms ± 4% -1.23% (p=0.000 n=96+96)
GoTypes 543ms ± 3% 528ms ± 3% -2.73% (p=0.000 n=98+96)
Flate 116ms ± 3% 113ms ± 4% -2.34% (p=0.000 n=96+99)
GoParser 144ms ± 3% 140ms ± 4% -2.80% (p=0.000 n=99+97)
Reflect 344ms ± 3% 334ms ± 4% -3.02% (p=0.000 n=100+99)
Tar 106ms ± 5% 103ms ± 4% -3.30% (p=0.000 n=98+94)
XML 198ms ± 5% 192ms ± 4% -2.88% (p=0.000 n=92+95)
[Geo mean] 178ms 173ms -2.65%
name old user-time/op new user-time/op delta
Template 229ms ± 5% 224ms ± 5% -2.36% (p=0.000 n=95+99)
Unicode 107ms ± 6% 106ms ± 5% -1.13% (p=0.001 n=93+95)
GoTypes 696ms ± 4% 679ms ± 4% -2.45% (p=0.000 n=97+99)
Flate 137ms ± 4% 134ms ± 5% -2.66% (p=0.000 n=99+96)
GoParser 176ms ± 5% 172ms ± 8% -2.27% (p=0.000 n=98+100)
Reflect 430ms ± 6% 411ms ± 5% -4.46% (p=0.000 n=100+92)
Tar 128ms ±13% 123ms ±13% -4.21% (p=0.000 n=100+100)
XML 239ms ± 6% 233ms ± 6% -2.50% (p=0.000 n=95+97)
[Geo mean] 220ms 213ms -2.76%
Change-Id: I15c7d6268347f8358e75066dfdbd77db24e8d0c1
Reviewed-on: https://go-review.googlesource.com/42145
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Generated hash and eq routines don't need nil checks.
Prior to this CL, this was accomplished by
temporarily incrementing the global variable disable_checknil.
However, that increment lasted only the lifetime of the
call to funccompile. After CL 41503, funccompile may
do nothing but enqueue the function for compilation,
resulting in nil checks being generated.
Fix this by adding an explicit flag to a function
indicating whether nil checks should be disabled
for that function.
While we're here, allow concurrent compilation
with the -w and -W flags, since that was needed
to investigate this issue.
Fixes #20242
Change-Id: Ib9140c22c49e9a09e62fa3cf350f5d3eff18e2bd
Reviewed-on: https://go-review.googlesource.com/42591
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Use it to ensure that dowidth is not called
from the backend on a type whose size
has not yet been calculated.
This is an alternative to CL 42016.
Change-Id: I8c7b4410ee4c2a68573102f6b9b635f4fdcf392e
Reviewed-on: https://go-review.googlesource.com/42018
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Node.Used was written to from the backend
concurrently with reads of Node.Class
for the same ONAME Nodes.
I do not know why it was not failing consistently
under the race detector, but it is a race.
This is likely also a problem with Node.HasVal and Node.HasOpt.
They will be handled in a separate CL.
Fix Used by moving it to gc.Name and making it a separate bool.
There was one non-Name use of Used, marking OLABELs as used.
That is no longer needed, now that goto and label checking
happens early in the front end.
Leave the getters and setters in place,
to ease changing the representation in the future
(or changing to an interface!).
Updates #20144
Change-Id: I9bec7c6d33dcb129a4cfa9d338462ea33087f9f7
Reviewed-on: https://go-review.googlesource.com/42015
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Type.Size and Type.Alignment are for the front end:
They calculate size and alignment if needed.
Type.MustSize and Type.MustAlignment are for the back end:
They call Fatal if size and alignment are not already calculated.
Most uses are of MustSize and MustAlignment,
but that's because the back end is newer,
and this API was added to support it.
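A minimal sketch of the back-end half of that split (stand-in declarations;
the compiler has its own Type, BADWIDTH, and Fatal):

const BADWIDTH = -1 // stand-in sentinel for "size not yet calculated"

type Type struct{ Width int64 }

// MustSize is for the back end: it requires that the size has already
// been calculated, and fails loudly otherwise.
func (t *Type) MustSize() int64 {
	if t.Width == BADWIDTH {
		panic("MustSize: size not yet calculated") // the real code calls Fatal
	}
	return t.Width
}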
This CL was mostly generated with sed and selective reversion.
The only mildly interesting bit is the change of the ssa.Type interface
and the supporting ssa dummy types.
Follow-up to review feedback on CL 41970.
Passes toolstash-check.
Change-Id: I0d9b9505e57453dae8fb6a236a07a7a02abd459e
Reviewed-on: https://go-review.googlesource.com/42016
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
dowidth is fundamentally unsafe to call from the back end;
it will cause data races.
Replace all calls to dowidth in the backend with
assertions that the width has been calculated.
Then fix all the cases in which that was not so,
including the cases from #20145.
Fixes #20145.
Change-Id: Idba3d19d75638851a30ec2ebcdb703c19da3e92b
Reviewed-on: https://go-review.googlesource.com/41970
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
This CL adds initial support for concurrent backend compilation.
BACKGROUND
The compiler currently consists (very roughly) of the following phases:
1. Initialization.
2. Lexing and parsing into the cmd/compile/internal/syntax AST.
3. Translation into the cmd/compile/internal/gc AST.
4. Some gc AST passes: typechecking, escape analysis, inlining,
closure handling, expression evaluation ordering (order.go),
and some lowering and optimization (walk.go).
5. Translation into the cmd/compile/internal/ssa SSA form.
6. Optimization and lowering of SSA form.
7. Translation from SSA form to assembler instructions.
8. Translation from assembler instructions to machine code.
9. Writing lots of output: machine code, DWARF symbols,
type and reflection info, export data.
Phase 2 was already concurrent as of Go 1.8.
Phase 3 is planned for eventual removal;
we hope to go straight from syntax AST to SSA.
Phases 5–8 are per-function; this CL adds support for
processing multiple functions concurrently.
The slowest phases in the compiler are 5 and 6,
so this offers the opportunity for some good speed-ups.
Unfortunately, it's not quite that straightforward.
In the current compiler, the latter parts of phase 4
(order, walk) are done function-at-a-time as needed.
Making order and walk concurrency-safe proved hard,
and they're not particularly slow, so there wasn't much reward.
To enable phases 5–8 to be done concurrently,
when concurrent backend compilation is requested,
we complete phase 4 for all functions
before starting later phases for any functions.
Also, in reality, we automatically generate new
functions in phase 9, such as method wrappers
and equality and hash routines.
Those new functions then go through phases 4–8.
This CL disables concurrent backend compilation
after the first, big, user-provided batch of
functions has been compiled.
This is done to keep things simple,
and because the autogenerated functions
tend to be small, few, simple, and fast to compile.
USAGE
Concurrent backend compilation still defaults to off.
To set the number of functions that may be backend-compiled
concurrently, use the compiler flag -c.
In future work, cmd/go will automatically set -c.
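For example (a hypothetical invocation; the package path is illustrative):
go build -gcflags=-c=4 ./mypkg
compiles up to four functions' backends concurrently within each package.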
Furthermore, this CL has been intentionally written
so that the c=1 path has no backend concurrency whatsoever,
not even spawning any goroutines.
This helps ensure that, should problems arise
late in the development cycle,
we can simply have cmd/go set c=1 always,
and revert to the original compiler behavior.
MUTEXES
Most of the work required to make concurrent backend
compilation safe has occurred over the past month.
This CL adds a handful of mutexes to get the rest of the way there;
they are the mutexes that I didn't see a clean way to avoid.
Some of them may still be eliminable in future work.
In no particular order:
* gc.funcsymsmu. The global funcsyms slice is populated
lazily when we need function symbols for closures.
This occurs during gc AST to SSA translation.
The function funcsym also does a package lookup,
which is a source of races on types.Pkg.Syms;
funcsymsmu also covers that package lookup.
This mutex is low priority: it adds a single global,
it is in an infrequently used code path, and it is low contention.
Since funcsyms may now be added in any order,
we must sort them to preserve reproducible builds.
* gc.largeStackFramesMu. We don't discover until after SSA compilation
that a function's stack frame is gigantic.
Recording that error happens basically never,
but it does happen concurrently.
Fix with a low priority mutex and sorting.
* obj.Link.hashmu. ctxt.hash stores the mapping from
types.Syms (compiler symbols) to obj.LSyms (linker symbols).
It is accessed fairly heavily through all the phases.
This is the only heavily contended mutex.
* gc.signatlistmu. The global signatlist map is
populated with types through several of the concurrent phases,
including notably via ngotype during DWARF generation.
It is low priority for removal.
* gc.typepkgmu. Looking up symbols in the types package
happens a fair amount during backend compilation
and DWARF generation, particularly via ngotype.
This mutex helps us to avoid a broader mutex on types.Pkg.Syms.
It has low-to-moderate contention.
* types.internedStringsmu. gc AST to SSA conversion and
some SSA work introduce new autotmps.
Those autotmps have their names interned to reduce allocations.
That interning requires protecting types.internedStrings.
The autotmp names are heavily re-used, and the mutex
overhead and contention here are low, so it is probably
a worthwhile performance optimization to keep this mutex.
TESTING
I have been testing this code locally by running
'go install -race cmd/compile'
and then doing
'go build -a -gcflags=-c=128 std cmd'
for all architectures and a variety of compiler flags.
This obviously needs to be made part of the builders,
but it is too expensive to make part of all.bash.
I have filed #19962 for this.
REPRODUCIBLE BUILDS
This version of the compiler generates reproducible builds.
Testing reproducible builds also needs automation, however,
and is also too expensive for all.bash.
This is #19961.
Also of note is that some of the compiler flags used by 'toolstash -cmp'
are currently incompatible with concurrent backend compilation.
They still work fine with c=1.
Time will tell whether this is a problem.
NEXT STEPS
* Continue to find and fix races and bugs,
using a combination of code inspection, fuzzing,
and hopefully some community experimentation.
I do not know of any outstanding races,
but there probably are some.
* Improve testing.
* Improve performance, for many values of c.
* Integrate with cmd/go and fine tune.
* Support concurrent compilation with the -race flag.
It is a sad irony that it does not yet work.
* Minor code cleanup that has been deferred during
the last month due to uncertainty about the
ultimate shape of this CL.
PERFORMANCE
Here's the buried lede, at last. :)
All benchmarks are from my 8 core 2.9 GHz Intel Core i7 darwin/amd64 laptop.
First, going from tip to this CL with c=1 has almost no impact.
name old time/op new time/op delta
Template 195ms ± 3% 194ms ± 5% ~ (p=0.370 n=30+29)
Unicode 86.6ms ± 3% 87.0ms ± 7% ~ (p=0.958 n=29+30)
GoTypes 548ms ± 3% 555ms ± 4% +1.35% (p=0.001 n=30+28)
Compiler 2.51s ± 2% 2.54s ± 2% +1.17% (p=0.000 n=28+30)
SSA 5.16s ± 3% 5.16s ± 2% ~ (p=0.910 n=30+29)
Flate 124ms ± 5% 124ms ± 4% ~ (p=0.947 n=30+30)
GoParser 146ms ± 3% 146ms ± 3% ~ (p=0.150 n=29+28)
Reflect 354ms ± 3% 352ms ± 4% ~ (p=0.096 n=29+29)
Tar 107ms ± 5% 106ms ± 3% ~ (p=0.370 n=30+29)
XML 200ms ± 4% 201ms ± 4% ~ (p=0.313 n=29+28)
[Geo mean] 332ms 333ms +0.10%
name old user-time/op new user-time/op delta
Template 227ms ± 5% 225ms ± 5% ~ (p=0.457 n=28+27)
Unicode 109ms ± 4% 109ms ± 5% ~ (p=0.758 n=29+29)
GoTypes 713ms ± 4% 721ms ± 5% ~ (p=0.051 n=30+29)
Compiler 3.36s ± 2% 3.38s ± 3% ~ (p=0.146 n=30+30)
SSA 7.46s ± 3% 7.47s ± 3% ~ (p=0.804 n=30+29)
Flate 146ms ± 7% 147ms ± 3% ~ (p=0.833 n=29+27)
GoParser 179ms ± 5% 179ms ± 5% ~ (p=0.866 n=30+30)
Reflect 431ms ± 4% 429ms ± 4% ~ (p=0.593 n=29+30)
Tar 124ms ± 5% 123ms ± 5% ~ (p=0.140 n=29+29)
XML 243ms ± 4% 242ms ± 7% ~ (p=0.404 n=29+29)
[Geo mean] 415ms 415ms +0.02%
name old obj-bytes new obj-bytes delta
Template 382k ± 0% 382k ± 0% ~ (all equal)
Unicode 203k ± 0% 203k ± 0% ~ (all equal)
GoTypes 1.18M ± 0% 1.18M ± 0% ~ (all equal)
Compiler 3.98M ± 0% 3.98M ± 0% ~ (all equal)
SSA 8.28M ± 0% 8.28M ± 0% ~ (all equal)
Flate 230k ± 0% 230k ± 0% ~ (all equal)
GoParser 287k ± 0% 287k ± 0% ~ (all equal)
Reflect 1.00M ± 0% 1.00M ± 0% ~ (all equal)
Tar 190k ± 0% 190k ± 0% ~ (all equal)
XML 416k ± 0% 416k ± 0% ~ (all equal)
[Geo mean] 660k 660k +0.00%
Comparing this CL to itself, from c=1 to c=2
improves real times 20-30%, costs 5-10% more CPU time,
and adds about 2% alloc.
The allocation increase comes from allocating more ssa.Caches.
name old time/op new time/op delta
Template 202ms ± 3% 149ms ± 3% -26.15% (p=0.000 n=49+49)
Unicode 87.4ms ± 4% 84.2ms ± 3% -3.68% (p=0.000 n=48+48)
GoTypes 560ms ± 2% 398ms ± 2% -28.96% (p=0.000 n=49+49)
Compiler 2.46s ± 3% 1.76s ± 2% -28.61% (p=0.000 n=48+46)
SSA 6.17s ± 2% 4.04s ± 1% -34.52% (p=0.000 n=49+49)
Flate 126ms ± 3% 92ms ± 2% -26.81% (p=0.000 n=49+48)
GoParser 148ms ± 4% 107ms ± 2% -27.78% (p=0.000 n=49+48)
Reflect 361ms ± 3% 281ms ± 3% -22.10% (p=0.000 n=49+49)
Tar 109ms ± 4% 86ms ± 3% -20.81% (p=0.000 n=49+47)
XML 204ms ± 3% 144ms ± 2% -29.53% (p=0.000 n=48+45)
name old user-time/op new user-time/op delta
Template 246ms ± 9% 246ms ± 4% ~ (p=0.401 n=50+48)
Unicode 109ms ± 4% 111ms ± 4% +1.47% (p=0.000 n=44+50)
GoTypes 728ms ± 3% 765ms ± 3% +5.04% (p=0.000 n=46+50)
Compiler 3.33s ± 3% 3.41s ± 2% +2.31% (p=0.000 n=49+48)
SSA 8.52s ± 2% 9.11s ± 2% +6.93% (p=0.000 n=49+47)
Flate 149ms ± 4% 161ms ± 3% +8.13% (p=0.000 n=50+47)
GoParser 181ms ± 5% 192ms ± 2% +6.40% (p=0.000 n=49+46)
Reflect 452ms ± 9% 474ms ± 2% +4.99% (p=0.000 n=50+48)
Tar 126ms ± 6% 136ms ± 4% +7.95% (p=0.000 n=50+49)
XML 247ms ± 5% 264ms ± 3% +6.94% (p=0.000 n=48+50)
name old alloc/op new alloc/op delta
Template 38.8MB ± 0% 39.3MB ± 0% +1.48% (p=0.008 n=5+5)
Unicode 29.8MB ± 0% 30.2MB ± 0% +1.19% (p=0.008 n=5+5)
GoTypes 113MB ± 0% 114MB ± 0% +0.69% (p=0.008 n=5+5)
Compiler 443MB ± 0% 447MB ± 0% +0.95% (p=0.008 n=5+5)
SSA 1.25GB ± 0% 1.26GB ± 0% +0.89% (p=0.008 n=5+5)
Flate 25.3MB ± 0% 25.9MB ± 1% +2.35% (p=0.008 n=5+5)
GoParser 31.7MB ± 0% 32.2MB ± 0% +1.59% (p=0.008 n=5+5)
Reflect 78.2MB ± 0% 78.9MB ± 0% +0.91% (p=0.008 n=5+5)
Tar 26.6MB ± 0% 27.0MB ± 0% +1.80% (p=0.008 n=5+5)
XML 42.4MB ± 0% 43.4MB ± 0% +2.35% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Template 379k ± 0% 378k ± 0% ~ (p=0.421 n=5+5)
Unicode 322k ± 0% 321k ± 0% ~ (p=0.222 n=5+5)
GoTypes 1.14M ± 0% 1.14M ± 0% ~ (p=0.548 n=5+5)
Compiler 4.12M ± 0% 4.11M ± 0% -0.14% (p=0.032 n=5+5)
SSA 9.72M ± 0% 9.72M ± 0% ~ (p=0.421 n=5+5)
Flate 234k ± 1% 234k ± 0% ~ (p=0.421 n=5+5)
GoParser 316k ± 1% 315k ± 0% ~ (p=0.222 n=5+5)
Reflect 980k ± 0% 979k ± 0% ~ (p=0.095 n=5+5)
Tar 249k ± 1% 249k ± 1% ~ (p=0.841 n=5+5)
XML 392k ± 0% 391k ± 0% ~ (p=0.095 n=5+5)
From c=1 to c=4, real time is down ~40%, CPU usage up 10-20%, alloc up ~5%:
name old time/op new time/op delta
Template 203ms ± 3% 131ms ± 5% -35.45% (p=0.000 n=50+50)
Unicode 87.2ms ± 4% 84.1ms ± 2% -3.61% (p=0.000 n=48+47)
GoTypes 560ms ± 4% 310ms ± 2% -44.65% (p=0.000 n=50+49)
Compiler 2.47s ± 3% 1.41s ± 2% -43.10% (p=0.000 n=50+46)
SSA 6.17s ± 2% 3.20s ± 2% -48.06% (p=0.000 n=49+49)
Flate 126ms ± 4% 74ms ± 2% -41.06% (p=0.000 n=49+48)
GoParser 148ms ± 4% 89ms ± 3% -39.97% (p=0.000 n=49+50)
Reflect 360ms ± 3% 242ms ± 3% -32.81% (p=0.000 n=49+49)
Tar 108ms ± 4% 73ms ± 4% -32.48% (p=0.000 n=50+49)
XML 203ms ± 3% 119ms ± 3% -41.56% (p=0.000 n=49+48)
name old user-time/op new user-time/op delta
Template 246ms ± 9% 287ms ± 9% +16.98% (p=0.000 n=50+50)
Unicode 109ms ± 4% 118ms ± 5% +7.56% (p=0.000 n=46+50)
GoTypes 735ms ± 4% 806ms ± 2% +9.62% (p=0.000 n=50+50)
Compiler 3.34s ± 4% 3.56s ± 2% +6.78% (p=0.000 n=49+49)
SSA 8.54s ± 3% 10.04s ± 3% +17.55% (p=0.000 n=50+50)
Flate 149ms ± 6% 176ms ± 3% +17.82% (p=0.000 n=50+48)
GoParser 181ms ± 5% 213ms ± 3% +17.47% (p=0.000 n=50+50)
Reflect 453ms ± 6% 499ms ± 2% +10.11% (p=0.000 n=50+48)
Tar 126ms ± 5% 149ms ±11% +18.76% (p=0.000 n=50+50)
XML 246ms ± 5% 287ms ± 4% +16.53% (p=0.000 n=49+50)
name old alloc/op new alloc/op delta
Template 38.8MB ± 0% 40.4MB ± 0% +4.21% (p=0.008 n=5+5)
Unicode 29.8MB ± 0% 30.9MB ± 0% +3.68% (p=0.008 n=5+5)
GoTypes 113MB ± 0% 116MB ± 0% +2.71% (p=0.008 n=5+5)
Compiler 443MB ± 0% 455MB ± 0% +2.75% (p=0.008 n=5+5)
SSA 1.25GB ± 0% 1.27GB ± 0% +1.84% (p=0.008 n=5+5)
Flate 25.3MB ± 0% 26.9MB ± 1% +6.31% (p=0.008 n=5+5)
GoParser 31.7MB ± 0% 33.2MB ± 0% +4.61% (p=0.008 n=5+5)
Reflect 78.2MB ± 0% 80.2MB ± 0% +2.53% (p=0.008 n=5+5)
Tar 26.6MB ± 0% 27.9MB ± 0% +5.19% (p=0.008 n=5+5)
XML 42.4MB ± 0% 44.6MB ± 0% +5.20% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Template 380k ± 0% 379k ± 0% -0.39% (p=0.032 n=5+5)
Unicode 321k ± 0% 321k ± 0% ~ (p=0.841 n=5+5)
GoTypes 1.14M ± 0% 1.14M ± 0% ~ (p=0.421 n=5+5)
Compiler 4.12M ± 0% 4.14M ± 0% +0.52% (p=0.008 n=5+5)
SSA 9.72M ± 0% 9.76M ± 0% +0.37% (p=0.008 n=5+5)
Flate 234k ± 1% 234k ± 1% ~ (p=0.690 n=5+5)
GoParser 316k ± 0% 317k ± 1% ~ (p=0.841 n=5+5)
Reflect 981k ± 0% 981k ± 0% ~ (p=1.000 n=5+5)
Tar 250k ± 0% 249k ± 1% ~ (p=0.151 n=5+5)
XML 393k ± 0% 392k ± 0% ~ (p=0.056 n=5+5)
Going beyond c=4 on my machine tends to increase CPU time and allocs
without impacting real time.
The CPU time numbers matter: when many compilation processes
run concurrently, the extra CPU work cuts into overall throughput.
The numbers above are in many ways the best-case scenario;
we can take full advantage of all cores.
Fortunately, the most common compilation scenario is incremental
re-compilation of a single package during a build/test cycle.
Updates #15756
Change-Id: I6725558ca2069edec0ac5b0d1683105a9fff6bea
Reviewed-on: https://go-review.googlesource.com/40693
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
node.Likely may once have held -1/0/+1,
but it is now only 0/1.
With improved SSA heuristics,
it may someday go away entirely.
Change-Id: I6451d17fd7fb47e67fea4d39df302b6db00ea57b
Reviewed-on: https://go-review.googlesource.com/41760
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
At VARKILLs, zero a variable if it is ambiguously live.
After the VARKILL, anything this variable references
might be collected. If it were to become live again later,
the GC would see references to already-collected objects.
We don't know a variable is ambiguously live until very
late in compilation (after lowering, register allocation, ...),
so it is hard to generate the code in an arch-independent way.
We also have to be careful not to clobber any registers.
Fortunately, this almost never happens, so performance is ~irrelevant.
There are only 2 instances where this triggers in the stdlib.
Fixes #20029
Change-Id: Ia9585a91d7b823fad4a9d141d954464cc7af31f4
Reviewed-on: https://go-review.googlesource.com/41076
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Instead of populating the aux symbol
of CALLudiv during rewrite rules,
populate it during genssa.
This simplifies the rewrite rules.
It also removes all remaining calls
to ctxt.Lookup from any rewrite rules.
This is a first step towards removing
ctxt from ssa.Cache entirely,
and also a first step towards converting
the obj.LSym.Version field into a boolean.
It should also speed up compilation.
Also, move func udiv into package runtime.
That's where it is anyway,
and it lets udiv look and act like the rest of
the runtime support functions.
Change-Id: I41462a632c14fdc41f61b08049ec13cd80a87bfe
Reviewed-on: https://go-review.googlesource.com/41191
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Now only cmd/asm and cmd/compile depend on cmd/internal/obj. Changing
the assembler backends no longer requires reinstalling cmd/link or
cmd/addr2line.
There's also now one canonical definition of the object file format in
cmd/internal/objabi/doc.go, with a warning to update all three
implementations.
objabi is still something of a grab bag of unrelated code (e.g., flag
and environment variable handling probably belong in a separate "tool"
package), but this is still progress.
Fixes #15165.
Fixes #20026.
Change-Id: Ic4b92fac7d0d35438e0d20c9579aad4085c5534c
Reviewed-on: https://go-review.googlesource.com/40972
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The compiler handled gcargs and gclocals LSyms unusually.
It generated placeholder symbols (makefuncdatasym),
filled them in, and then renamed them for content-addressability.
This is an important binary size optimization;
the same locals information occurs over and over.
This CL continues to treat these LSyms unusually,
but in a slightly more explicit way,
and importantly for concurrent compilation,
in a way that does not require concurrent
modification of Ctxt.Hash.
Instead of creating gcargs and gclocals in the usual way,
by creating a types.Sym and then an obj.LSym,
we add them directly to obj.FuncInfo,
initialize them in obj.InitTextSym,
and deduplicate and add them to ctxt.Data at the end.
Then the backend's job is simply to fill them in
and rename them appropriately.
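The idea of content-addressing these symbols can be sketched as
follows (illustrative only; the hash choice and the gclocals-style
naming are assumptions, and this is not the compiler's actual code):

package dedup

import (
	"crypto/md5"
	"fmt"
)

// LSym stands in for a linker symbol with a name and contents.
type LSym struct {
	Name string
	P    []byte
}

var byContent = map[[md5.Size]byte]*LSym{}

// canonicalize returns one shared symbol per distinct payload,
// naming it after a hash of its contents so that identical locals
// information is emitted once and builds remain reproducible.
func canonicalize(data []byte) *LSym {
	h := md5.Sum(data)
	if s, ok := byContent[h]; ok {
		return s
	}
	s := &LSym{Name: fmt.Sprintf("gclocals·%x", h), P: data}
	byContent[h] = s
	return s
}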
Updates #15756
name old alloc/op new alloc/op delta
Template 38.8MB ± 0% 38.7MB ± 0% -0.22% (p=0.016 n=5+5)
Unicode 29.8MB ± 0% 29.8MB ± 0% ~ (p=0.690 n=5+5)
GoTypes 113MB ± 0% 113MB ± 0% -0.24% (p=0.008 n=5+5)
SSA 1.25GB ± 0% 1.24GB ± 0% -0.39% (p=0.008 n=5+5)
Flate 25.3MB ± 0% 25.2MB ± 0% -0.43% (p=0.008 n=5+5)
GoParser 31.7MB ± 0% 31.7MB ± 0% -0.22% (p=0.008 n=5+5)
Reflect 78.2MB ± 0% 77.6MB ± 0% -0.80% (p=0.008 n=5+5)
Tar 26.6MB ± 0% 26.3MB ± 0% -0.85% (p=0.008 n=5+5)
XML 42.4MB ± 0% 41.9MB ± 0% -1.04% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Template 378k ± 0% 377k ± 1% ~ (p=0.151 n=5+5)
Unicode 321k ± 1% 321k ± 0% ~ (p=0.841 n=5+5)
GoTypes 1.14M ± 0% 1.14M ± 0% -0.47% (p=0.016 n=5+5)
SSA 9.71M ± 0% 9.67M ± 0% -0.33% (p=0.008 n=5+5)
Flate 233k ± 1% 232k ± 1% ~ (p=0.151 n=5+5)
GoParser 316k ± 0% 315k ± 0% -0.49% (p=0.016 n=5+5)
Reflect 979k ± 0% 972k ± 0% -0.75% (p=0.008 n=5+5)
Tar 250k ± 0% 247k ± 1% -0.92% (p=0.008 n=5+5)
XML 392k ± 1% 389k ± 0% -0.67% (p=0.008 n=5+5)
Change-Id: Idc36186ca9d2f8214b5f7720bbc27b6bb22fdc48
Reviewed-on: https://go-review.googlesource.com/40697
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
These are never accessed.
Change-Id: I45975972d19d1f263f6545c9ed648511501094c6
Reviewed-on: https://go-review.googlesource.com/40315
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
- created new package cmd/compile/internal/types
- moved Pkg, Sym, Type to new package
- to break cycles, for now we need the (ugly) types/utils.go
file, which contains a handful of functions that must be installed
early by the gc frontend
- to break cycles, for now we need two functions to convert between
*gc.Node and *types.Node (the latter is a dummy type)
- adjusted the gc's code to use the new package and the conversion
functions as needed
- turned several Pkg, Sym, and Type methods into functions as needed
- renamed constructors typ, typPtr, typArray, etc. to types.New,
types.NewPtr, types.NewArray, etc.
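  Roughly, call sites change as follows (argument shapes here are
  assumed for illustration):

	typ(TSTRUCT)          ->  types.New(TSTRUCT)
	typPtr(elem)          ->  types.NewPtr(elem)
	typArray(elem, bound) ->  types.NewArray(elem, bound)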
Passes toolstash-check -all.
Change-Id: I8adfa5e85c731645d0a7fd2030375ed6ebf54b72
Reviewed-on: https://go-review.googlesource.com/39855
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Popcount instructions on amd64 are not guaranteed to be
present, so we must guard their call. Rewrite rules can't
generate control flow at the moment, so the intrinsifier
needs to generate that code.
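Conceptually, the guard looks like the following sketch (hypothetical
flag and helper names; the real check and fallback live in the runtime
and the generated SSA, not in Go source):

package popcount

// hasPOPCNT stands in for the CPU-feature flag that would be set at
// startup; onesCount64 shows the guarded shape the intrinsifier emits.
var hasPOPCNT bool

func onesCount64(x uint64) int {
	if hasPOPCNT {
		return popcnt64(x)
	}
	return softOnesCount64(x)
}

// popcnt64 stands in for the single POPCNT instruction emitted on the
// fast path; stubbed here so the sketch compiles.
func popcnt64(x uint64) int { return softOnesCount64(x) }

// softOnesCount64 is a generic fallback (Kernighan's bit-clearing loop).
func softOnesCount64(x uint64) int {
	n := 0
	for ; x != 0; x &= x - 1 {
		n++
	}
	return n
}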
name old time/op new time/op delta
OnesCount-8 2.47ns ± 5% 1.04ns ± 2% -57.70% (p=0.000 n=10+10)
OnesCount16-8 1.05ns ± 1% 0.78ns ± 0% -25.56% (p=0.000 n=9+8)
OnesCount32-8 1.63ns ± 5% 1.04ns ± 2% -35.96% (p=0.000 n=10+10)
OnesCount64-8 2.45ns ± 0% 1.04ns ± 1% -57.55% (p=0.000 n=6+10)
Update #18616
Change-Id: I4aff2cc9aa93787898d7b22055fe272a7cf95673
Reviewed-on: https://go-review.googlesource.com/38320
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
This triggers 119 times during make.bash.
This CL reduces the time it takes for the
compiler to panic while compiling the code in #19751
from 22 minutes to 15 minutes. Yay, I guess.
Updates #19751
Change-Id: I8ca7f1ae75f89d1eb2a361d67b3055a975221734
Reviewed-on: https://go-review.googlesource.com/39294
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>