Commit Graph

38 Commits

Author SHA1 Message Date
cuiweixie f4bf9bceac runtime: use strings.Builder
Change-Id: I9274d36ca983fdf59088b71a97b139ea262370dd
Reviewed-on: https://go-review.googlesource.com/c/go/+/428276
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2022-09-07 07:02:51 +00:00
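
The CL above has no explanatory body; the usual pattern behind such a change is replacing repeated string concatenation with strings.Builder. A minimal, self-contained sketch of that pattern (not the actual runtime change):

package main

import (
    "fmt"
    "strings"
)

func main() {
    // Before: s += piece in a loop copies the string on every iteration.
    // After: strings.Builder grows one buffer and converts to a string once.
    var b strings.Builder
    for i := 0; i < 3; i++ {
        fmt.Fprintf(&b, "part-%d;", i) // Builder implements io.Writer
    }
    fmt.Println(b.String()) // part-0;part-1;part-2;
}
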
cuiweixie 4ad55cd93f runtime: convert local vars started, progress in TestStackGrowth to atomic types
For #53821

Change-Id: I9c777ff642ea4b70073335279551cea6a2394569
Reviewed-on: https://go-review.googlesource.com/c/go/+/427138
Run-TryBot: xie cui <523516579@qq.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
2022-09-05 08:08:18 +00:00
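
A hedged sketch of the conversion the CL above describes: replacing plain variables driven by atomic.StoreUint32/LoadUint32 calls with the typed atomics added in Go 1.19. The variables and goroutine here are illustrative, not the test's actual code:

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

func main() {
    // Old style: var started uint32 plus atomic.StoreUint32(&started, 1).
    // New style: typed atomics make the intended access pattern explicit.
    var started atomic.Bool
    var progress atomic.Uint32

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        started.Store(true)
        progress.Add(1)
    }()
    wg.Wait()
    fmt.Println(started.Load(), progress.Load()) // true 1
}
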
Keith Randall 016d755213 runtime: measure stack usage; start stacks larger if needed
Measure the average stack size used by goroutines at every GC. When
starting a new goroutine, allocate an initial goroutine stack of that
average size. Intuition is that we'll waste at most 2x in stack space
because only half the goroutines can be below average. In turn, we
avoid some of the early stack growth / copying needed in the average
case.

More details in the design doc at: https://docs.google.com/document/d/1YDlGIdVTPnmUiTAavlZxBI1d9pwGQgZT7IKFKlIXohQ/edit?usp=sharing

name        old time/op  new time/op  delta
Issue18138  95.3µs ± 0%  67.3µs ±13%  -29.35%  (p=0.000 n=9+10)

Fixes #18138

Change-Id: Iba34d22ed04279da7e718bbd569bbf2734922eaa
Reviewed-on: https://go-review.googlesource.com/c/go/+/345889
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
2022-05-12 22:32:42 +00:00
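
A rough, illustrative version of the heuristic described above, assuming the runtime rounds the observed average up to a power of two between a fixed minimum and a cap (the constants and function name here are invented; the real logic lives in the runtime and differs in detail):

package main

import (
    "fmt"
    "math/bits"
)

const (
    minStack = 2 << 10  // 2 KiB: the traditional initial goroutine stack
    maxStart = 32 << 10 // assumed cap on how large a fresh stack may start
)

// startingStackSize picks an initial stack for new goroutines from the
// average stack size observed at the last GC, rounded up to a power of two.
func startingStackSize(avgStackBytes uint64) uint64 {
    size := uint64(minStack)
    if avgStackBytes > size {
        size = uint64(1) << bits.Len64(avgStackBytes-1) // round up to a power of two
    }
    if size > maxStart {
        size = maxStart
    }
    return size
}

func main() {
    fmt.Println(startingStackSize(1500))    // 2048
    fmt.Println(startingStackSize(5000))    // 8192
    fmt.Println(startingStackSize(1 << 20)) // 32768 (capped)
}
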
Bryan C. Mills bec2cc3708 runtime: eliminate arbitrary timeout in TestStackGrowth
Instead, allow the test to run up until nearly the test's deadline,
whatever that may be, and then crash with a panic (instead of calling
t.Errorf) to get a useful goroutine dump.

With the arbitrary timeout removed, we can now also run this test in
short mode, reducing its impact on test latency.

Fixes #19381

Change-Id: Ie1fae321a2973fcb9b69a012103363f16214f529
Reviewed-on: https://go-review.googlesource.com/c/go/+/378034
Trust: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-01-19 16:23:21 +00:00
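
A minimal sketch of the pattern described above: derive the wait budget from t.Deadline rather than a hard-coded timeout, and panic on overrun so the failure output carries a full goroutine dump. The helper name and margins are invented:

package example

import (
    "testing"
    "time"
)

// waitWithDeadline waits for done until shortly before the test binary's own
// deadline; on overrun it panics instead of calling t.Errorf, so the crash
// output includes stacks for every goroutine.
func waitWithDeadline(t *testing.T, done <-chan struct{}) {
    timeout := 20 * time.Second // fallback when no -timeout is in effect
    if deadline, ok := t.Deadline(); ok {
        timeout = time.Until(deadline) - time.Second // leave room for cleanup
    }
    select {
    case <-done:
    case <-time.After(timeout):
        panic("test timed out waiting for stack growth; dumping all goroutines")
    }
}
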
Russ Cox 2580d0e08d all: gofmt -w -r 'interface{} -> any' src
And then revert the bootstrap cmd directories and certain testdata.
And adjust tests as needed.

Not reverting the changes in std that are bootstrapped,
because some of those changes would appear in API docs,
and we want to use any consistently.
Instead, rewrite 'any' to 'interface{}' in cmd/dist for those directories
when preparing the bootstrap copy.

A few files changed as a result of running gofmt -w
not because of interface{} -> any but because they
hadn't been updated for the new //go:build lines.

Fixes #49884.

Change-Id: Ie8045cba995f65bd79c694ec77a1b3d1fe01bb09
Reviewed-on: https://go-review.googlesource.com/c/go/+/368254
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2021-12-13 18:45:54 +00:00
Josh Bleecher Snyder 75773b0e7b runtime: add BenchmarkStackCopyWithStkobj
For benchmarking and improving recent stkobj-related changes.

Co-Authored-By: Cherry Mui <cherryyz@google.com>
Change-Id: I34c8b1a09e4cf98547460882b0d3908158269f57
Reviewed-on: https://go-review.googlesource.com/c/go/+/354071
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-10-05 20:35:41 +00:00
Austin Clements e8de596f04 runtime: use inlined function name for traceback elision
Currently, gentraceback decides which frames to print or elide when
unwinding inlined frames using only the name of the outermost
function. If the outermost function should be elided, then inlined
functions will also be elided, even if they shouldn't be.

This happens in practice in at least one situation. As of CL 258938,
exported Go functions (and functions they call) can now be inlined
into the generated _cgoexp_HASH_FN function. The runtime elides
_cgoexp_HASH_FN from tracebacks because it doesn't contain a ".".
Because of this bug, it also elides anything that was inlined into it.

This CL fixes this by synthesizing a funcInfo for the inlined
functions to pass to showframe.

Fixes #42754.

Change-Id: Ie6c663a4a1ac7f0d4beb1aa60bc26fc8cddd0f9d
Reviewed-on: https://go-review.googlesource.com/c/go/+/272131
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-11-24 21:47:44 +00:00
Keith Randall 07ad840098 runtime: remove unneeded noinline directives
Now that mid-stack inlining reports backtraces correctly, we no
longer need to protect against inlining in a few critical areas.

Update #19348
Update #28640
Update #34276

Change-Id: Ie68487e6482c3a9509ecf7ecbbd40fe43cee8381
Reviewed-on: https://go-review.googlesource.com/c/go/+/195818
Reviewed-by: David Chase <drchase@google.com>
2019-09-17 17:17:11 +00:00
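
For context, the kind of directive being removed looks like the sketch below; whether any particular runtime function still needs it is a separate question, so treat this as illustration only:

package main

import "fmt"

// A //go:noinline directive pins a function out of line. Before mid-stack
// inlining reported inlined frames correctly, directives like this were
// sometimes used just to keep tracebacks accurate; that reason is now gone.
//
//go:noinline
func keptOutOfLine(x int) int {
    return x * 2
}

func main() {
    fmt.Println(keptOutOfLine(21)) // 42
}
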
Keith Randall 8f296f59de Revert "Revert "cmd/compile,runtime: allocate defer records on the stack""
This reverts CL 180761

Reason for revert: Reinstate the stack-allocated defer CL.

There was nothing wrong with the CL proper, but stack allocation of defers exposed two other issues.

Issue #32477: Fix has been submitted as CL 181258.
Issue #32498: Possible fix is CL 181377 (not submitted yet).

Change-Id: I32b3365d5026600069291b068bbba6cb15295eb3
Reviewed-on: https://go-review.googlesource.com/c/go/+/181378
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-06-10 16:19:39 +00:00
Keith Randall 49200e3f3e Revert "cmd/compile,runtime: allocate defer records on the stack"
This reverts commit fff4f599fe.

Reason for revert: Seems to still have issues around GC.

Fixes #32452

Change-Id: Ibe7af629f9ad6a3d5312acd7b066123f484da7f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/180761
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2019-06-05 19:50:09 +00:00
Keith Randall fff4f599fe cmd/compile,runtime: allocate defer records on the stack
When a defer is executed at most once in a function body,
we can allocate the defer record for it on the stack instead
of on the heap.

This should make defers like this (which are very common) faster.

This optimization applies to 363 out of the 370 static defer sites
in the cmd/go binary.

name     old time/op  new time/op  delta
Defer-4  52.2ns ± 5%  36.2ns ± 3%  -30.70%  (p=0.000 n=10+10)

Fixes #6980
Update #14939

Change-Id: I697109dd7aeef9e97a9eeba2ef65ff53d3ee1004
Reviewed-on: https://go-review.googlesource.com/c/go/+/171758
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-06-04 17:35:20 +00:00
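
An illustration of the distinction above: a defer that executes at most once per call is eligible for a stack-allocated defer record, while a defer inside a loop is not. Whether a given defer record actually lands on the stack is the compiler's decision; nothing in the program changes:

package main

import (
    "fmt"
    "sync"
)

var mu sync.Mutex

// Single defer in the body: it runs at most once per call, so its defer
// record can be placed in critical's own stack frame.
func critical() {
    mu.Lock()
    defer mu.Unlock()
    fmt.Println("in critical section")
}

// Defer in a loop: it may run many times per call, so its records are not
// covered by this optimization.
func loopDefers(n int) {
    for i := 0; i < n; i++ {
        defer fmt.Println("loop defer", i)
    }
}

func main() {
    critical()
    loopDefers(2)
}
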
Russ Cox 018d9b42a7 runtime: disable some tests in -quick mode
Speeds up the "go test runtime -cpu=1,2,4 -short -quick" phase of all.bash.

For #26473.

Change-Id: I090f5a5aa754462b3253a2156dc31fa67ce7af2a
Reviewed-on: https://go-review.googlesource.com/c/go/+/177399
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-16 03:25:03 +00:00
Cherry Zhang 4f4c2a79d4 runtime: scan defer closure in stack scan
With stack objects, stack scanning scans defers via tracebackdefers,
but tracebackdefers doesn't appear to include the func value itself,
which could be a stack-allocated closure. Scan it explicitly.

Alternatively, we can change tracebackdefers to include the func
value, which in turn needs to change the type of stkframe.

Fixes #30453.

Change-Id: I55a6e43264d6952ab2fa5c638bebb89fdc410e2b
Reviewed-on: https://go-review.googlesource.com/c/164118
Reviewed-by: Keith Randall <khr@golang.org>
2019-03-01 16:21:29 +00:00
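
A small example of the situation the CL above covers: the deferred func value is a closure, which may itself be stack-allocated, so the stack scan has to treat it as a root. The fix is inside the runtime's stack scanner; this program only shows the shape of code involved:

package main

import "fmt"

func work() (result int) {
    x := new(int) // heap-allocated int captured by the deferred closure below
    *x = 41
    // The deferred func value is a closure over x; if that closure lives on
    // the stack, the GC must still find x through it during a stack scan.
    defer func() {
        result = *x + 1
    }()
    return 0
}

func main() {
    fmt.Println(work()) // 42
}
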
Keith Randall 13baf4b2cd cmd/compile: encourage inlining of functions with single-call bodies
This is a simple tweak to allow a bit more mid-stack inlining.
In cases like this:

func f() {
    g()
}

We'd really like to inline f into its callers. It can't hurt.

We implement this optimization by making calls a bit cheaper, enough
to afford a single call in the function body, but not 2.
The remaining budget allows for some argument modification, or perhaps
a wrapping conditional:

func f(x int) {
    g(x, 0)
}

func f(x int) {
    if x > 0 {
        g()
    }
}

Update #19348

Change-Id: Ifb1ea0dd1db216c3fd5c453c31c3355561fe406f
Reviewed-on: https://go-review.googlesource.com/c/147361
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
2018-11-08 17:29:23 +00:00
Ian Lance Taylor b56e24782f runtime: scale timeout in TestStackGrowth
Updates #19381

Change-Id: I62b8b0cd7170941af77281eb3aada3802623ec27
Reviewed-on: https://go-review.googlesource.com/122587
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-07-09 02:18:16 +00:00
Austin Clements e391faded7 runtime: fix defer matching of leaf functions on LR machines
Traceback matches the defer stack with the function call stack using
the SP recorded in defer frames when the defer frame is created.
However, on LR machines this is ambiguous: if function A pushes a
defer and then calls function B, where B is a leaf function with a
zero-sized frame, then both A and B have the same SP and will *both*
match the defer on the defer stack. Since traceback unwinds through B
first, it will incorrectly match up the defer with B's frame instead
of A's frame.

Where this goes particularly wrong is if function B causes a signal
that turns into a panic (e.g., a nil pointer dereference). In order to
handle the fact that we may not have a liveness map at the location
that caused the signal and injected a sigpanic call, traceback has
logic to unwind the panicking frame's continuation PC to the PC where
the most recent defer was pushed (this is safe because the frame is
dead other than any defers it pushed). However, if traceback
mis-matches the defer stack, it winds up reporting that B's
continuation PC is in A. If the runtime then uses this continuation PC
to look up PCDATA in B, it will panic because the PC is out of range
for B. This failure mode can be seen in
sync/atomic/atomic_test.go:TestNilDeref. An example failure is:
https://build.golang.org/log/8e07a762487839252af902355f6b1379dbd463c5

This CL fixes all of this by recognizing that a function that pushes a
defer must also have a non-zero-sized frame and using this fact to
refine the defer matching logic.

Fixes the build for arm64, mips, mipsle, ppc64, ppc64le, and s390x.

Fixes #25499.

Change-Id: Iff7c01d08ad42f3de22b3a73658cc2f674900101
Reviewed-on: https://go-review.googlesource.com/114078
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-05-22 19:38:03 +00:00
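
A sketch of the shape of code that exposed the bug above: A pushes a defer and then calls a leaf function whose nil dereference becomes a sigpanic. The recover keeps the example self-contained; the actual fix is in the runtime's traceback code, and the function names here are invented:

package main

import "fmt"

//go:noinline
func leaf(p *int) int {
    return *p // nil dereference: the signal is turned into a sigpanic
}

// a pushes a defer and then calls a leaf function that may have a
// zero-sized frame; on LR machines both frames can then share an SP,
// which is what confused the old defer-matching logic.
func a() {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()
    leaf(nil)
}

func main() {
    a()
}
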
Richard Musiol e3c684777a all: skip unsupported tests for js/wasm
The general policy for the current state of js/wasm is that it only
has to support tests that are also supported by nacl.

The test nilptr3.go makes assumptions about which nil checks can be
removed. Since WebAssembly does not signal on reading a null pointer,
all nil checks have to be explicit.

Updates #18892

Change-Id: I06a687860b8d22ae26b1c391499c0f5183e4c485
Reviewed-on: https://go-review.googlesource.com/110096
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-04-30 19:39:18 +00:00
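
The usual shape of such a skip, as a generic sketch (the test name below is invented; the real CL edits specific tests and testdata):

package example

import (
    "runtime"
    "testing"
)

func TestNeedsFaultOnNilDeref(t *testing.T) {
    // js/wasm cannot rely on a signal from reading a nil pointer, so skip,
    // mirroring the existing nacl skips.
    if runtime.GOOS == "js" {
        t.Skip("not supported on js/wasm: nil checks are explicit there")
    }
    // ... remainder of the test ...
}
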
Eric Daniels d9b006a705 runtime/traceback: support tracking goroutine ancestor tracebacks with GODEBUG="tracebackancestors=N"
Currently, collecting a stack trace via runtime.Stack captures the stacks of the
currently running goroutines. This change extends those tracebacks to include
the tracebacks of their ancestors. This is done at low memory cost and only
when the debug option tracebackancestors is set to a value greater than 0.

Resolves #22289

Change-Id: I7edacc62b2ee3bd278600c4a21052c351f313f3a
Reviewed-on: https://go-review.googlesource.com/70993
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-04-13 20:42:38 +00:00
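
A minimal way to see the feature above: run a program like the sketch below with GODEBUG=tracebackancestors=5, and the crash traceback also prints the stacks of the goroutines that created the crashing one (the exact header text of those ancestor sections may vary by Go version):

package main

import "time"

func child() {
    panic("crash in child goroutine")
}

func parent() {
    go child() // the runtime records that parent started child
}

func main() {
    // Run with: GODEBUG=tracebackancestors=5 go run .
    // The panic's goroutine dump then also includes parent's (and main's)
    // stacks as ancestors of the crashing goroutine.
    parent()
    time.Sleep(time.Second)
}
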
Josh Bleecher Snyder 00fab20582 runtime: improve StackCopy benchmarks
Make the StackCopyNoCache test easier to read.

Add a StackCopyPtr test that actually has some pointers
that need adjusting.

Change-Id: I5b07c26f40cb485c9de97ed63fac89a9e6f36650
Reviewed-on: https://go-review.googlesource.com/104195
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-04-04 04:45:12 +00:00
Austin Clements 2e5011d802 runtime: even more TestStackGrowth timeout debugging
This adds logging for the expected duration of a growStack, plus
progress information on the growStack that timed out.

Updates #19381.

Change-Id: Ic358f8350f499ff22dd213b658aece7d1aa62675
Reviewed-on: https://go-review.googlesource.com/81556
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2017-12-01 20:46:46 +00:00
Austin Clements 032678e0fb runtime: don't elide wrapper functions that call panic or are at TOS
CL 45412 started hiding autogenerated wrapper functions from call
stacks so that call stack semantics better matched language semantics.
This is based on the theory that the wrapper function will call the
"real" function and all the programmer knows about is the real
function.

However, this theory breaks down in two cases:

1. If the wrapper is at the top of the stack, then it didn't call
   anything. This can happen, for example, if the "stack" was actually
   synthesized by the user.

2. If the wrapper panics, for example by calling panicwrap or by
   dereferencing a nil pointer, then it didn't call the wrapped
   function and the user needs to see what panicked, even if we can't
   attribute it nicely.

This commit modifies the traceback logic to include the wrapper
function in both of these cases.

Fixes #22231.

Change-Id: I6e4339a652f73038bd8331884320f0b8edd86eb1
Reviewed-on: https://go-review.googlesource.com/76770
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-11-13 21:43:44 +00:00
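
A tiny program exercising case 2 above: calling a value-receiver method through a nil pointer panics inside the compiler-generated pointer wrapper (via panicwrap), and after this change that wrapper frame is shown in the traceback. This is an illustration of the behaviour, not a test from the CL:

package main

type T struct{ n int }

// Value-receiver method; the compiler also generates a (*T).M wrapper so
// that M can be called through a *T or an interface.
func (t T) M() int { return t.n }

func main() {
    var p *T
    // The autogenerated (*T).M wrapper detects the nil receiver and panics
    // ("value method main.T.M called using nil *T pointer"); the wrapper
    // frame now appears in the resulting traceback.
    _ = p.M()
}
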
Austin Clements 15d6ab69fb runtime: make systemstack tail call if already switched
Currently systemstack always calls its argument, even if we're already
on the system stack. Unfortunately, traceback with _TraceJump stops at
the first systemstack it sees, which often cuts off runtime stacks
early in profiles.

Fix this by performing a tail call if we're already on the system
stack. This eliminates it from the traceback entirely, so it won't
stop prematurely (or all get mushed into a single node in the profile
graph).

Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
Reviewed-on: https://go-review.googlesource.com/74050
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2017-10-30 16:33:55 +00:00
Austin Clements e97209515a runtime: hide <autogenerated> methods from call stack
The compiler generates wrapper methods to forward interface method
calls (which are always pointer-based) to value methods. These
wrappers appear in the call stack even though they are an
implementation detail. This leaves ugly "<autogenerated>" functions in
stack traces and can throw off skip counts for stack traces.

Fix this by treating these wrappers as runtime frames in printed stack traces,
so they will only be printed if runtime frames are being printed, and
by eliding them from the call stack expansion used by CallersFrames
and Caller.

This removes the test for issue 4388 since that was checking that
"<autogenerated>" appeared in the stack trace instead of something
even weirder. We replace it with various runtime package tests.

Fixes #16723.

Change-Id: Ice3f118c66f254bb71478a664d62ab3fc7125819
Reviewed-on: https://go-review.googlesource.com/45412
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-09-22 22:17:20 +00:00
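
The elision can be observed from user code with runtime.CallersFrames; a small sketch (which frames appear exactly depends on inlining and Go version):

package main

import (
    "fmt"
    "runtime"
)

type T struct{}

func (T) M() { printFrames() }

type iface interface{ M() }

// printFrames expands the caller stack the same way runtime.CallersFrames
// users do; the autogenerated (*T).M wrapper used for the interface call
// below is elided from this expansion.
func printFrames() {
    pc := make([]uintptr, 16)
    n := runtime.Callers(1, pc)
    frames := runtime.CallersFrames(pc[:n])
    for {
        f, more := frames.Next()
        fmt.Println(f.Function)
        if !more {
            break
        }
    }
}

func main() {
    var i iface = &T{} // the interface call goes through the (*T).M wrapper
    i.M()
}
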
Austin Clements 286599f270 runtime: make TestStackGrowth a serial test
TestStackGrowth is currently a parallel test. However, it depends on a
20 second timeout, which is already dubious in a parallel test, and
became really problematic on slow builders when runtime.GC switched to
triggering concurrent GC instead of STW GC. Before that change, the
test spent much of its time in STW GC, so it wasn't *really* parallel.
After that change, it was competing with all of the other parallel
tests and GC likely started taking ~4 times longer. On most builders
the whole test runs in well under a second, but on the slow builders
that was enough to push it over the 20 second timeout.

Fix this by making the test serial.

Updates #19381 (probably fixes it, but we'll have to wait and see).

Change-Id: I21af7cf543ab07f1ec1c930bfcb355b0df75672d
Reviewed-on: https://go-review.googlesource.com/48110
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Elias Naur <elias.naur@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2017-07-11 18:32:56 +00:00
Josh Bleecher Snyder aafd96408f runtime: speed up stack copying
I was surprised to see readvarint show up in a cpu profile.

Use a few simple optimizations to speed up stack copying:

* Avoid making a copy of the cache.entries array or any of its elements.
* Use a shift instead of a signed division in stackmapdata.
* Change readvarint to return the number of bytes consumed
  rather than an updated slice.
* Make some minor optimizations to readvarint to help the compiler.
* Avoid calling readvarint when the value fits in a single byte.

The first and last optimizations are the most significant,
although they all contribute a little.

Add a benchmark for stack copying that includes lots of different
functions in a recursive loop, to bust the cache.

This might speed up other runtime operations as well;
I only benchmarked stack copying.

name                old time/op  new time/op  delta
StackCopy-8         96.4ms ± 2%  82.7ms ± 1%  -14.24%  (p=0.000 n=20+19)
StackCopyNoCache-8   167ms ± 1%   131ms ± 1%  -21.58%  (p=0.000 n=20+20)

Change-Id: I13d5c455c65073c73b656acad86cf8e8e3c9807b
Reviewed-on: https://go-review.googlesource.com/43150
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2017-06-14 05:28:24 +00:00
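
A hedged sketch of the third and fifth bullets above: a varint decoder that returns the number of bytes consumed instead of a resliced buffer, with a fast path for single-byte values. The runtime's readvarint works on its own pcdata tables and differs in signature and detail:

package main

import "fmt"

// readvarint decodes an unsigned base-128 varint from p and reports the value
// and the number of bytes consumed, so callers advance an index rather than
// building a new slice header.
func readvarint(p []byte) (val uint32, n uint32) {
    // Fast path: values below 0x80 occupy a single byte.
    if b := p[0]; b < 0x80 {
        return uint32(b), 1
    }
    var shift uint
    for {
        b := p[n]
        n++
        val |= uint32(b&0x7f) << shift
        if b&0x80 == 0 {
            return val, n
        }
        shift += 7
    }
}

func main() {
    fmt.Println(readvarint([]byte{0x05}))       // 5 1
    fmt.Println(readvarint([]byte{0xac, 0x02})) // 300 2
}
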
Austin Clements 84890c72fc runtime: more diagnostics for TestStackGrowth
This adds diagnostics so we can tell if the finalizer has started, in
addition to whether or not it has finished.

Updates #19381.

Change-Id: Icb7b1b0380c9ad1128b17074828945511a6cca5d
Reviewed-on: https://go-review.googlesource.com/45138
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-06-08 18:21:37 +00:00
Austin Clements d089a6c718 runtime: remove stack barriers
Now that we don't rescan stacks, stack barriers are unnecessary. This
removes all of the code and structures supporting them as well as
tests that were specifically for stack barriers.

Updates #17503.

Change-Id: Ia29221730e0f2bbe7beab4fa757f31a032d9690c
Reviewed-on: https://go-review.googlesource.com/36620
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2017-02-14 15:52:54 +00:00
Brad Fitzpatrick 5fea2ccc77 all: single space after period.
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.

This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:

$ perl -i -npe 's,^(\s*// .+[a-z]\.)  +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.)  +([A-Z])')
$ go test go/doc -update

Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-02 00:13:47 +00:00
Brad Fitzpatrick 519474451a all: make copyright headers consistent with one space after period
This is a subset of https://golang.org/cl/20022 with only the copyright
header lines, so the next CL will be smaller and more reviewable.

Go policy has been single space after periods in comments for some time.

The copyright header template at:

    https://golang.org/doc/contribute.html#copyright

also uses a single space.

Make them all consistent.

Change-Id: Icc26c6b8495c3820da6b171ca96a74701b4a01b0
Reviewed-on: https://go-review.googlesource.com/20111
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-01 23:34:33 +00:00
Mikio Hara fa5e5478c8 runtime: don't call testing.Fatal from worker goroutines
Change-Id: I630d4d2d8a914d6c07f22351a56d5e44a937123e
Reviewed-on: https://go-review.googlesource.com/19245
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-02-07 03:18:28 +00:00
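
The rule behind the one-line summary above: T.Fatal, Fatalf, and FailNow may only be called from the goroutine running the test, since they stop that goroutine with runtime.Goexit; workers should hand errors back instead. A generic sketch with invented names:

package example

import "testing"

// doWork stands in for whatever the worker goroutine actually does.
func doWork() error { return nil }

func TestWorkers(t *testing.T) {
    errc := make(chan error, 1)
    go func() {
        // Don't call t.Fatal here: it would only exit this goroutine, not
        // the test. Report the error back to the test goroutine instead.
        errc <- doWork()
    }()
    if err := <-errc; err != nil {
        t.Fatal(err) // safe: this runs on the test goroutine
    }
}
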
Brad Fitzpatrick 2ae77376f7 all: link to https instead of http
The one in misc/makerelease/makerelease.go is particularly bad and
probably warrants rotating our keys.

I didn't update old weekly notes, and reverted some changes involving
test code for now, since we're late in the Go 1.5 freeze. Otherwise,
the rest are all auto-generated changes, and all manually reviewed.

Change-Id: Ia2753576ab5d64826a167d259f48a2f50508792d
Reviewed-on: https://go-review.googlesource.com/12048
Reviewed-by: Rob Pike <r@golang.org>
2015-07-11 14:36:33 +00:00
Alan Donovan 232331f0c7 runtime: add blank assignment to defeat "declared but not used" error from go/types
gc should ideally consider this an error too; see golang/go#8560.

Change-Id: Ieee71c4ecaff493d7f83e15ba8c8a04ee90a4cf1
Reviewed-on: https://go-review.googlesource.com/10757
Reviewed-by: Robert Griesemer <gri@golang.org>
2015-06-05 18:05:16 +00:00
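
The pattern the CL above adds, shown in isolation: assigning a variable to the blank identifier counts as a use, which silences "declared but not used" diagnostics:

package main

func compute() int { return 42 }

func main() {
    x := compute()
    _ = x // blank assignment: counts as a use of x for the type checkers
}
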
Austin Clements faa7a7e8ae runtime: implement GC stack barriers
This commit implements stack barriers to minimize the amount of
stack re-scanning that must be done during mark termination.

Currently the GC scans stacks of active goroutines twice during every
GC cycle: once at the beginning during root discovery and once at the
end during mark termination. The second scan happens while the world
is stopped and guarantees that we've seen all of the roots (since
there are no write barriers on writes to local stack
variables). However, this means pause time is proportional to stack
size. In particularly recursive programs, this can drive pause time up
past our 10ms goal (e.g., it takes about 150ms to scan a 50MB heap).

Re-scanning the entire stack is rarely necessary, especially for large
stacks, because usually most of the frames on the stack were not
active between the first and second scans and hence any changes to
these frames (via non-escaping pointers passed down the stack) were
tracked by write barriers.

To efficiently track how far a stack has been unwound since the first
scan (and, hence, how much needs to be re-scanned), this commit
introduces stack barriers. During the first scan, at exponentially
spaced points in each stack, the scan overwrites return PCs with the
PC of the stack barrier function. When "returned" to, the stack
barrier function records how far the stack has unwound and jumps to
the original return PC for that point in the stack. Then the second
scan only needs to proceed as far as the lowest barrier that hasn't
been hit.

For deeply recursive programs, this substantially reduces mark
termination time (and hence pause time). For the goscheme example
linked in issue #10898, prior to this change, mark termination times
were typically between 100 and 500ms; with this change, mark
termination times are typically between 10 and 20ms. As a result of
the reduced stack scanning work, this reduces overall execution time
of the goscheme example by 20%.

Fixes #10898.

The effect of this on programs that are not deeply recursive is
minimal:

name                   old time/op    new time/op    delta
BinaryTree17              3.16s ± 2%     3.26s ± 1%  +3.31%  (p=0.000 n=19+19)
Fannkuch11                2.42s ± 1%     2.48s ± 1%  +2.24%  (p=0.000 n=17+19)
FmtFprintfEmpty          50.0ns ± 3%    49.8ns ± 1%    ~     (p=0.534 n=20+19)
FmtFprintfString          173ns ± 0%     175ns ± 0%  +1.49%  (p=0.000 n=16+19)
FmtFprintfInt             170ns ± 1%     175ns ± 1%  +2.97%  (p=0.000 n=20+19)
FmtFprintfIntInt          288ns ± 0%     295ns ± 0%  +2.73%  (p=0.000 n=16+19)
FmtFprintfPrefixedInt     242ns ± 1%     252ns ± 1%  +4.13%  (p=0.000 n=18+18)
FmtFprintfFloat           324ns ± 0%     323ns ± 0%  -0.36%  (p=0.000 n=20+19)
FmtManyArgs              1.14µs ± 0%    1.12µs ± 1%  -1.01%  (p=0.000 n=18+19)
GobDecode                8.88ms ± 1%    8.87ms ± 0%    ~     (p=0.480 n=19+18)
GobEncode                6.80ms ± 1%    6.85ms ± 0%  +0.82%  (p=0.000 n=20+18)
Gzip                      363ms ± 1%     363ms ± 1%    ~     (p=0.077 n=18+20)
Gunzip                   90.6ms ± 0%    90.0ms ± 1%  -0.71%  (p=0.000 n=17+18)
HTTPClientServer         51.5µs ± 1%    50.8µs ± 1%  -1.32%  (p=0.000 n=18+18)
JSONEncode               17.0ms ± 0%    17.1ms ± 0%  +0.40%  (p=0.000 n=18+17)
JSONDecode               61.8ms ± 0%    63.8ms ± 1%  +3.11%  (p=0.000 n=18+17)
Mandelbrot200            3.84ms ± 0%    3.84ms ± 1%    ~     (p=0.583 n=19+19)
GoParse                  3.71ms ± 1%    3.72ms ± 1%    ~     (p=0.159 n=18+19)
RegexpMatchEasy0_32       100ns ± 0%     100ns ± 1%  -0.19%  (p=0.033 n=17+19)
RegexpMatchEasy0_1K       342ns ± 1%     331ns ± 0%  -3.41%  (p=0.000 n=19+19)
RegexpMatchEasy1_32      82.5ns ± 0%    81.7ns ± 0%  -0.98%  (p=0.000 n=18+18)
RegexpMatchEasy1_1K       505ns ± 0%     494ns ± 1%  -2.16%  (p=0.000 n=18+18)
RegexpMatchMedium_32      137ns ± 1%     137ns ± 1%  -0.24%  (p=0.048 n=20+18)
RegexpMatchMedium_1K     41.6µs ± 0%    41.3µs ± 1%  -0.57%  (p=0.004 n=18+20)
RegexpMatchHard_32       2.11µs ± 0%    2.11µs ± 1%  +0.20%  (p=0.037 n=17+19)
RegexpMatchHard_1K       63.9µs ± 2%    63.3µs ± 0%  -0.99%  (p=0.000 n=20+17)
Revcomp                   560ms ± 1%     522ms ± 0%  -6.87%  (p=0.000 n=18+16)
Template                 75.0ms ± 0%    75.1ms ± 1%  +0.18%  (p=0.013 n=18+19)
TimeParse                 358ns ± 1%     364ns ± 0%  +1.74%  (p=0.000 n=20+15)
TimeFormat                360ns ± 0%     372ns ± 0%  +3.55%  (p=0.000 n=20+18)

Change-Id: If8a9bfae6c128d15a4f405e02bcfa50129df82a2
Reviewed-on: https://go-review.googlesource.com/10314
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-06-02 20:00:57 +00:00
Keith Randall b2a950bb73 runtime: rename gothrow to throw
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.

This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.

sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go

Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2014-12-28 06:16:16 +00:00
Russ Cox 4bf4d9f86e runtime: reenable TestStackGrowth on 32-bit systems
If this needs to be turned back off, it should be done
just before the '// in finalizer' comment, not at the top
of the function.

GC is more precise now than it was (the only imprecise
stuff left is some global variables), so maybe the finalizer
test will work now on 32-bit systems.

LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/144030043
2014-09-16 17:46:25 -04:00
Russ Cox f95beae61d runtime: use traceback to traverse defer structures
This makes the GC and the stack copying agree about how
to interpret the defer structures. Previously, only the stack
copying treated them precisely.
This removes an untyped memory allocation and fixes
at least three copystack bugs.

To make sure the GC can find the deferred argument
frame until it has been copied, keep a Defer on the defer list
during its execution.

In addition to making it possible to remove the untyped
memory allocation, keeping the Defer on the list fixes
two races between copystack and execution of defers
(in both gopanic and Goexit). The problem is that once
the defer has been taken off the list, a stack copy that
happens before the deferred arguments have been copied
back to the stack will not update the arguments correctly.
The new tests TestDeferPtrsPanic and TestDeferPtrsGoexit
(variations on the existing TestDeferPtrs) pass now but
failed before this CL.

In addition to those fixes, keeping the Defer on the list
helps correct a dangling pointer error during copystack.
The traceback routines walk the Defer chain to provide
information about where a panic may resume execution.
When the executing Defer was not on the Defer chain
but instead linked from the Panic chain, the traceback
had to walk the Panic chain too. But Panic structs are
on the stack and being updated by copystack.
Traceback's use of the Panic chain while copystack is
updating those structs means that it can follow an
updated pointer and find itself reading from the new stack.
The new stack is usually all zeros, so it sees an incorrect
early end to the chain. The new TestPanicUseStack makes
this happen at tip and dies when adjustdefers finds an
unexpected argp. The new StackCopyPoison mode
causes an earlier bad dereference instead.
By keeping the Defer on the list, traceback can avoid
walking the Panic chain at all, making it okay for copystack
to update the Panics.

We'd have the same problem for any Defers on the stack.
There was only one: gopanic's dabort. Since we are not
taking the executing Defer off the chain, we can use it
to do what dabort was doing, and then there are no
Defers on the stack ever, so it is okay for traceback to use
the Defer chain even while copystack is executing:
copystack cannot modify the Defer chain.

LGTM=khr
R=khr
CC=dvyukov, golang-codereviews, iant, rlh
https://golang.org/cl/141490043
2014-09-16 10:36:38 -04:00
Russ Cox 15b76ad94b runtime: assume precisestack, copystack, StackCopyAlways, ScanStackByFrames
Commit to stack copying for stack growth.

We're carrying around a surprising amount of cruft from older schemes.
I am confident that precise stack scans and stack copying are here to stay.

Delete fallback code for when precise stack info is disabled.
Delete fallback code for when copying stacks is disabled.
Delete fallback code for when StackCopyAlways is disabled.
Delete Stktop chain - there is only one stack segment now.
Delete M.moreargp, M.moreargsize, M.moreframesize, M.cret.
Delete G.writenbuf (unrelated, just dead).
Delete runtime.lessstack, runtime.oldstack.
Delete many amd64 morestack variants.
Delete initialization of morestack frame/arg sizes (shortens split prologue!).

Replace G's stackguard/stackbase/stack0/stacksize/
syscallstack/syscallguard/forkstackguard with simple stack
bounds (lo, hi).

Update liblink, runtime/cgo for adjustments to G.

LGTM=khr
R=khr, bradfitz
CC=golang-codereviews, iant, r
https://golang.org/cl/137410043
2014-09-09 13:39:57 -04:00
Russ Cox c007ce824d build: move package sources from src/pkg to src
Preparation was in CL 134570043.
This CL contains only the effect of 'hg mv src/pkg/* src'.
For more about the move, see golang.org/s/go14nopkg.
2014-09-08 00:08:51 -04:00