The current wasm write barrier gets the "deletion" part of the
barrier wrong. It correctly greys the new value
of the pointer, but rather than also greying the old value of the
pointer, it greys the object containing the slot (which, since the old
value was just overwritten, is not going to contain the old value).
This can lead to unmarked, reachable objects.
Often, this is masked by other marking activity, but one specific
sequence that can lead to an unmarked object because of this bug is:
1. Initially, GC is off, object A is reachable from just one pointer
in the heap.
2. GC starts and scans the stack of goroutine G.
3. G copies the pointer to A onto its stack and overwrites the
pointer to A in the heap. (Now A is reachable only from G's stack.)
4. GC finishes while A is still reachable from G's stack.
With a functioning deletion barrier, step 3 causes A to be greyed.
Without a functioning deletion barrier, nothing causes A to be greyed,
so A will be freed even though it's still reachable from G's stack.
This CL fixes the wasm write barrier.
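For illustration, a toy model of the intended barrier (a sketch only;
object, shade, and writePointer are invented for the example and are
not the runtime's code):

    package main

    type object struct{ marked bool }

    // shade greys an object so the collector won't free it.
    func shade(o *object) {
        if o != nil {
            o.marked = true
        }
    }

    // writePointer sketches a hybrid barrier: grey the old value (the
    // "deletion" half) and the new value (the "insertion" half) before
    // overwriting the slot. The wasm bug greyed the slot's containing
    // object instead of the old value.
    func writePointer(slot **object, val *object) {
        shade(*slot) // deletion barrier: the old value stays marked
        shade(val)   // insertion barrier: the new value is marked
        *slot = val
    }

    func main() {
        a := new(object)
        heapSlot := &a
        writePointer(heapSlot, nil) // step 3 above: a is greyed anyway
    }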
Fixes #30873.
Change-Id: I8a74ee517facd3aa9ad606e5424bcf8f0d78e754
Reviewed-on: https://go-review.googlesource.com/c/go/+/167743
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
(cherry picked from commit d9db9e32e9)
Reviewed-on: https://go-review.googlesource.com/c/go/+/167745
Reviewed-by: Katie Hockman <katie@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This reverts commit 731ebf4d87.
Reason for revert: It's not working for large directories.
Change-Id: Ie0f88e0ed1d36c4ea4f32d2acd4e223bd8229ca0
Reviewed-on: https://go-review.googlesource.com/c/go/+/170882
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Andrew Bonventre <andybons@golang.org>
Getdirentries is implemented with the __getdirentries64 function
in libSystem.dylib. That function works, but it's on Apple's
can't-be-used-in-an-app-store-application list.
Implement Getdirentries using the underlying fdopendir/readdir_r/closedir.
The simulation isn't faithful, and could be slow, but it should handle
common cases.
Don't use Getdirentries in the stdlib, use fdopendir/readdir_r/closedir
instead (via (*os.File).readdirnames).
(Incorporates CL 170837 and CL 170698, which were small fixes to the
original tip CL.)
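For reference, user code gets the safe behavior through the os
package; a minimal example (the directory choice is arbitrary):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open(".")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // On darwin this is now backed by fdopendir/readdir_r/closedir
        // rather than the private __getdirentries64 symbol.
        names, err := f.Readdirnames(-1)
        if err != nil {
            log.Fatal(err)
        }
        for _, name := range names {
            fmt.Println(name)
        }
    }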
Fixes #31244
Update #28984
RELNOTE=yes
Change-Id: Ia6b5d003e5bfe43ba54b1e1d9cfa792cc6511717
Reviewed-on: https://go-review.googlesource.com/c/go/+/168479
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
(cherry picked from commit 9da6530faa)
Reviewed-on: https://go-review.googlesource.com/c/go/+/170640
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
While many other call sites have been moved to the proper
higher-level system loading, these areas were left out; moving them
prevents DLL directory injection attacks. This covers both the
runtime load calls (which previously used LoadLibrary) and the
implicitly linked imports via cgo_import_dynamic, which we move to
our LoadLibraryEx. The goal is to load only kernel32.dll loosely and
to load all others strictly. Meanwhile we make sure that we never
fall back to insecure loading on older or unpatched systems.
This is CVE-2019-9634.
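As a sketch of the same policy in application code (this uses
golang.org/x/sys/windows, not the runtime's internal loader, and the
DLL name is only an example):

    //go:build windows

    package main

    import (
        "log"

        "golang.org/x/sys/windows"
    )

    func main() {
        // LOAD_LIBRARY_SEARCH_SYSTEM32 restricts the search to the
        // system directory, so a DLL planted in the current working
        // directory can't be injected.
        h, err := windows.LoadLibraryEx("ws2_32.dll", 0,
            windows.LOAD_LIBRARY_SEARCH_SYSTEM32)
        if err != nil {
            log.Fatal(err)
        }
        defer windows.FreeLibrary(h)
    }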
Fixes #30666
Updates #14959
Updates #28978
Updates #30642
Change-Id: I401a13ed8db248ab1bb5039bf2d31915cac72b93
Reviewed-on: https://go-review.googlesource.com/c/go/+/165798
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
(cherry picked from commit 9b6e9f0c8c)
Reviewed-on: https://go-review.googlesource.com/c/go/+/168339
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Andrew Bonventre <andybons@golang.org>
With stack objects, stack scanning walks a goroutine's defers with
tracebackdefers, but tracebackdefers doesn't include the defer's
func value itself, which could be a stack-allocated closure. Scan
it explicitly.
Alternatively, we can change tracebackdefers to include the func
value, which in turn needs to change the type of stkframe.
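The explicit scan (the approach taken here) is roughly this shape
(simplified from the CL):

    // In scanstack: tracebackdefers above does not scan the func
    // value, which could be a stack-allocated closure; scan the
    // one-pointer slot explicitly.
    for d := gp._defer; d != nil; d = d.link {
        if d.fn != nil {
            scanblock(uintptr(unsafe.Pointer(&d.fn)), sys.PtrSize,
                &oneptrmask[0], gcw, &state)
        }
    }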
Updates #30453.
Fixes #30470.
Change-Id: I55a6e43264d6952ab2fa5c638bebb89fdc410e2b
Reviewed-on: https://go-review.googlesource.com/c/164118
Reviewed-by: Keith Randall <khr@golang.org>
(cherry picked from commit 4f4c2a79d4)
Reviewed-on: https://go-review.googlesource.com/c/164629
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In runtime.gopanic, the _panic object p is stack allocated and
referenced from gp._panic. With stack objects, p's stack slot is
dead by the time preprintpanics runs. gp._panic points to p, but
stack scan doesn't look at gp. Heap scan of gp does look at
gp._panic, but it stops and ignores the pointer as it points to
the stack. So whatever p points to may be collected and clobbered.
We need to scan gp._panic explicitly during stack scan.
To test it reliably, we introduce a GODEBUG mode "clobberfree",
which clobbers the memory content when the GC frees an object.
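For example, a program like the following, run with
GODEBUG=clobberfree=1, makes the bug deterministic: if the GC wrongly
frees the panic value, the clobbered memory is caught when
preprintpanics calls its Error method (the program is illustrative):

    package main

    type myError struct{ msg string }

    func (e *myError) Error() string { return "error: " + e.msg }

    func main() {
        // The *myError is reachable only through gp._panic while the
        // runtime prints the panic.
        panic(&myError{"boom"})
    }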
Fixes #30150.
Change-Id: I11128298f03a89f817faa221421a9d332b41dced
Reviewed-on: https://go-review.googlesource.com/c/161778
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
(cherry picked from commit af8f4062c2)
Reviewed-on: https://go-review.googlesource.com/c/162358
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
When scavenging small amounts, it's possible we over-scavenge by a
significant margin since we choose to scavenge the largest spans
first. This over-scavenging is never accounted for.
With this change, we add a scavenge credit pool, similar to the
reclaim credit pool. Any time scavenging triggered by RSS growth
starts up, it first checks whether it can cash in some credit. If,
after spending all the credit, it still needs to scavenge, then any
extra scavenging it does is added back into the credit pool.
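A toy model of the credit flow (all names are invented for the
sketch; spans here are a fixed 64KiB):

    package main

    import "fmt"

    var scavengeCredit uint64 // bytes over-scavenged earlier

    // releaseLargestSpans stands in for the real span-release
    // machinery: whole spans are released, so it may return more
    // than asked for.
    func releaseLargestSpans(need uint64) uint64 {
        const span = 64 << 10
        return (need + span - 1) / span * span
    }

    // scavengeBytes returns `need` bytes to the OS, spending credit
    // first and banking any over-scavenge for next time.
    func scavengeBytes(need uint64) {
        if scavengeCredit >= need {
            scavengeCredit -= need
            return
        }
        need -= scavengeCredit
        scavengeCredit = 0
        if released := releaseLargestSpans(need); released > need {
            scavengeCredit += released - need
        }
    }

    func main() {
        scavengeBytes(100 << 10)
        fmt.Println("credit:", scavengeCredit) // 28672: banked
        scavengeBytes(20 << 10)
        fmt.Println("credit:", scavengeCredit) // 8192: spent from credit
    }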
This change mitigates the performance impact of golang.org/cl/159500 on
the Garbage benchmark. On Go1 it suggests some improvements, but most of
that is within the realm of noise (Revcomp seems very sensitive to
GC-related changes, both positively and negatively).
Garbage: https://perf.golang.org/search?q=upload:20190131.5
Go1: https://perf.golang.org/search?q=upload:20190131.4
Performance change with both changes:
Garbage: https://perf.golang.org/search?q=upload:20190131.7
Go1: https://perf.golang.org/search?q=upload:20190131.6
Change-Id: I87bd3c183e71656fdafef94714194b9fdbb77aa2
Reviewed-on: https://go-review.googlesource.com/c/160297
Reviewed-by: Austin Clements <austin@google.com>
Because scavenged and unscavenged spans no longer coalesce, memory that
is freed no longer has a high likelihood of being re-scavenged. As a
result, if an application is allocating at a fast rate, it may
allocate quickly enough to undo all the scavenging work performed by
the runtime's current scavenging mechanisms. This behavior is
exacerbated by the
global best-fit allocation policy the runtime uses, since scavenged
spans are just as likely to be chosen as unscavenged spans on average.
To remedy that, we treat each allocation of scavenged space as a heap
growth, and scavenge other memory to make up for the allocation.
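Sketched (names invented for illustration):

    package main

    import "fmt"

    // scavengeElsewhere stands in for the runtime's span scavenger.
    func scavengeElsewhere(npages uintptr) {
        fmt.Println("scavenging", npages, "pages to offset RSS growth")
    }

    // allocSpan sketches the policy: allocating out of scavenged
    // memory grows the RSS just like a heap growth, so an equal
    // amount is scavenged elsewhere to compensate.
    func allocSpan(npages uintptr, fromScavenged bool) {
        if fromScavenged {
            scavengeElsewhere(npages)
        }
    }

    func main() { allocSpan(4, true) }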
This change makes performance of the runtime slightly worse, as now
we're scavenging more often during allocation. The regression is
particularly obvious with the garbage benchmark (3%) but most of the Go1
benchmarks are within the margin of noise. A follow-up change should
help.
Garbage: https://perf.golang.org/search?q=upload:20190131.3
Go1: https://perf.golang.org/search?q=upload:20190131.2
Updates #14045.
Change-Id: I44a7e6586eca33b5f97b6d40418db53a8a7ae715
Reviewed-on: https://go-review.googlesource.com/c/159500
Reviewed-by: Austin Clements <austin@google.com>
Remove an unnecessary check in the heap sampling code that forced
sampling of all heap allocations larger than the sampling rate.
Samples need to follow a Poisson process so that they can be
correctly unsampled. Maintain a check for MemProfileRate==1 to
provide a mechanism for full sampling, as documented in
https://golang.org/pkg/runtime/#pkg-variables.
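The shape of the sampling is as follows (a sketch using math/rand;
the runtime has its own RNG and constant):

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // mean bytes between samples (runtime.MemProfileRate's default).
    const rate = 512 * 1024

    // nextSample draws the distance to the next sampled byte from an
    // exponential distribution, making sampling a Poisson process.
    // Even allocations larger than rate are sampled probabilistically,
    // which is what lets profiles be unsampled (scaled) correctly.
    func nextSample() int {
        return int(-math.Log(1-rand.Float64()) * rate)
    }

    func main() {
        fmt.Println("next sample after", nextSample(), "bytes")
    }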
Additional testing for this change is on cl/129117.
Fixes #26618
Change-Id: I7802bde2afc655cf42cffac34af9bafeb3361957
GitHub-Last-Rev: 471f747af8
GitHub-Pull-Request: golang/go#29791
Reviewed-on: https://go-review.googlesource.com/c/158337
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
As a result of changes earlier in Go 1.12, the scavenger became much
more aggressive. In particular, when scavenged and unscavenged spans
coalesced, they would always become scavenged. This resulted in most
spans becoming scavenged over time. While this is good for keeping the
RSS of the program low, it also causes many more undue page faults and
many more calls to madvise.
For most applications, the impact of this was negligible. But for
applications that repeatedly grow and shrink the heap by large amounts,
the overhead can be significant. The overhead was especially obvious on
older versions of Linux where MADV_FREE isn't available and
MADV_DONTNEED must be used.
This change makes it so that scavenged spans will never coalesce with
unscavenged spans. This results in fewer page faults overall. Aside
from this, the expected impact of this change is more heap growths
on average, as span allocations will be less likely to be fulfilled
from existing free spans. To
mitigate this slightly, this change also coalesces spans eagerly after
scavenging, to at least ensure that all scavenged spans and all
unscavenged spans are coalesced with each other.
Also, this change adds additional logic in the case where two adjacent
spans cannot coalesce. In this case, on platforms where the physical
page size is larger than the runtime's page size, we realign the
boundary between the two adjacent spans to a physical page boundary. The
advantage of this approach is that "unscavengable" spans, that is, spans
which cannot be scavenged because they don't cover at least a single
physical page, are grown to a size where they have a higher likelihood of
being discovered by the runtime's scavenging mechanisms when they border
a scavenged span. This helps prevent the runtime from accruing pockets
of "unscavengable" memory in between scavenged spans, preventing them
from coalescing.
We specifically choose to apply this logic to all spans because it
simplifies the code, even though it isn't strictly necessary. The
expectation is that this change will result in a slight loss in
performance on platforms where the physical page size is larger than the
runtime page size.
Update #14045.
Change-Id: I64fd43eac1d6de6f51d7a2ecb72670f10bb12589
Reviewed-on: https://go-review.googlesource.com/c/158078
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently the code surrounding coalescing is duplicated between
merging with the span before the one being coalesced and merging
with the span after it. This change factors out the shared portions
of these codepaths into a local closure that acts as a helper.
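A toy of the factored shape (invented names; adjacency and other
checks omitted):

    package main

    import "fmt"

    type span struct {
        base, npages uintptr
        scavenged    bool
    }

    // coalesce merges s with its free neighbors. The merge logic is
    // written once, in a closure, instead of being duplicated for the
    // before and after cases.
    func coalesce(s, before, after *span) {
        merge := func(other *span) {
            if other == nil || other.scavenged != s.scavenged {
                return // scavenged and unscavenged spans don't mix
            }
            if other.base < s.base {
                s.base = other.base
            }
            s.npages += other.npages
        }
        merge(before)
        merge(after)
    }

    func main() {
        s := &span{base: 0x2000, npages: 2}
        coalesce(s, &span{base: 0x1000, npages: 1},
            &span{base: 0x4000, npages: 3})
        fmt.Printf("base=%#x npages=%d\n", s.base, s.npages)
    }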
Change-Id: I7919fbed3f9a833eafb324a21a4beaa81f2eaa91
Reviewed-on: https://go-review.googlesource.com/c/158077
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The coalescing process is complex and in a follow-up change we'll need
to do it in more than one place, so this change factors out the
coalescing code in freeSpanLocked into a method on mheap.
Change-Id: Ia266b6cb1157c1b8d3d8a4287b42fbcc032bbf3a
Reviewed-on: https://go-review.googlesource.com/c/157838
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reuse the strict mechanism from FileLine for FuncForPC, so we don't
crash when asking the pcln table about bad pcs.
Fixes #29735
Change-Id: Iaffb32498b8586ecf4eae03823e8aecef841aa68
Reviewed-on: https://go-review.googlesource.com/c/157799
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change makes mTreap's iterator type, treapIter, bidirectional
instead of unidirectional. This change helps support moving the find
operation on a treap to return an iterator instead of a treapNode, in
order to hide the details of the treap when accessing elements.
For #28479.
Change-Id: I5dbea4fd4fb9bede6e81bfd089f2368886f98943
Reviewed-on: https://go-review.googlesource.com/c/156918
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This commit allows cross-compiling to aix/ppc64. The nosplit limit
must be twice as large as on other platforms because of AIX syscalls.
The stack limit, especially stackGuardMultiplier, is set by cmd/dist
during bootstrap and doesn't depend on the target GOOS/GOARCH.
Fixes #29572
Change-Id: Id51e38885e1978d981aa9e14972eaec17294322e
Reviewed-on: https://go-review.googlesource.com/c/157117
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Returning the innermost frame instead of the outermost
makes code that walks the results of runtime.Caller{,s}
still work correctly in the presence of mid-stack inlining.
Fixes #29582
Change-Id: I2392e3dd5636eb8c6f58620a61cef2194fe660a7
Reviewed-on: https://go-review.googlesource.com/c/156364
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In 1.11 we stored "return addresses" in the result of runtime.Callers.
I changed that behavior in CL 152537 to store an address in the call
instruction itself. This CL reverts that part of 152537.
The change in 152537 was made because we now store pcs of inline marks
in the result of runtime.Callers as well. This CL will now store the
address of the inline mark + 1 in the results of runtime.Callers, so
that the subsequent -1 done in CallersFrames will pick out the correct
inline mark instruction.
This CL means that the results of runtime.Callers can be passed to
runtime.FuncForPC as they were before. There are a bunch of packages
in the wild that take the results of runtime.Callers, subtract 1, and
then call FuncForPC. This CL keeps that pattern working as it did in
1.11.
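The preserved pattern looks like this in user code:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        pcs := make([]uintptr, 16)
        n := runtime.Callers(1, pcs)
        for _, pc := range pcs[:n] {
            // The classic pattern: subtract 1 to land inside the call
            // (or inline mark), then resolve the function.
            if fn := runtime.FuncForPC(pc - 1); fn != nil {
                fmt.Println(fn.Name())
            }
        }
    }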
The changes to runtime/pprof in this CL are exactly a revert of the
changes to that package in 152537 (except the locForPC comment).
Update #29582
Change-Id: I04d232000fb482f0f0ff6277f8d7b9c72e97eb48
Reviewed-on: https://go-review.googlesource.com/c/156657
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The in-tree GDB is too old (6.1.1) on all the builders except the
FreeBSD 12.0 one, where it was removed from the base system.
Update #29508
Change-Id: Ib6091cd86440ea005f3f903549a0223a96621a6f
Reviewed-on: https://go-review.googlesource.com/c/156717
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
With gccgo, if a profiling signal arrives at certain points during
traceback, it may crash or hang. The fix is CL 156037 and
CL 156038. This CL adds a test.
Updates #29448.
Change-Id: Idb36af176b4865b8fb31a85cad185ed4c07ade0c
Reviewed-on: https://go-review.googlesource.com/c/156018
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We still don't understand what's causing there to be remaining GC work
when we enter mark termination, but in order to move forward on this
issue, this CL implements a work-around for the problem.
If debugCachedWork is false, this CL does a second check for remaining
GC work as soon as it stops the world for mark termination. If it
finds any work, it starts the world again and re-enters concurrent
mark. This will increase STW time by a small amount proportional to
GOMAXPROCS, but fixes a serious correctness issue.
This works around #27993.
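The shape of the work-around (a toy, not the runtime's code):

    package main

    import "fmt"

    var pending = 2 // contrive one round of late-found work

    func stopTheWorld()  {}
    func startTheWorld() {}

    // foundRemainingWork stands in for re-checking all Ps' gcWork
    // caches after the world is stopped.
    func foundRemainingWork() bool { pending--; return pending > 0 }

    func main() {
        restarts := 0
    top:
        stopTheWorld()
        if foundRemainingWork() {
            // Late work was found after STW: restart the world and
            // re-enter concurrent mark instead of terminating with
            // work still queued.
            startTheWorld()
            restarts++
            goto top
        }
        fmt.Println("mark terminated after", restarts, "restart(s)")
    }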
Change-Id: Ia23b85dd6c792ee8d623428bd1a3115631e387b8
Reviewed-on: https://go-review.googlesource.com/c/156140
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
As a followon to CL 152537, modify the panic-printing traceback
to also handle mid-stack inlining correctly.
Also declare -fm functions (aka method functions) as wrappers, so that
they get elided during traceback. This fixes part 2 of #26839.
Fixes #28640
Fixes #24488
Update #26839
Change-Id: I1c535a9b87a9a1ea699621be1e6526877b696c21
Reviewed-on: https://go-review.googlesource.com/c/153477
Reviewed-by: David Chase <drchase@google.com>
Use the length of the bitmap to decide how much to pass to the
write barrier, not the total length of the arguments.
The test needs enough arguments so that two distinct bitmaps
get interpreted as a single longer bitmap.
Update #29362
Change-Id: I78f3f7f9ec89c2ad4678f0c52d3d3def9cac8e72
Reviewed-on: https://go-review.googlesource.com/c/156123
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
After CL 31455, "go fun(n)" may put "n" into the write barrier
buffer even when there are no pointers in fun's arguments.
Fixes #29362
Change-Id: Icfa42b8759ce8ad9267dcb3859c626feb6fda381
Reviewed-on: https://go-review.googlesource.com/c/155779
Reviewed-by: Keith Randall <khr@golang.org>
In the twoNonZero function in hash_test, the buffer is redundantly sliced with [:] three times. This change removes those slice expressions.
Change-Id: I0701d0c810b4f3e267f80133a0dcdb4ed81fe356
Reviewed-on: https://go-review.googlesource.com/c/156138
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently it's possible for the runtime to deadlock if checkPut is
called in a non-preemptible context. In this case, checkPut may spin,
so it won't leave the non-preemptible context, but the thread running
gcMarkDone needs to preempt all of the goroutines before it can
release the checkPut spin loops.
Fix this by returning from checkPut if it's called under any of the
conditions that would prevent gcMarkDone from preempting it. In this
case, it leaves a note behind that this happened; if the runtime does
later detect left-over work it can at least indicate that it was
unable to catch it in the act.
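A toy of the bail-out (invented names):

    package main

    import "fmt"

    var skippedCheck bool // the note left behind for diagnostics

    // checkPut sketches the fix: if the caller can't be preempted,
    // spinning would deadlock against gcMarkDone, so return and
    // record that a put went unverified.
    func checkPut(preemptible bool) {
        if !preemptible {
            skippedCheck = true
            return
        }
        // In the runtime, the preemptible path may spin here until
        // gcMarkDone finishes its checks.
    }

    func main() {
        checkPut(false)
        fmt.Println("skipped a check:", skippedCheck)
    }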
For #27993.
Updates #29385 (may fix it).
Change-Id: Ic71c10701229febb4ddf8c104fb10e06d84b122e
Reviewed-on: https://go-review.googlesource.com/c/156017
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
A notetsleepg may get stuck if its timeout callback gets invoked
exactly on its deadline due to the low precision of nanotime. This change
fixes the comparison so it also resolves the note if the timestamps are
equal.
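The comparison fix, as a toy:

    package main

    import "fmt"

    // expired reports whether a note's deadline has been reached.
    // With `<` instead of `<=`, a callback firing at exactly the
    // deadline never resolved the note.
    func expired(deadline, now int64) bool {
        return deadline <= now
    }

    func main() {
        fmt.Println(expired(100, 100)) // true; was false with `<`
    }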
Updates #28975
Change-Id: I045d2f48b7f41cea0caec19b56876e9de01dcd6c
Reviewed-on: https://go-review.googlesource.com/c/153558
Run-TryBot: Richard Musiol <neelance@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Use EnumTimeFormatsEx() to test panics across callback boundaries
instead of EnumWindows(). EnumWindows() is incompatible with Go's panic
unwinding mechanism. See the associated issue for more information.
Updates #26148
Change-Id: If1dd70885d9c418b980b6827942cb1fd16c73803
Reviewed-on: https://go-review.googlesource.com/c/155923
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reorg map flags a bit so we don't need any extra space for the extra flag.
Fixes #23734
Change-Id: I436812156240ae90de53d0943fe1aabf3ea37417
Reviewed-on: https://go-review.googlesource.com/c/155918
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Work involved in getting a stack trace is divided between
runtime.Callers and runtime.CallersFrames.
Before this CL, runtime.Callers returns a pc per runtime frame.
runtime.CallersFrames is responsible for expanding a runtime frame
into potentially multiple user frames.
After this CL, runtime.Callers returns a pc per user frame.
runtime.CallersFrames just maps those to user frame info.
Entries in the result of runtime.Callers are now pcs
of the calls (or of the inline marks), not of the instruction
just after the call.
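A typical consumer after this CL:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        pcs := make([]uintptr, 32)
        n := runtime.Callers(1, pcs)
        // Each pc corresponds to a user frame; CallersFrames resolves
        // function, file, and line, including frames that were inlined
        // away.
        frames := runtime.CallersFrames(pcs[:n])
        for {
            f, more := frames.Next()
            fmt.Println(f.Function, f.File, f.Line)
            if !more {
                break
            }
        }
    }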
Fixes #29007
Fixes #28640
Update #26320
Change-Id: I1c9567596ff73dc73271311005097a9188c3406f
Reviewed-on: https://go-review.googlesource.com/c/152537
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This change splits a testprog out of TestLockOSThreadExit and makes it
its own test. Then, this change makes the testprog exit prematurely with
a special message if unshare fails with EPERM because not all of the
builders allow the user to call the unshare syscall.
Also, do some minor cleanup on the TestLockOSThread* tests.
Fixes #29366.
Change-Id: Id8a9f6c4b16e26af92ed2916b90b0249ba226dbe
Reviewed-on: https://go-review.googlesource.com/c/155437
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When a locked M had its G exit without calling UnlockOSThread, its
lockedExt was getting cleared. Unfortunately, this meant that
during P handoff, if a new M was started, it might get forked (on
most OSs besides Windows) from the locked M, which could have kernel
state attached to it.
To solve this, just don't clear lockedExt. At the point where the
locked M has its G exit, it will also exit in accordance with the
LockOSThread API. So, we can safely assume that its lockedExt state
will no longer be used. For the case of the main thread where it just
gets wedged instead of exiting, it's probably better for it to keep
the locked marker since it more accurately represents its state.
Fixes #28979.
Change-Id: I7d3d71dd65bcb873e9758086d2cbcb9a06429b0f
Reviewed-on: https://go-review.googlesource.com/c/153078
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
This change disables the test TestArenaCollision on Darwin in race mode
to deal with the fact that Darwin 10.10 must use MAP_FIXED in race mode
to ensure we retain our heap in a particular portion of the address
space which the race detector needs. The test specifically checks to
make sure a manually mapped region's space isn't re-used, which is
definitely possible with MAP_FIXED because it replaces whatever mapping
already exists at a given address.
This change then also makes it so that MAP_FIXED is only used in race
mode and on Darwin, not all BSDs, because using MAP_FIXED breaks this
test for FreeBSD in addition to Darwin.
Updates #26475.
Fixes #29340.
Change-Id: I1c59349408ccd7eeb30c4bf2593f48316b23ab2f
Reviewed-on: https://go-review.googlesource.com/c/155097
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This reverts change https://golang.org/cl/154758.
Restore the previous implementations of nanotime and time.now, which
are sufficiently high resolution and more efficient than
QueryPerformanceCounter. The intent of the change was to improve
resolution of tracing timestamps, but the change was overly broad
as it was only necessary to fix cputicks(). cputicks() is fixed in
a subsequent change.
Updates #26148
Change-Id: Ib9883d02fe1af2cc4940e866d8f6dc7622d47781
Reviewed-on: https://go-review.googlesource.com/c/154761
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
startpanic_m may legitimately be called in a context where there's a
valid G, a valid M, but no P, for example in a signal handler which
panics. Currently, startpanic_m has write barriers enabled because
write barriers are permitted if a G's M is dying. However, all the
current write barrier implementations assume the current G has a P.
Therefore, in this change we disable write barriers in startpanic_m,
remove the only pointer write which clears g.writebuf, and fix up gwrite
to ignore the writebuf if the current G's M is dying, rather than
relying on it being nil in the dying case.
Fixes #26575.
Change-Id: I9b29e6b9edf00d8e99ffc71770c287142ebae086
Reviewed-on: https://go-review.googlesource.com/c/154837
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The previous implementation of nanotime and time.now used a time source
that was updated on the system clock tick, which has a maximum
resolution of about 1ms. On 386 and amd64, this time source maps to
the system performance counter, and so has much higher resolution.
On ARM, use QueryPerformanceCounter() to get a high resolution timestamp.
Updates #26148
Change-Id: I1abc99baf927a95b472ac05020a7788626c71d08
Reviewed-on: https://go-review.googlesource.com/c/154758
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change makes it so that reserving more of the address space for the
heap calls mmap with MAP_FIXED in race mode. Race mode requires certain
guarantees on where the heap is located in the address space, and on
Darwin 10.10 it appears that the kernel may end up ignoring the hint
quite often (#26475). Using MAP_FIXED is relatively OK in race mode
because nothing else should be mapped in the memory region provided by
the initial hints.
Fixes #26475.
Change-Id: Id7ac1534ee74f6de491bc04441f27dbda09f0285
Reviewed-on: https://go-review.googlesource.com/c/153897
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This commit fixes the backtrace when a crash or an exit signal is
received during a C syscall on aix/ppc64.
This is similar to the Solaris, Darwin, and Windows implementations.
Change-Id: I6040c0b1577a9f5b298f58bd4ee6556258a135ef
Reviewed-on: https://go-review.googlesource.com/c/154718
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently, we flush the write barrier buffer on every write barrier
once throwOnGCWork is set, but not during the mark completion
algorithm itself. As seen in recent failures like
https://build.golang.org/log/317369853b803b4ee762b27653f367e1aa445ac1
by the time we actually catch a late gcWork put, the write barrier
buffer is full-size again.
As a result, we're probably not catching the actual problematic write
barrier, which is probably somewhere in the buffer.
Fix this by using the gcWork pause generation to also keep the write
barrier buffer small between when mark completion flushes it and when
mark completion is done.
For #27993.
Change-Id: I77618169441d42a7d562fb2a998cfaa89891edb2
Reviewed-on: https://go-review.googlesource.com/c/154638
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Add support for cgo on openbsd/arm. The gcc shipped with the OpenBSD
armv7 base system is old/inadequate, so use clang by default.
Change-Id: I945a26d369378952d357727718e69249411e1127
Reviewed-on: https://go-review.googlesource.com/c/154381
Run-TryBot: Joel Sing <joel@sing.id.au>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change modifies the behavior of span allocations to no longer
prefer the free treap over the scavenged treap.
While there is an additional cost to allocating out of the scavenged
treap, the current behavior of preferring the unscavenged spans can
lead to unbounded growth of a program's virtual memory footprint.
In small programs (low # of Ps, low resident set size, low allocation
rate) this behavior isn't really apparent and is difficult to
reproduce.
However, in relatively large, long-running programs we see this
unbounded growth in free spans, and an unbounded number of heap
growths.
It still remains unclear how this policy change actually ends up
increasing the number of heap growths over time, but switching the
policy back to best-fit does indeed solve the problem.
Change-Id: Ibb88d24f9ef6766baaa7f12b411974cc03341e7b
Reviewed-on: https://go-review.googlesource.com/c/148979
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change adds the treapIter type which provides an iterator
abstraction for walking over an mTreap. In particular, the mTreap type
now has iter() and rev() for iterating both forwards (smallest to
largest) and backwards (largest to smallest). It also has an erase()
method for erasing elements at the iterator's current position.
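Inside the runtime, walks over the treap then take this shape (a
sketch using the CL's method names; not compilable outside the
runtime):

    // Forwards, from smallest span to largest.
    for i := h.free.iter(); i.valid(); i = i.next() {
        s := i.span()
        // ... examine s; h.free.erase(i) removes the element at i ...
        _ = s
    }

    // Backwards, from largest span to smallest.
    for i := h.free.rev(); i.valid(); i = i.prev() {
        // ...
    }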
For #28479.
While the expectation is that this change will slow down Go programs,
the impact on Go1 and Garbage is negligible.
Go1: https://perf.golang.org/search?q=upload:20181214.6
Garbage: https://perf.golang.org/search?q=upload:20181214.11
Change-Id: I60dbebbbe73cbbe7b78d45d2093cec12cc0bc649
Reviewed-on: https://go-review.googlesource.com/c/151537
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>