On the write side, g.selectDone has been converted
from non-atomic to atomic access.
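Roughly, the shape of the change as a sketch (the spellings below show how the field and its accesses look after the conversion; they are illustrative, not a quote from the diff):

    // Before: a plain field, updated through the atomic package's helpers.
    //     selectDone uint32
    //     atomic.Cas(&sg.g.selectDone, 0, 1)
    //
    // After: an atomic type, so every access is explicitly atomic.
    //     selectDone atomic.Uint32
    //     sg.g.selectDone.CompareAndSwap(0, 1)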
For #53821.
Change-Id: Iac46bc6acce7eed51dfd990285dd57f0d58b4ae2
Reviewed-on: https://go-review.googlesource.com/c/go/+/425414
Run-TryBot: hopehook <hopehook@qq.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Change-Id: I30c125be6cb321aa03ea827bd11c3169087e3d4c
Reviewed-on: https://go-review.googlesource.com/c/go/+/420314
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
[This CL is part of a sequence implementing the proposal #51082.
The design doc is at https://go.dev/s/godocfmt-design.]
Run the updated gofmt, which reformats doc comments,
on the main repository. Vendored files are excluded.
For #51082.
Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407
Reviewed-on: https://go-review.googlesource.com/c/go/+/384268
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
A future change to gofmt will rewrite
// Doc comment.
//go:foo
to
// Doc comment.
//
//go:foo
Apply that change preemptively to all comments (not necessarily just doc comments).
For #51082.
Change-Id: Iffe0285418d1e79d34526af3520b415a12203ca9
Reviewed-on: https://go-review.googlesource.com/c/go/+/384260
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
A future change to gofmt will rewrite
// Doc comment.
//
func f()
to
// Doc comment.
func f()
Apply that change preemptively to all doc comments.
For #51082.
Change-Id: I4023e16cfb0729b64a8590f071cd92f17343081d
Reviewed-on: https://go-review.googlesource.com/c/go/+/384259
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
A run of lines that are indented with any number of spaces or tabs
format as a <pre> block. This commit fixes various doc comments
that format badly according to that (standard) rule.
For example, consider:
// - List item.
//   Second line.
// - Another item.
Because the - lines are unindented, this is actually two paragraphs
separated by a one-line <pre> block. This CL rewrites it to:
//  - List item.
//    Second line.
//  - Another item.
Today, that will format as a single <pre> block.
In a future release, we hope to format it as a bulleted list.
Various other minor fixes as well, all in preparation for reformatting.
For #51082.
Change-Id: I95cf06040d4186830e571cd50148be3bf8daf189
Reviewed-on: https://go-review.googlesource.com/c/go/+/384257
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
At this point all funcPC references are ABIInternal functions.
Replace with the intrinsics.
Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
The only difference between selectnbrecv and selectnbrecv2 is that the
latter sets the input pointer value from the second return value of
chanrecv. So by making selectnbrecv return both values from chanrecv,
we can get rid of selectnbrecv2; the compiler can now call only
selectnbrecv and generate simpler code.
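As a sketch of the result (the select compilation changes are omitted; the helper below matches the description above):

    // Before:
    //   func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool)
    //   func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool)
    //
    // After: a single helper that forwards both results of chanrecv.
    func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected, received bool) {
        return chanrecv(c, elem, false)
    }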
Change-Id: Ifaf6cf1314c4f47b06ed9606b1578319be808507
Reviewed-on: https://go-review.googlesource.com/c/go/+/292890
Trust: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
When c.elemsize==0 we call raceacquire() and racerelease()
as opposed to calling racereleaseacquire().
The reason for this change is that, when elemsize==0, we don't
allocate a full buffer for the channel. Instead of individual
buffer entries, the race detector uses c.buf as the only
buffer entry. This simplification prevents us from following the
memory model's happens-before rules implemented in racereleaseacquire().
So, instead of calling racereleaseacquire(), we accumulate
happens-before information in the synchronization object associated
with c.buf.
The functionality in this change is implemented in a new function
called racenotify().
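A sketch of the new helper, assuming the structure described above (the real code also needs a variant that attributes the race operations to another goroutine's g):

    func racenotify(c *hchan, idx uint, sg *sudog) {
        qp := chanbuf(c, idx) // with elemsize==0 this is always c.buf
        if c.elemsize == 0 {
            // No per-entry buffer: accumulate happens-before information
            // on the single synchronization object at c.buf.
            if sg == nil {
                raceacquire(qp)
                racerelease(qp)
            } else {
                raceacquireg(sg.g, qp)
                racereleaseg(sg.g, qp)
            }
        } else {
            if sg == nil {
                racereleaseacquire(qp)
            } else {
                racereleaseacquireg(sg.g, qp)
            }
        }
    }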
Fixes #42598
Change-Id: I75b92708633fdfde658dc52e06264e2171824e51
Reviewed-on: https://go-review.googlesource.com/c/go/+/271987
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Ian Lance Taylor <iant@golang.org>
In chansend() and chanrecv() of chan.go, the order of calls to
raceacquire() and racerelease() was swapped, which meant that the
code was not following the memory model "by the letter of the law."
Similarly for bufrecv and bufsend in select.go.
The memory model says:
- A send happens before the corresponding receive completes, and
- the kth receive on a channel with capacity C happens before the
  k+C'th send on that channel completes.
The operative word here is "completes." For example, a sender obtains
happens-before information on completion of the send-operation, which
means, after the sender has deposited its message onto the channel.
Similarly for receives.
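As a concrete illustration of the first rule (essentially the canonical example from the memory model document, not code from this change):

    var c = make(chan int, 10)
    var a string

    func f() {
        a = "hello, world"
        c <- 0 // this send happens before the receive below completes
    }

    func main() {
        go f()
        <-c
        print(a) // guaranteed to observe "hello, world"
    }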
If the order of raceacquire() and racerelease() is incorrect, the race
detector may fail to report some race conditions.
The fix is minimal from the point of view of Go. The fix does, however,
rely on a new function added to TSan:
https://reviews.llvm.org/D76322
This commit only affects execution when race detection is enabled.
Added two tests into `runtime/race/output_test.go`:
- `chanmm` tests for the issue addressed by this patch
- `mutex` is a test for inverted semaphores, which must not be broken
by this (or any other) patch
Fixes #37355
Change-Id: I5e886879ead2bd456a4b7dd1d17253641b767f63
Reviewed-on: https://go-review.googlesource.com/c/go/+/220419
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Currently activeStackChans is set before a goroutine blocks on a channel
operation in an unlockf passed to gopark. The trouble is that the
unlockf is called *after* the G's status is changed, and the G's status
is what is used by a concurrent mark worker (calling suspendG) to
determine that a G has successfully been suspended. In this window
between the status change and unlockf, the mark worker could try to
shrink the G's stack, and in particular observe that activeStackChans is
false. This observation will cause the mark worker to *not* synchronize
with concurrent channel operations when it should, and so updating
pointers in the sudog for the blocked goroutine (which may point to the
goroutine's stack) races with channel operations which may also
manipulate the pointer (read it, dereference it, update it, etc.).
Fix the problem by adding a new atomically-updated flag to the g struct
called parkingOnChan, which is non-zero in the race window above. Then,
in isShrinkStackSafe, check if parkingOnChan is zero. The race is
resolved like so:
* Blocking G sets parkingOnChan, then changes status in gopark.
* Mark worker successfully suspends blocking G.
* If the mark worker observes parkingOnChan is non-zero when checking
isShrinkStackSafe, then it's not safe to shrink (we're in the race
window).
* If the mark worker observes parkingOnChan as zero, then because
the mark worker observed the G status change, it can be sure that
gopark's unlockf completed, and gp.activeStackChans will be correct.
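A sketch of the resulting ordering (function and field names follow the description above; gopark details are elided):

    // Blocking goroutine, before parking:
    //   atomic.Store8(&gp.parkingOnChan, 1)
    //   gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanReceive, traceEvGoBlockRecv, 2)

    // unlockf run by gopark after the G's status has changed:
    func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool {
        gp.activeStackChans = true
        // From here on, anyone who suspends this G is guaranteed to observe
        // activeStackChans, so stack shrinking is safe again.
        atomic.Store8(&gp.parkingOnChan, 0)
        unlock((*mutex)(chanLock))
        return true
    }

    // isShrinkStackSafe gains the extra condition:
    //   atomic.Load8(&gp.parkingOnChan) == 0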
The risk of this change is low, since although it reduces the number of
places that stack shrinking is allowed, the window here is incredibly
small. Essentially, every place that it might crash now is replaced with
no shrink.
This change adds a test, but the race window is so small that it's hard
to trigger without a well-placed sleep in park_m. Also, this change
fixes stackGrowRecursive in proc_test.go to actually allocate a 128-byte
stack frame. It turns out the compiler was destructuring the "pad" field
and only allocating one uint64 on the stack.
Fixes#40641.
Change-Id: I7dfbe7d460f6972b8956116b137bc13bc24464e8
Reviewed-on: https://go-review.googlesource.com/c/go/+/247050
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
The current wakeup protocol for channel communications is that the
second goroutine sets gp.param to the sudog when a value is
successfully communicated over the channel, and to nil when the wakeup
is due to closing the channel.
Setting nil to indicate channel closure works okay for chansend and
chanrecv, because they're only communicating with one channel, so they
know it must be the channel that was closed. However, it means
selectgo has to re-poll all of the channels to figure out which one
was closed.
This commit adds a "success" field to sudog, and changes the wakeup
protocol to always set gp.param to sg, and to use sg.success to
indicate successful communication vs channel closure.
While here, this also reorganizes the chansend code slightly so that
the sudog is still released to the pool if the send blocks and then is
awoken because the channel closed.
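A sketch of the protocol after this change (names as described above):

    // Waker side: always hand back the sudog and record the outcome on it.
    //   sg.success = true            // closechan sets false instead
    //   gp.param = unsafe.Pointer(sg)
    //   goready(gp, skip+1)
    //
    // Woken side (chansend/chanrecv/selectgo), no re-polling required:
    //   success := mysg.success
    //   gp.param = nil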
Updates #40410.
Change-Id: I6cd9a20ebf9febe370a15af1b8afe24c5539efc6
Reviewed-on: https://go-review.googlesource.com/c/go/+/245019
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
I took some of the infrastructure from Austin's lock logging CR
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.
Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.
Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.
See the constants at the top of runtime/lockrank.go to show the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.
I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.
I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).
Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.
For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.
Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.
To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.
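As a rough sketch of how the ranking is wired up (the rank names and the partial-order contents below are illustrative; runtime/lockrank.go has the real list):

    // Most locks get their rank at initialization time:
    //   lockInit(&sched.lock, lockRankSched)
    //   lockInit(&allpLock, lockRankAllp)
    //
    // root.lock and notifyList.lock in runtime/sema.go pass the rank at the
    // call site instead:
    //   lockWithRank(&root.lock, lockRankRoot)

    // lockPartialOrder records, for each rank, which ranks may already be
    // held when a lock of that rank is acquired (contents illustrative):
    var lockPartialOrder [][]lockRank = [][]lockRank{
        lockRankSched: {lockRankSysmon, lockRankScavenge}, // and so on
    }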
Fixes #38029
Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When we copy a stack of a goroutine blocked in a channel operation, we
have to be very careful because other goroutines may be writing to
that goroutine's stack. To handle this, stack copying acquires the
locks for the channels a goroutine is waiting on.
One complication is that stack growth may happen while a goroutine
holds these locks, in which case stack copying must *not* acquire
these locks because that would self-deadlock.
Currently, stack growth never acquires these locks because stack
growth only happens when a goroutine is running, which means it's
either not blocking on a channel or it's holding the channel locks
already. Stack shrinking always acquires these locks because shrinking
happens asynchronously, so the goroutine is never running, so there
are either no locks or they've been released by the goroutine.
However, we're about to change when stack shrinking can happen, which
is going to break the current rules. Rather than find a new way to
derive whether to acquire these locks or not, this CL simply adds a
flag to the g struct that indicates that stack copying should acquire
channel locks. This flag is set while the goroutine is blocked on a
channel op.
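A sketch of the flag and how stack copying consults it (details are illustrative):

    type g struct {
        // ...
        // activeStackChans indicates that there are unlocked channels
        // pointing into this goroutine's stack. If true, stack copying
        // needs to acquire channel locks to protect these areas of the
        // stack.
        activeStackChans bool
    }

    // Set while the goroutine blocks on a channel op (in gopark's unlockf):
    //   gp.activeStackChans = true
    //
    // Consulted by copystack/shrinkstack: if set, acquire the locks of the
    // channels gp is waiting on before adjusting its sudogs.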
For #10958, #24543.
Change-Id: Ia2ac8831b1bfda98d39bb30285e144c4f7eaf9ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/172982
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Right now we generate hash functions for all types, just in case they
are used as map keys. That's a lot of wasted effort and binary size
for types which will never be used as a map key. Instead, generate
hash functions only for types that we know are map keys.
Just doing that is a bit too simple, since maps with an interface type
as a key might have to hash any concrete key type that implements that
interface. So for that case, implement hashing of such types at
runtime (instead of with generated code). It will be slower, but only
for maps with interface types as keys, and maybe only a bit slower as
the aeshash time probably dominates the dispatch time.
Reorg where we keep the equals and hash functions. Move the hash function
from the key type to the map type, saving a field in every non-map type.
That leaves only one function in the alg structure, so get rid of that and
just keep the equal function in the type descriptor itself.
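A sketch of the reorganization (struct layouts abridged; treat field names and order as illustrative):

    // Before: every type descriptor carried an alg with both functions.
    //   type typeAlg struct {
    //       hash  func(unsafe.Pointer, uintptr) uintptr
    //       equal func(unsafe.Pointer, unsafe.Pointer) bool
    //   }
    //
    // After: equal lives directly on the type descriptor, and only map types
    // carry a hasher, generated only for types actually used as map keys.
    type maptype struct {
        typ    _type
        key    *_type
        elem   *_type
        bucket *_type
        hasher func(unsafe.Pointer, uintptr) uintptr
        // ...
    }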
cmd/go now has 10 generated hash functions, instead of 504. Makes
cmd/go 1.0% smaller. Update #6853.
Speed on non-interface keys is unchanged. Speed on interface keys
is ~20% slower:
name                  old time/op  new time/op  delta
MapInterfaceString-8  23.0ns ±21%  27.6ns ±14%  +20.01%  (p=0.002 n=10+10)
MapInterfacePtr-8     19.4ns ±16%  23.7ns ± 7%  +22.48%  (p=0.000 n=10+8)
Change-Id: I7c2e42292a46b5d4e288aaec4029bdbb01089263
Reviewed-on: https://go-review.googlesource.com/c/go/+/191198
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Now the net package is back to no longer depending on unicode. And lock that in
with a test.
Fixes #30440
Change-Id: I18b89b02f7d96488783adc07308da990f505affd
Reviewed-on: https://go-review.googlesource.com/c/go/+/169137
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
We already have the ptrdata field in a type, which encodes exactly
the same information that kindNoPointers does.
My problem with kindNoPointers is that it often leads to
double-negative code like:
t.kind & kindNoPointers != 0
Much clearer is:
t.ptrdata == 0
Update #27167
Change-Id: I92307d7f018a6bbe3daca4a4abb4225e359349b1
Reviewed-on: https://go-review.googlesource.com/c/go/+/169157
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
I had been finding these over a year or so, but none were big enough
changes to warrant CLs. They're a handful now, so clean them all up in a
single commit.
The smaller bodies get a bit simpler, but most importantly, the larger
bodies get unindented.
Change-Id: I5707a6fee27d4c9ff9efd3d363af575d7a4bf2aa
Reviewed-on: https://go-review.googlesource.com/c/go/+/165340
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The previous CL introduced stack objects. This CL removes the old
ambiguously live liveness analysis. After this CL we're relying
on stack objects exclusively.
Update a bunch of liveness tests to reflect the new world.
Fixes #22350
Change-Id: I739b26e015882231011ce6bc1a7f426049e59f31
Reviewed-on: https://go-review.googlesource.com/c/134156
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The races reported between channel operations and len() or cap() of the
channel aren't really races, or at least they don't have any observable
effect. The spec is silent on whether these are actually races or not.
Fix this problem by not using the address of len (or of cap)
as the location where channel operations are recorded to occur.
Use a random other field of hchan for that.
I'm not 100% sure we should in fact fix this. Opinions welcome.
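The fix can be sketched as a small helper that channel operations use as their race location (shown only as an illustration of the approach):

    func (c *hchan) raceaddr() unsafe.Pointer {
        // Treat read-like and write-like operations on the channel as
        // happening at this address. Avoid using the addresses of qcount
        // or dataqsiz, because the len() and cap() builtins read those,
        // and we do not want them racing with operations like close().
        return unsafe.Pointer(&c.buf)
    }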
Fixes #27070
Change-Id: Ib4efd4b62e0d1ef32fa51e373035ef207a655084
Reviewed-on: https://go-review.googlesource.com/135698
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Every time I poke at #14921, the g.waitreason string
pointer writes show up.
They're not particularly important performance-wise,
but it'd be nice to clear the noise away.
And it does open up a few extra bytes in the g struct
for some future use.
This is a re-roll of CL 99078, which was rolled
back because of failures on s390x.
Those failures were apparently due to an old version of gdb.
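The change itself, which the text above only alludes to, replaces the string field with a small enumeration; roughly (constant names are illustrative):

    type waitReason uint8

    const (
        waitReasonZero        waitReason = iota // ""
        waitReasonChanReceive                   // "chan receive"
        // ...
    )

    // and in type g, the field becomes:
    //   waitreason waitReason // if status==Gwaiting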
Change-Id: Icc2c12f449b2934063fd61e272e06237625ed589
Reviewed-on: https://go-review.googlesource.com/111256
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Every time I poke at #14921, the g.waitreason string
pointer writes show up.
They're not particularly important performance-wise,
but it'd be nice to clear the noise away.
And it does open up a few extra bytes in the g struct
for some future use.
Change-Id: I7ffbd52fbc2a286931a2218038fda52ed6473cc9
Reviewed-on: https://go-review.googlesource.com/99078
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Now that we have memLimit, also having _MaxMem is a bit confusing.
Replace it with maxAlloc, which better conveys what it limits. We also
define maxAlloc slightly differently: since it's now clear that it
limits allocation size, we can account for a subtle difference between
32-bit and 64-bit.
Change-Id: Iac39048018cc0dae7f0919e25185fee4b3eed529
Reviewed-on: https://go-review.googlesource.com/85890
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Calls to writebarrierptr can simply be actual pointer writes. Calls to
writebarrierptr_prewrite need to go through the write barrier buffer.
Updates #22460.
Change-Id: I92cee4da98c5baa499f1977563757c76f95bf0ca
Reviewed-on: https://go-review.googlesource.com/92704
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Now that getcallerpc is a compiler intrinsic on x86 and non-x86
platforms don't need the argument, we can drop it.
Sadly, this doesn't let us remove any dummy arguments since all of
those cases also use getcallersp, which still takes the argument
pointer, but this is at least an improvement.
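Call sites shrink roughly as follows (sketch):

    // Before: both intrinsics took a dummy argument pointer.
    //   callerpc := getcallerpc(unsafe.Pointer(&typ))
    //   sp := getcallersp(unsafe.Pointer(&typ))
    //
    // After: getcallerpc needs no argument; getcallersp still does, for now.
    //   callerpc := getcallerpc()
    //   sp := getcallersp(unsafe.Pointer(&typ))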
Change-Id: I9c34a41cf2c18cba57f59938390bf9491efb22d2
Reviewed-on: https://go-review.googlesource.com/65474
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Use mallocgc instead of newarray to save some overhead, since
makechan already checks for _MaxMem constraints.
Flattens the if else construct that determines if buf and hchan struct
should be allocated in one mallocgc call and where buf should point to.
Uses maxSliceCap to avoid divisions similar to makeslice.
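Inside makechan, the allocation logic becomes roughly the following (a sketch written with the kind bits of that era; treat details as approximate):

    switch {
    case size == 0 || elem.size == 0:
        // Queue or element size is zero: only the hchan header is needed.
        c = (*hchan)(mallocgc(hchanSize, nil, true))
        // Race detector uses this location for synchronization.
        c.buf = unsafe.Pointer(c)
    case elem.kind&kindNoPointers != 0:
        // Elements do not contain pointers: allocate hchan and buf together.
        c = (*hchan)(mallocgc(hchanSize+uintptr(size)*elem.size, nil, true))
        c.buf = add(unsafe.Pointer(c), hchanSize)
    default:
        // Elements contain pointers.
        c = new(hchan)
        c.buf = mallocgc(uintptr(size)*elem.size, elem, true)
    }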
name                old time/op  new time/op  delta
MakeChan/Byte       82.0ns ± 8%  81.4ns ± 7%    ~     (p=0.643 n=10+10)
MakeChan/Int        97.9ns ± 2%  96.6ns ± 2%  -1.40%  (p=0.009 n=10+10)
MakeChan/Ptr         128ns ± 3%   120ns ± 1%  -6.63%  (p=0.000 n=10+10)
MakeChan/Struct/0   66.7ns ± 4%  66.4ns ± 2%    ~     (p=0.697 n=10+10)
MakeChan/Struct/32   136ns ± 1%   130ns ± 0%  -4.42%  (p=0.000 n=10+10)
MakeChan/Struct/40   150ns ± 1%   150ns ± 1%    ~     (p=0.725 n=10+10)
Change-Id: Ibb5675d0843a072aae2bfa58ecd39cf4cd926533
Reviewed-on: https://go-review.googlesource.com/55132
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Writing to selectdone on the stack of another goroutine meant a
pretty subtle dance between the select code and the stack copying
code. Instead move the selectdone variable into the g struct.
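A sketch of the shape after the move (names follow today's runtime and are meant as illustration; the CAS is how the winner of a select is chosen):

    type g struct {
        // ...
        selectDone uint32 // are we participating in a select and did someone win the race?
    }

    // Wakers race on the field in the g itself rather than on a word on the
    // selecting goroutine's stack, e.g. in (*waitq).dequeue:
    //   if sgp.isSelect && !atomic.Cas(&sgp.g.selectDone, 0, 1) {
    //       continue // another case already won this select
    //   }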
Change-Id: Id246aaf18077c625adef7ca2d62794afef1bdd1b
Reviewed-on: https://go-review.googlesource.com/53390
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The chanrecv funcs don't use the channel type parameter at all. The chansend ones do, but the
element type is now part of the hchan struct, which is already a
parameter.
hchan can be nil in chansend when sending to a nil channel, so when
instrumenting we must copy to the stack to be able to read the channel
type.
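Roughly, the signatures change as follows (sketch; everything else about the functions is unchanged):

    // Before:
    //   func chansend(t *chantype, c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
    //   func chanrecv(t *chantype, c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)
    //
    // After: the element type comes from the hchan itself.
    //   func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
    //   func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)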
name             old time/op  new time/op  delta
ChanUncontended  6.42µs ± 1%  6.22µs ± 0%  -3.06%  (p=0.000 n=19+18)
Initially found by github.com/mvdan/unparam.
Fixes #19591.
Change-Id: I3a5e8a0082e8445cc3f0074695e3593fd9c88412
Reviewed-on: https://go-review.googlesource.com/38351
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently selectgo is just a wrapper around selectgoImpl. This keeps
the hard-coded frame skip counts for tracing the same between the
channel implementation and the select implementation.
However, this is fragile and confusing, so pass a skip parameter to
send and recv, join selectgo and selectgoImpl into one function, and
decrease all of the skips in selectgo by one.
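The send/recv helpers grow an explicit skip count for tracing, roughly:

    // Before:
    //   func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func())
    //   func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func())
    //
    // After: callers say how many frames to skip when capturing trace events.
    //   func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
    //   func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)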
Change-Id: I11b8cbb7d805b55f5dc6ab4875ac7dde79412ff2
Reviewed-on: https://go-review.googlesource.com/37860
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently we have write barriers for direct channel sends, where the
receiver is blocked and the sender is writing directly to the
receiver's stack; but not for direct channel receives, where the
sender is blocked and the receiver is reading directly from the
sender's stack.
This was okay with the old write barrier because either 1) the
receiver would write the received pointer into the heap (causing it to
be shaded), 2) the pointer would still be on the receiver's stack at
mark termination and we would rescan it, or 3) the receiver dropped
the pointer so it wasn't necessarily reachable anyway.
This is not okay with the new write barrier because it lets a grey stack
send a white pointer to a black stack and then remove it from its own
stack. If the grey stack was the sole grey-protector of this pointer,
this hides the object from the garbage collector.
Fix this by making direct receives perform a stack-to-stack write
barrier just like direct sends do.
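A sketch of the direct-receive path with the barrier added (close to the helper that landed, but shown here only as an illustration):

    func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer) {
        // dst is on our stack or the heap, src is on another stack.
        // The channel is locked, so src will not move during this operation.
        src := sg.elem
        typeBitsBulkBarrier(t, uintptr(dst), uintptr(src), t.size)
        memmove(dst, src, t.size)
    }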
Fixes#17694.
Change-Id: I1a4cb904e4138d2ac22f96a3e986635534a5ae41
Reviewed-on: https://go-review.googlesource.com/32450
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, we perform write barriers after performing pointer writes.
At the moment, it simply doesn't matter what order this happens in, as
long as they appear atomic to GC. But both the hybrid barrier and ROC
are going to require a pre-write write barrier.
For the hybrid barrier, this is important because the barrier needs to
observe both the current value of the slot and the value that will be
written to it. (Alternatively, the caller could do the write and pass
in the old value, but it seems easier and more useful to just swap the
order of the barrier and the write.)
For ROC, this is necessary because, if the pointer write is going to
make the pointer reachable to some goroutine that it currently is not
visible to, the garbage collector must take some special action before
that pointer becomes more broadly visible.
This commit swaps pointer writes around so that the write barrier occurs
before the pointer write.
The main subtlety here is bulk memory writes. Currently, these copy to
the destination first and then use the pointer bitmap of the
destination to find the copied pointers and invoke the write barrier.
This is necessary because the source may not have a pointer bitmap. To
handle these, we pass both the source and the destination to the bulk
memory barrier, which uses the pointer bitmap of the destination, but
reads the pointer values from the source.
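In terms of the bulk-copy helpers, the barrier call moves before the copy and takes both addresses, roughly (bulkBarrier is an illustrative placeholder name, not the actual helper):

    // Before: copy first, then walk dst's pointer bitmap.
    //   memmove(dst, src, typ.size)
    //   bulkBarrier(uintptr(dst), typ.size)
    //
    // After: barrier first, taking both addresses; it still uses dst's
    // pointer bitmap but reads the pointer values out of src.
    //   bulkBarrier(uintptr(dst), uintptr(src), typ.size)
    //   memmove(dst, src, typ.size)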
Updates #17503.
Change-Id: I78ecc0c5c94ee81c29019c305b3d232069294a55
Reviewed-on: https://go-review.googlesource.com/31763
Reviewed-by: Rick Hudson <rlh@golang.org>
The hybrid barrier requires distinguishing typed and untyped memory
even when zeroing because the *current* contents of the memory matters
even when overwriting.
This commit introduces runtime.typedmemclr and runtime.memclrHasPointers
as typed memory clearing functions parallel to runtime.typedmemmove.
Currently these simply call memclr, but with the hybrid barrier we'll
need to shade any pointers we're overwriting. These will provide us
with the necessary hooks to do so.
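The new entry points are trivial for now, matching the description above (sketch):

    func typedmemclr(typ *_type, ptr unsafe.Pointer) {
        memclr(ptr, typ.size)
    }

    func memclrHasPointers(ptr unsafe.Pointer, n uintptr) {
        memclr(ptr, n)
    }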
Updates #17503.
Change-Id: I74478619f8907825898092aaa204d6e4690f27e6
Reviewed-on: https://go-review.googlesource.com/31366
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Consistently use type int for the size argument of
runtime.newarray, runtime.reflect_unsafe_NewArray
and reflect.unsafe_NewArray.
Change-Id: Ic77bf2dde216c92ca8c49462f8eedc0385b6314e
Reviewed-on: https://go-review.googlesource.com/22311
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
mallocgc can calculate noscan itself. The only remaining
flag argument is needzero, so we just make that a boolean arg.
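The signature change, roughly:

    // Before: flag bits, including flagNoScan and flagNoZero.
    //   func mallocgc(size uintptr, typ *_type, flags uint32) unsafe.Pointer
    //
    // After: noscan is derived from typ, and the only remaining flag is a bool.
    //   func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer
    //   noscan := typ == nil || typ.kind&kindNoPointers != 0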
Fixes #15379
Change-Id: I839a70790b2a0c9dbcee2600052bfbd6c8148e20
Reviewed-on: https://go-review.googlesource.com/22290
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, locking a G's stack by setting its status to _Gcopystack or
_Gscan is unordered with respect to channel locks. However, when we
make stack shrinking concurrent, stack shrinking will need to lock the
G and then acquire channel locks, which imposes an order on these.
Document this lock ordering and fix closechan to respect it.
Everything else already happens to respect it.
For #12967.
Change-Id: I4dd02675efffb3e7daa5285cf75bf24f987d90d4
Reviewed-on: https://go-review.googlesource.com/20041
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently sudog.elem is never accessed concurrently, so in several
cases we drop the channel lock just before reading/writing the
sent/received value from/to sudog.elem. However, concurrent stack
shrinking is going to have to adjust sudog.elem to point to the new
stack, which means it needs a way to synchronize with accesses to
sudog.elem. Hence, add sudog.elem to the fields protected by
hchan.lock and scoot the unlocks down past the uses of sudog.elem.
While we're here, better document the channel synchronization rules.
For #12967.
Change-Id: I3ad0ca71f0a74b0716c261aef21b2f7f13f74917
Reviewed-on: https://go-review.googlesource.com/20040
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Given a G, there's currently no way to find the channel it's blocking
on. We'll need this information to fix a (probably theoretical) bug in
select and to implement concurrent stack shrinking, so record the
channel in the sudog.
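Concretely, the sudog gains a channel pointer (sketch; surrounding fields abridged):

    type sudog struct {
        g    *g
        elem unsafe.Pointer // data element (may point to stack)
        // ...
        c    *hchan         // channel this sudog is waiting on
    }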
For #12967.
Change-Id: If8fb63a140f1d07175818824d08c0ebeec2bdf66
Reviewed-on: https://go-review.googlesource.com/20035
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedence.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>