Commit Graph

2283 Commits

Author SHA1 Message Date
Austin Clements 5b7497f327 runtime: update heap profile stats after world is started
Updating the heap profile stats is one of the most expensive parts of
mark termination other than stack rescanning, but there's really no
need to do this with the world stopped. Move it to right after we've
started the world back up. This creates a *very* small window where
allocations from the next cycle can slip into the profile, but the
exact point where mark termination happens is so non-deterministic
already that a slight reordering here is unimportant.

Change-Id: I2f76f22c70329923ad6a594a2c26869f0736d34e
Reviewed-on: https://go-review.googlesource.com/31363
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-19 21:36:24 +00:00
Austin Clements 9429aab999 runtime: remove gcWork flushes in mark termination
The only reason these flushes are still necessary at all is that
gcmarknewobject doesn't flush its gcWork stats like it's supposed to.
By changing gcmarknewobject to follow the standard protocol, the
flushes become completely unnecessary because mark 2 ensures caches
are flushed (and stay flushed) before we ever enter mark termination.

In the garbage benchmark, this takes roughly 50 µs, which is
surprisingly long for doing nothing. We still double-check after
draining that they are in fact empty.

Change-Id: Ia1c7cf98a53f72baa513792eb33eca6a0b4a7128
Reviewed-on: https://go-review.googlesource.com/31134
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-19 21:36:09 +00:00
Ian Lance Taylor a16954b8a7 cmd/cgo: always use a function literal for pointer checking
The pointer checking code needs to know the exact type of the parameter
expected by the C function, so that it can use a type assertion to
convert the empty interface returned by cgoCheckPointer to the correct
type. Previously this was done by using a type conversion, but that
meant that the code accepted arguments that were convertible to the
parameter type, rather than arguments that were assignable as in a
normal function call. In other words, some code that should not have
passed type checking was accepted.

This CL changes cgo to always use a function literal for pointer
checking. Now the argument is passed to the function literal, which has
the correct argument type, so type checking is performed just as for a
function call as it should be.

Since we now always use a function literal, simplify the checking code
to run as a statement by itself. It now no longer needs to return a
value, and we no longer need a type assertion.

This does have the cost of introducing another function call into any
call to a C function that requires pointer checking, but the cost of the
additional call should be minimal compared to the cost of pointer
checking.

Fixes #16591.
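
To illustrate the assignable-versus-convertible distinction above, here is a
small, self-contained Go sketch (the types and the check function are
hypothetical stand-ins, not cgo-generated code):

    package main

    type MyInt int

    // check stands in for the generated pointer-checking function literal;
    // its parameter type is exactly the type the C function expects.
    func check(p *int) {}

    func main() {
        var mi MyInt
        _ = (*int)(&mi) // a conversion accepts this: *MyInt is convertible to *int
        // check(&mi)   // but a call rejects it: *MyInt is not assignable to *int
        var i int
        check(&i) // only arguments assignable to *int compile, as in a normal call
    }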

Change-Id: I220165564cf69db9fd5f746532d7f977a5b2c989
Reviewed-on: https://go-review.googlesource.com/31233
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2016-10-19 21:20:50 +00:00
Russ Cox 40d81cf061 sync: throw, not panic, for unlock of unlocked mutex
The panic leaves the lock in an unusable state.
Trying to panic with a usable state makes the lock significantly
less efficient and scalable (see early CL patch sets and discussion).

Instead, use runtime.throw, which will crash the program directly.

In general throw is reserved for when the runtime detects truly
serious, unrecoverable problems. This problem is certainly serious,
and, without a significant performance hit, is unrecoverable.

Fixes #13879.
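
For reference, this is the user-level misuse the change addresses; after this
CL the program below dies with a fatal runtime error ("sync: unlock of
unlocked mutex") instead of a recoverable panic:

    package main

    import "sync"

    func main() {
        var mu sync.Mutex
        mu.Unlock() // unlocking an unlocked mutex: fatal error, not a panic
    }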

Change-Id: I41920d9e2317270c6f909957d195bd8b68177f8d
Reviewed-on: https://go-review.googlesource.com/31359
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-19 17:46:27 +00:00
Matthew Dempsky 7dd9c385f6 Revert "cmd/compile: inline convI2E"
This reverts commit 395d36a67d.

Appears to be responsible for builder failures.

Change-Id: Ic6c6307f662767e529060b88704a9f074785d99e
Reviewed-on: https://go-review.googlesource.com/31310
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-10-17 20:37:19 +00:00
Lynn Boger 2190f771d8 bytes: fix typo in ppc64le asm for Compare
Correct a line in asm_ppc64x.s in the cmpbodyLE function
that originally used R14 but was accidentally changed to R4.

Fixes #17488

Change-Id: Id4ca6fb2e0cd81251557a0627e17b5e734c39e01
Reviewed-on: https://go-review.googlesource.com/31266
Reviewed-by: Michael Munday <munday@ca.ibm.com>
Run-TryBot: Michael Munday <munday@ca.ibm.com>
2016-10-17 20:01:17 +00:00
Austin Clements 984753b665 runtime: fix GC assist retry path
GC assists retry if preempted or if they fail to park. However, on the
retry path they currently use stale statistics. In particular, the
retry can use "debtBytes", but debtBytes isn't updated when the debt
changes (since other than retries it is only used once). Also, though
less of a problem, the if the assist ratio has changed while the
assist was blocked, the retry will still use the old assist ratio.

Fix all of this by simply making the retry jump back to where we
compute these statistics, rather than just after.
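
A minimal sketch of the retry shape described above (hypothetical helper
names, not the actual runtime code); the point of the fix is that every retry
recomputes its statistics instead of reusing values captured before the first
attempt:

    package main

    // computeAssistDebt and tryAssist are hypothetical stand-ins for the real
    // assist bookkeeping; they exist only to make the loop shape concrete.
    func computeAssistDebt() int64  { return 0 }
    func tryAssist(debt int64) bool { return true }

    func assistLoop() {
        for {
            debt := computeAssistDebt() // recomputed on every pass, never cached
            if tryAssist(debt) {
                return // assist satisfied
            }
            // preempted or failed to park: loop back and recompute the debt
        }
    }

    func main() { assistLoop() }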

Change-Id: I2ed8b4f0fc9f008ff060aa926f4334b662ac7d3f
Reviewed-on: https://go-review.googlesource.com/30701
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-17 19:09:37 +00:00
Austin Clements 81c431a537 runtime: abstract out assist queue management
This puts all of the assist queue-related code together and makes it
easier to modify how the assist queue works.

Change-Id: Id54e06702bdd5a5dd3fef2ce2c14cd7ca215303c
Reviewed-on: https://go-review.googlesource.com/30700
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-17 19:09:30 +00:00
Keith Randall 395d36a67d cmd/compile: inline convI2E
It's pretty simple.  For:
  e = (interface{})(i)
Do:
  tmp = i.itab
  if tmp != nil {
    tmp = tmp.typ_ // load type from itab
  }
  e = eface{tmp, i.data}

It is smaller and faster than calling the runtime.

Change-Id: I0ad27f62f4ec0b6cd53bc8530e4da0eae3e67a6c
Reviewed-on: https://go-review.googlesource.com/31260
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2016-10-17 19:09:07 +00:00
Austin Clements d5bd797ee5 runtime: fix getArgInfo for deferred reflection calls
getArgInfo for reflect.makeFuncStub and reflect.methodValueCall is
necessarily special. These have dynamically determined argument maps
that are stored in their context (that is, their *funcval). These
functions are written to store this context at 0(SP) when called, and
getArgInfo retrieves it from there.

This technique works if getArgInfo is passed an active call frame for
one of these functions. However, getArgInfo is also used in
tracebackdefers, where the "call" is not a true call with an active
stack frame, but a deferred call. In this situation, getArgInfo
currently crashes because tracebackdefers passes a frame with sp set
to 0. However, the entire approach used by getArgInfo is flawed in
this situation because the wrapper has not actually executed, and
hence hasn't saved this metadata to any stack frame.

In the defer case, we know the *funcval from the _defer itself, so we
can fix this by teaching getArgInfo to use the *funcval context
directly when it's available, and otherwise get it from the active call
frame.

While we're here, this commit simplifies getArgInfo a bit by making it
play more nicely with the type system. Rather than decoding the
*reflect.methodValue that is the wrapper's context as a *[2]uintptr,
just write out a copy of the reflect.methodValue type in the runtime.

Fixes #16331. Fixes #17471.
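
A hedged sketch of the "write out a copy of the type" approach (field layout
here is illustrative; the real definitions live in the reflect and runtime
packages and may differ):

    package sketch

    // Instead of decoding the wrapper's context as a *[2]uintptr, mirror the
    // leading fields of reflect's methodValue so access is type-checked.
    type bitvector struct {
        n        int32
        bytedata *uint8
    }

    type reflectMethodValue struct {
        fn    uintptr    // entry PC of the wrapper
        stack *bitvector // the dynamically determined argument map
    }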

Change-Id: I81db4d985179b4a81c68c490cceeccbfc675456a
Reviewed-on: https://go-review.googlesource.com/31138
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-10-17 18:57:01 +00:00
Austin Clements 687d9d5d78 runtime: print a message on bad morestack
If morestack runs on the g0 or gsignal stack, it currently performs
some abort operation that typically produces a signal (e.g., it does
an INT $3 on x86). This is useful if you're running in a debugger, but
if you're not, the runtime tries to trap this signal, which is likely
to send the program into a deeper spiral of collapse and lead to very
confusing diagnostic output.

Help out people trying to debug without a debugger by making morestack
print an informative message before blowing up.

Change-Id: I2814c64509b137bfe20a00091d8551d18c2c4749
Reviewed-on: https://go-review.googlesource.com/31133
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-10-17 18:56:09 +00:00
Lynn Boger 1e28dce80a bytes: improve performance for bytes.Compare on ppc64x
This improves the performance of bytes.Compare by rewriting
the cmpbody function in runtime/asm_ppc64x.s.  The previous code
had a simple loop which loaded a pair of bytes and compared them,
which is inefficient for long buffers.  The updated function checks
for 8 or 32 byte chunks and then loads and compares double words where
possible.

Because the bytes.Compare result indicates greater or less than,
the doubleword loads must take endianness into account, using a
byte-reversed load in the little-endian case.

Fixes #17433
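
The endianness point can be seen in pure Go: when comparing eight bytes at a
time, the word must be interpreted big-endian so that an integer comparison
matches lexicographic byte order (a sketch, not the assembly in this CL):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // cmp8 compares the first eight bytes of a and b the way bytes.Compare
    // does, one doubleword at a time.
    func cmp8(a, b []byte) int {
        x := binary.BigEndian.Uint64(a) // big-endian view == lexicographic order
        y := binary.BigEndian.Uint64(b) // a little-endian load would need a byte reversal
        switch {
        case x < y:
            return -1
        case x > y:
            return 1
        }
        return 0
    }

    func main() {
        fmt.Println(cmp8([]byte("aaaaaaab"), []byte("aaaaaaac"))) // -1
    }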

benchmark                                   old ns/op     new ns/op     delta
BenchmarkBytesCompare/8-16                  13.6          7.16          -47.35%
BenchmarkBytesCompare/16-16                 25.7          7.83          -69.53%
BenchmarkBytesCompare/32-16                 38.1          7.78          -79.58%
BenchmarkBytesCompare/64-16                 63.0          10.6          -83.17%
BenchmarkBytesCompare/128-16                112           13.0          -88.39%
BenchmarkBytesCompare/256-16                211           28.1          -86.68%
BenchmarkBytesCompare/512-16                410           38.6          -90.59%
BenchmarkBytesCompare/1024-16               807           60.2          -92.54%
BenchmarkBytesCompare/2048-16               1601          103           -93.57%

Change-Id: I121acc74fcd27c430797647b8d682eb0607c63eb
Reviewed-on: https://go-review.googlesource.com/30949
Reviewed-by: David Chase <drchase@google.com>
2016-10-17 18:46:20 +00:00
Martin Möhrmann d295174030 runtime: speed up non-ASCII rune decoding
Copies utf8 constants and EncodeRune implementation from unicode/utf8.

Adds a new decoderune implementation that is used by the compiler
in code generated for ranging over strings. It does not handle
ASCII runes since these are handled directly before calls to decoderune.

The DecodeRuneInString implementation from unicode/utf8 is not used
since it uses a lookup table that would increase CPU cache pressure.

Adds more tests that check decoding of valid and invalid utf8 sequences.
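
Conceptually, the lowering for ranging over a string looks like the sketch
below (illustrative Go, not the generated code; the ASCII fast path is
handled inline and only multi-byte sequences would reach decoderune):

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    func main() {
        s := "héllo"
        for i := 0; i < len(s); {
            var r rune
            var size int
            if c := s[i]; c < utf8.RuneSelf {
                r, size = rune(c), 1 // ASCII handled inline, no runtime call
            } else {
                // stands in for the runtime's decoderune
                r, size = utf8.DecodeRuneInString(s[i:])
            }
            fmt.Printf("%d: %q\n", i, r)
            i += size
        }
    }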

name                              old time/op  new time/op  delta
RuneIterate/range2/ASCII-4        7.45ns ± 2%  7.45ns ± 1%     ~     (p=0.634 n=16+16)
RuneIterate/range2/Japanese-4     53.5ns ± 1%  49.2ns ± 2%   -8.03%  (p=0.000 n=20+20)
RuneIterate/range2/MixedLength-4  46.3ns ± 1%  41.0ns ± 2%  -11.57%  (p=0.000 n=20+20)

new:
"".decoderune t=1 size=423 args=0x28 locals=0x0
old:
"".charntorune t=1 size=666 args=0x28 locals=0x0

Change-Id: I1df1fdb385bb9ea5e5e71b8818ea2bf5ce62de52
Reviewed-on: https://go-review.googlesource.com/28490
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-17 11:25:22 +00:00
Austin Clements 9897e40811 runtime: use more go:nowritebarrierrec in proc.go
Currently we use go:nowritebarrier in many places in proc.go.
go:notinheap and go:yeswritebarrierrec now let us use
go:nowritebarrierrec (the recursive form of the go:nowritebarrier
pragma) more liberally. Do so in proc.go.

Change-Id: Ia7fcbc12ce6c51cb24730bf835fb7634ad53462f
Reviewed-on: https://go-review.googlesource.com/30942
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-15 17:58:23 +00:00
Austin Clements 1bc6be6423 runtime: mark several types go:notinheap
This covers basically all sysAlloc'd, persistentalloc'd, and
fixalloc'd types.

Change-Id: I0487c887c2a0ade5e33d4c4c12d837e97468e66b
Reviewed-on: https://go-review.googlesource.com/30941
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-15 17:58:20 +00:00
Austin Clements 991a85c889 runtime: make mSpanList more go:notinheap-friendly
Currently mspan links to its previous mspan using a **mspan field that
points to the previous span's next field. This simplifies some of the
list manipulation code, but is going to make it very hard to convince
the compiler that mspan list manipulations don't need write barriers.

Fix this by using a more traditional ("boring") linked list that uses
a simple *mspan pointer to the previous mspan. This complicates some
of the list manipulation slightly, but it will let us eliminate all
write barriers from the mspan list manipulation code by marking mspan
go:notinheap.
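
A small sketch of the two list representations being swapped (illustrative
node type, not the actual mspan):

    package sketch

    // Old style: prev points at the previous node's next field, so splicing
    // never special-cases the head, but every prev is a **node.
    type nodeOld struct {
        next *nodeOld
        prev **nodeOld
    }

    // New ("boring") style: a plain back pointer, which is what later lets
    // the element type be marked go:notinheap with no write barriers.
    type nodeNew struct {
        next *nodeNew
        prev *nodeNew
    }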

Change-Id: I0d0b212db5f20002435d2a0ed2efc8aa0364b905
Reviewed-on: https://go-review.googlesource.com/30940
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-15 17:58:17 +00:00
Austin Clements 77527a316b cmd/compile: add go:notinheap type pragma
This adds a //go:notinheap pragma for declarations of types that must
not be heap allocated. We ensure these rules by disallowing new(T),
make([]T), append([]T), or implicit allocation of T, by disallowing
conversions to notinheap types, and by propagating notinheap to any
struct or array that contains notinheap elements.

The utility of this pragma is that we can eliminate write barriers for
writes to pointers to go:notinheap types, since the write barrier is
guaranteed to be a no-op. This will let us mark several scheduler and
memory allocator structures as go:notinheap, which will let us
disallow write barriers in the scheduler and memory allocator much
more thoroughly and also eliminate some problematic hybrid write
barriers.

This also makes go:nowritebarrierrec and go:yeswritebarrierrec much
more powerful. Currently we use go:nowritebarrier all over the place,
but it's almost never what you actually want: when write barriers are
illegal, they're typically illegal for a whole dynamic scope. Partly
this is because go:nowritebarrier has been around longer, but it's
also because go:nowritebarrierrec couldn't be used in situations that
had no-op write barriers or where some nested scope did allow write
barriers. go:notinheap eliminates many no-op write barriers and
go:yeswritebarrierrec makes it possible to opt back in to write
barriers, so these two changes will let us use go:nowritebarrierrec
far more liberally.

This updates #13386, which is about controlling pointers from non-GC'd
memory to GC'd memory. That would require some additional pragma (or
pragmas), but could build on this pragma.
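
For illustration, the pragma is written immediately above the type
declaration it constrains; a minimal sketch of what the text above describes
(the type is made up, and the pragma is intended for runtime-internal types,
not general code):

    package sketch

    //go:notinheap
    type offHeapNode struct {
        next *offHeapNode
    }

    // Per the rules described above, the compiler rejects heap allocation of
    // such a type, e.g.:
    //
    //     p := new(offHeapNode)        // disallowed
    //     s := make([]offHeapNode, 10) // disallowed
    //
    // and writes through an *offHeapNode need no write barrier.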

Change-Id: I6314f8f4181535dd166887c9ec239977b54940bd
Reviewed-on: https://go-review.googlesource.com/30939
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2016-10-15 17:58:14 +00:00
Austin Clements a9e6cebde2 cmd/compile, runtime: add go:yeswritebarrierrec pragma
This pragma cancels the effect of go:nowritebarrierrec. This is useful
in the scheduler because there are places where we enter a function
without a valid P (and hence cannot have write barriers), but then
obtain a P. This allows us to annotate the function with
go:nowritebarrierrec and split out the part after we've obtained a P
into a go:yeswritebarrierrec function.
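
A schematic example of the split described above (function names are made up,
and these pragmas are only accepted when compiling the runtime itself, so
treat this purely as notation):

    package sketch

    //go:nowritebarrierrec
    func enterWithoutP() {
        // ... runs without a P; no write barriers may be emitted here ...
        acquireP()
        withP()
    }

    // withP cancels the recursive ban for everything it calls, because by
    // this point a P has been acquired and write barriers are safe again.
    //
    //go:yeswritebarrierrec
    func withP() {
        // ... write barriers allowed ...
    }

    func acquireP() {}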

Change-Id: Ic8ce4b6d3c074a1ecd8280ad90eaf39f0ffbcc2a
Reviewed-on: https://go-review.googlesource.com/30938
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2016-10-15 17:58:11 +00:00
Keith Randall 442de98c14 cmd/compile,runtime: redo how map assignments work
To compile:
  m[k] = v
instead of:
  mapassign(maptype, m, &k, &v)
do:
  *mapassign(maptype, m, &k) = v

mapassign returns a pointer to the value slot in the map.  It is just
like mapaccess except that it will allocate a new slot if k is not
already present in the map.

This makes map accesses faster but potentially larger (codewise).

It is faster because the write into the map is done when the compiler
knows the concrete type, so it can be done with a few store
instructions instead of calling typedmemmove.  We also potentially
avoid stack temporaries to hold v.

The code can be larger when the map has pointers in its value type,
since there is a write barrier call in addition to the mapassign call.
That makes the code at the callsite a bit bigger (go binary is 0.3%
bigger).

This CL is in preparation for doing operations like m[k] += v with
only a single runtime call.  That will roughly double the speed of
such operations.

Update #17133
Update #5147

Change-Id: Ia435f032090a2ed905dac9234e693972fe8c2dc5
Reviewed-on: https://go-review.googlesource.com/30815
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-12 20:41:23 +00:00
Emmanuel Odeke 898ca6ba0a runtime: update mkduff legacy comments
Update comments for duffzero and duffcopy
which referred to legacy locations:
+ cmd/?g/cgen.go
+ cmd/?g/ggen.go

Remnants of the old days when we had 5g, 6g etc.

Those locations have since moved to:
+ cmd/compile/internal/<arch>/cgen.go
+ cmd/compile/internal/<arch>/ggen.go

Change-Id: Ie2ea668559d52d42b747260ea69a6d5b3d70e859
Reviewed-on: https://go-review.googlesource.com/29073
Reviewed-by: Russ Cox <rsc@golang.org>
2016-10-12 14:51:50 +00:00
Joshua Boelter 1a3b739b26 runtime: check for errors returned by windows sema calls
Add checks for failure of CreateEvent, SetEvent or
WaitForSingleObject. Any failures are considered fatal and
will throw() after printing an informative message.

Updates #16646

Change-Id: I3bacf9001d2abfa8667cc3aff163ff2de1c99915
Reviewed-on: https://go-review.googlesource.com/26655
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-12 13:39:43 +00:00
Gyu-Ho Lee 456b7f5a97 runtime/pprof: preallocate slice in pprof.go
To prevent slice growth when appending.
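
The pattern is the usual one of sizing a slice up front so later appends
never reallocate (a generic sketch, not the pprof code itself):

    package main

    import "fmt"

    func main() {
        in := []int{3, 1, 4, 1, 5}
        out := make([]int, 0, len(in)) // capacity known in advance: no growth
        for _, v := range in {
            out = append(out, v)
        }
        fmt.Println(out, cap(out))
    }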

Change-Id: I2cdb9b09bc33f63188b19573c8b9a77601e63801
Reviewed-on: https://go-review.googlesource.com/23783
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2016-10-12 06:33:34 +00:00
Gleb Stepanov 460d112f6c runtime: fix typo in comments
Fix typo in word synchronization in comments.

Change-Id: I453b4e799301e758799c93df1e32f5244ca2fb84
Reviewed-on: https://go-review.googlesource.com/25174
Reviewed-by: Russ Cox <rsc@golang.org>
2016-10-12 02:48:31 +00:00
Michael Munday 8728df645c runtime: remove canBackTrace variable from TestGdbPython
The canBackTrace variable is true for all of the architectures
Go supports and this is likely to remain the case as new
architectures are added.

Change-Id: I73900c018eb4b2e5c02fccd8d3e89853b2ba9d90
Reviewed-on: https://go-review.googlesource.com/22423
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-12 02:16:11 +00:00
Matthew Dempsky 303b69feb7 cmd/compile, runtime: stop padding stackmaps to 4 bytes
Shrinks cmd/go's text segment by 0.9%.

   text	   data	    bss	    dec	    hex	filename
6447148	 231643	 146328	6825119	 68249f	go.before
6387404	 231643	 146328	6765375	 673b3f	go.after

Change-Id: I431e8482dbb11a7c1c77f2196cada43d5dad2981
Reviewed-on: https://go-review.googlesource.com/30817
Reviewed-by: Austin Clements <austin@google.com>
2016-10-11 20:23:30 +00:00
Cherry Zhang 7c431cb7f9 cmd/link: insert trampolines for too-far jumps on ARM
The ARM direct CALL/JMP instruction has a 24-bit offset, which can only
encode jumps within +/-32M. When the target is too far away, the top
bits get truncated and the program jumps wild.

This CL detects too-far jumps and automatically inserts trampolines,
currently only for internal linking on ARM.

It is necessary to make the following changes to the linker:
- Resolve direct jump relocs when assigning addresses to functions.
  This allows trampoline insertion without moving code that has
  already been laid down.
- Lay down packages in dependency order, so that when resolving an
  inter-package direct jump reloc, the target address is already
  known. Intra-package jumps are assumed never to be too far.
- a linker flag -debugtramp is added for debugging trampolines:
    "-debugtramp=1 -v" prints trampoline debug message
    "-debugtramp=2"    forces all inter-package jump to use
                       trampolines (currently ARM only)
    "-debugtramp=2 -v" does both
- Some data structures are changed for bookkeeping.

On ARM, pseudo DIV/DIVU/MOD/MODU instructions now clobber R8
(unfortunate). In the standard library there is no ARM assembly
code that uses these instructions, and the compiler no longer emits
them (CL 29390).

all.bash passes with -debugtramp=2, except a disassembly test (this
is unavoidable as we changed the instruction).

TBD: debug info of trampolines?

Fixes #17028.

Change-Id: Idcce347ea7e0af77c4079041a160b2f6e114b474
Reviewed-on: https://go-review.googlesource.com/29397
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-11 13:35:33 +00:00
Ian Lance Taylor d03e8b226c runtime: record current PC for SIGPROF on non-Go thread
If we get a SIGPROF on a non-Go thread, and the program has not called
runtime.SetCgoTraceback so we have no way to collect a stack trace, then
record a profile that is just the PC where the signal occurred. That
will at least point the user to the right area.

Retrieving the PC from the sigctxt in a signal handler on a non-G thread
required marking a number of trivial sigctxt methods as nosplit, and,
for extra safety, nowritebarrierrec.

The test shows that the existing CgoPprofThread test does not test
the stack trace, just the profile signal. Leaving that for later.

Change-Id: I8f8f3ff09ac099fc9d9df94b5a9d210ffc20c4ab
Reviewed-on: https://go-review.googlesource.com/30252
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-10-11 12:56:15 +00:00
Alex Brainman dd307da10c runtime/cgo: do not explicitly link msvcrt.dll
CL 14472 solved issue #12030 by explicitly linking msvcrt.dll
to every cgo executable we build. This CL achieves the same
by manually loading ntdll.dll during startup.

Updates #12030

Change-Id: I5d9cd925ef65cc34c5d4031c750f0f97794529b2
Reviewed-on: https://go-review.googlesource.com/30737
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2016-10-11 05:45:06 +00:00
Austin Clements 94589054d3 cmd/trace: label mark termination spans as such
Currently these are labeled "MARK", which was accurate in the STW
collector, but these really indicate mark termination now, since
marking happens for the full duration of the concurrent GC. Re-label
them as "MARK TERMINATION" to clarify this.

Change-Id: Ie98bd961195acde49598b4fa3f9e7d90d757c0a6
Reviewed-on: https://go-review.googlesource.com/30018
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-10-07 18:33:23 +00:00
Austin Clements fa9b57bb1d runtime: make next_gc ^0 when GC is disabled
When GC is disabled, we set gcpercent to -1. However, we still use
gcpercent to compute several values, such as next_gc and gc_trigger.
These calculations are meaningless when gcpercent is -1 and result in
meaningless values. This is okay in a sense because we also never use
these values if gcpercent is -1, but they're confusing when exposed to
the user, for example via MemStats or the execution trace. It's
particularly unfortunate in the execution trace because it attempts to
plot the underflowed value of next_gc, which scales all useful
information in the heap row into oblivion.

Fix this by making next_gc ^0 when gcpercent < 0. This has the
advantage of being true in a way: next_gc is effectively infinite when
gcpercent < 0. We can also detect this special value when updating the
execution trace and report next_gc as 0 so it doesn't blow up the
display of the heap line.
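
For clarity, ^0 on an unsigned field is the all-ones value, i.e. the largest
representable number, which is why it reads as "effectively infinite" (a tiny
illustration, not runtime code):

    package main

    import "fmt"

    func main() {
        const disabled = ^uint64(0) // 18446744073709551615, the largest uint64
        nextGC := disabled          // sentinel: GC effectively never triggers
        fmt.Println(nextGC == 1<<64-1, nextGC)
    }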

Change-Id: I4f366e4451f8892a4908da7b2b6086bdc67ca9a9
Reviewed-on: https://go-review.googlesource.com/30016
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-07 18:32:51 +00:00
Ian Lance Taylor 15937ccb89 runtime: fix sigset type for ppc64 big-endian GNU/Linux
On 64-bit big-endian GNU/Linux machines we need to treat sigset as a
single uint64, not as a pair of uint32 values. This fix was already made
for s390x, but not for ppc64 (which is big-endian--the little endian
version is known as ppc64le). So copy os_linux_s390x.go to
os_linux_be64.go, and use build constraints as needed.

Fixes #17361
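
A hedged sketch of the build-constraint approach described (the tag set and
field are illustrative; see the actual os_linux_be64.go for the real
contents):

    // os_linux_be64.go (sketch): shared by 64-bit big-endian Linux ports.
    //
    // +build linux
    // +build ppc64 s390x

    package sketch

    // One 64-bit word rather than a [2]uint32 pair, so signal-mask bit
    // arithmetic does not depend on word order.
    type sigset uint64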

Change-Id: Ia0eb18221a8f5056bf17675fcfeb010407a13fb0
Reviewed-on: https://go-review.googlesource.com/30602
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-06 22:24:40 +00:00
Brad Fitzpatrick 4103fedf19 runtime: skip gdb tests on linux/ppc64 for now
Updates #17366

Change-Id: Ia4bd3c74c48b85f186586184a7c2b66d3b80fc9c
Reviewed-on: https://go-review.googlesource.com/30596
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-10-06 19:52:09 +00:00
Denis Nagorny d7507e9d11 runtime: improve memmove for amd64
Use AVX if available on 4th generation of Intel(TM) Core(TM) processors.

    (collected on E5 2609v3 @1.9GHz)
    name                        old speed      new speed       delta
    Memmove/1-6                  158MB/s ± 0%    172MB/s ± 0%    +9.09% (p=0.000 n=16+16)
    Memmove/2-6                  316MB/s ± 0%    345MB/s ± 0%    +9.09% (p=0.000 n=18+16)
    Memmove/3-6                  517MB/s ± 0%    517MB/s ± 0%      ~ (p=0.445 n=16+16)
    Memmove/4-6                  687MB/s ± 1%    690MB/s ± 0%    +0.35% (p=0.000 n=20+17)
    Memmove/5-6                  729MB/s ± 0%    729MB/s ± 0%    +0.01% (p=0.000 n=16+18)
    Memmove/6-6                  875MB/s ± 0%    875MB/s ± 0%    +0.01% (p=0.000 n=18+18)
    Memmove/7-6                 1.02GB/s ± 0%   1.02GB/s ± 1%      ~ (p=0.139 n=19+20)
    Memmove/8-6                 1.26GB/s ± 0%   1.26GB/s ± 0%    +0.00% (p=0.000 n=18+18)
    Memmove/9-6                 1.42GB/s ± 0%   1.42GB/s ± 0%    +0.00% (p=0.000 n=17+18)
    Memmove/10-6                1.58GB/s ± 0%   1.58GB/s ± 0%    +0.00% (p=0.000 n=19+19)
    Memmove/11-6                1.74GB/s ± 0%   1.74GB/s ± 0%    +0.00% (p=0.001 n=18+17)
    Memmove/12-6                1.90GB/s ± 0%   1.90GB/s ± 0%    +0.00% (p=0.000 n=19+19)
    Memmove/13-6                2.05GB/s ± 0%   2.05GB/s ± 0%    +0.00% (p=0.000 n=18+19)
    Memmove/14-6                2.21GB/s ± 0%   2.21GB/s ± 0%    +0.00% (p=0.000 n=16+20)
    Memmove/15-6                2.37GB/s ± 0%   2.37GB/s ± 0%    +0.00% (p=0.004 n=19+20)
    Memmove/16-6                2.53GB/s ± 0%   2.53GB/s ± 0%    +0.00% (p=0.000 n=16+16)
    Memmove/32-6                4.67GB/s ± 0%   4.67GB/s ± 0%    +0.00% (p=0.000 n=17+17)
    Memmove/64-6                8.67GB/s ± 0%   8.64GB/s ± 0%    -0.33% (p=0.000 n=18+17)
    Memmove/128-6               12.6GB/s ± 0%   11.6GB/s ± 0%    -8.05% (p=0.000 n=16+19)
    Memmove/256-6               16.3GB/s ± 0%   16.6GB/s ± 0%    +1.66% (p=0.000 n=20+18)
    Memmove/512-6               21.5GB/s ± 0%   24.4GB/s ± 0%   +13.35% (p=0.000 n=18+17)
    Memmove/1024-6              24.7GB/s ± 0%   33.7GB/s ± 0%   +36.12% (p=0.000 n=18+18)
    Memmove/2048-6              27.3GB/s ± 0%   43.3GB/s ± 0%   +58.77% (p=0.000 n=19+17)
    Memmove/4096-6              37.5GB/s ± 0%   50.5GB/s ± 0%   +34.56% (p=0.000 n=19+19)
    MemmoveUnalignedDst/1-6      135MB/s ± 0%    146MB/s ± 0%    +7.69% (p=0.000 n=16+14)
    MemmoveUnalignedDst/2-6      271MB/s ± 0%    292MB/s ± 0%    +7.69% (p=0.000 n=18+18)
    MemmoveUnalignedDst/3-6      438MB/s ± 0%    438MB/s ± 0%      ~ (p=0.352 n=16+19)
    MemmoveUnalignedDst/4-6      584MB/s ± 0%    584MB/s ± 0%      ~ (p=0.876 n=17+17)
    MemmoveUnalignedDst/5-6      631MB/s ± 1%    632MB/s ± 0%    +0.25% (p=0.000 n=20+17)
    MemmoveUnalignedDst/6-6      759MB/s ± 0%    759MB/s ± 0%    +0.00% (p=0.000 n=19+16)
    MemmoveUnalignedDst/7-6      885MB/s ± 0%    883MB/s ± 1%      ~ (p=0.647 n=18+20)
    MemmoveUnalignedDst/8-6     1.08GB/s ± 0%   1.08GB/s ± 0%    +0.00% (p=0.035 n=19+18)
    MemmoveUnalignedDst/9-6     1.22GB/s ± 0%   1.22GB/s ± 0%      ~ (p=0.251 n=18+17)
    MemmoveUnalignedDst/10-6    1.35GB/s ± 0%   1.35GB/s ± 0%      ~ (p=0.327 n=17+18)
    MemmoveUnalignedDst/11-6    1.49GB/s ± 0%   1.49GB/s ± 0%      ~ (p=0.531 n=18+19)
    MemmoveUnalignedDst/12-6    1.63GB/s ± 0%   1.63GB/s ± 0%      ~ (p=0.886 n=19+18)
    MemmoveUnalignedDst/13-6    1.76GB/s ± 0%   1.76GB/s ± 1%    -0.24% (p=0.006 n=18+20)
    MemmoveUnalignedDst/14-6    1.90GB/s ± 0%   1.90GB/s ± 0%      ~ (p=0.818 n=20+19)
    MemmoveUnalignedDst/15-6    2.03GB/s ± 0%   2.03GB/s ± 0%      ~ (p=0.294 n=17+16)
    MemmoveUnalignedDst/16-6    2.17GB/s ± 0%   2.17GB/s ± 0%      ~ (p=0.602 n=16+18)
    MemmoveUnalignedDst/32-6    4.05GB/s ± 0%   4.05GB/s ± 0%    +0.00% (p=0.010 n=18+17)
    MemmoveUnalignedDst/64-6    7.59GB/s ± 0%   7.59GB/s ± 0%    +0.00% (p=0.022 n=18+16)
    MemmoveUnalignedDst/128-6   11.1GB/s ± 0%   11.4GB/s ± 0%    +2.79% (p=0.000 n=18+17)
    MemmoveUnalignedDst/256-6   16.4GB/s ± 0%   16.7GB/s ± 0%    +1.59% (p=0.000 n=20+17)
    MemmoveUnalignedDst/512-6   15.7GB/s ± 0%   21.3GB/s ± 0%   +35.87% (p=0.000 n=18+20)
    MemmoveUnalignedDst/1024-6  16.0GB/s ±20%   31.5GB/s ± 0%   +96.93% (p=0.000 n=20+14)
    MemmoveUnalignedDst/2048-6  19.6GB/s ± 0%   42.1GB/s ± 0%  +115.16% (p=0.000 n=17+18)
    MemmoveUnalignedDst/4096-6  6.41GB/s ± 0%  33.18GB/s ± 0%  +417.56% (p=0.000 n=17+18)
    MemmoveUnalignedSrc/1-6      171MB/s ± 0%    166MB/s ± 0%    -3.33% (p=0.000 n=19+16)
    MemmoveUnalignedSrc/2-6      343MB/s ± 0%    342MB/s ± 1%    -0.41% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/3-6      508MB/s ± 0%    493MB/s ± 1%    -2.90% (p=0.000 n=17+17)
    MemmoveUnalignedSrc/4-6      677MB/s ± 0%    660MB/s ± 2%    -2.55% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/5-6      790MB/s ± 0%    790MB/s ± 0%      ~ (p=0.139 n=17+17)
    MemmoveUnalignedSrc/6-6      948MB/s ± 0%    946MB/s ± 1%      ~ (p=0.330 n=17+19)
    MemmoveUnalignedSrc/7-6     1.11GB/s ± 0%   1.11GB/s ± 0%    -0.05% (p=0.026 n=17+17)
    MemmoveUnalignedSrc/8-6     1.38GB/s ± 0%   1.38GB/s ± 0%      ~ (p=0.091 n=18+16)
    MemmoveUnalignedSrc/9-6     1.42GB/s ± 0%   1.40GB/s ± 1%    -1.04% (p=0.000 n=19+20)
    MemmoveUnalignedSrc/10-6    1.58GB/s ± 0%   1.56GB/s ± 1%    -1.15% (p=0.000 n=18+19)
    MemmoveUnalignedSrc/11-6    1.73GB/s ± 0%   1.71GB/s ± 1%    -1.30% (p=0.000 n=20+20)
    MemmoveUnalignedSrc/12-6    1.89GB/s ± 0%   1.87GB/s ± 1%    -1.18% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/13-6    2.05GB/s ± 0%   2.02GB/s ± 1%    -1.18% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/14-6    2.21GB/s ± 0%   2.18GB/s ± 1%    -1.14% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/15-6    2.36GB/s ± 0%   2.34GB/s ± 1%    -1.04% (p=0.000 n=17+20)
    MemmoveUnalignedSrc/16-6    2.52GB/s ± 0%   2.49GB/s ± 1%    -1.26% (p=0.000 n=19+20)
    MemmoveUnalignedSrc/32-6    4.82GB/s ± 0%   4.61GB/s ± 0%    -4.40% (p=0.000 n=19+20)
    MemmoveUnalignedSrc/64-6    5.03GB/s ± 4%   7.97GB/s ± 0%   +58.55% (p=0.000 n=20+16)
    MemmoveUnalignedSrc/128-6   11.1GB/s ± 0%   11.2GB/s ± 0%    +0.52% (p=0.000 n=17+18)
    MemmoveUnalignedSrc/256-6   16.5GB/s ± 0%   16.4GB/s ± 0%    -0.10% (p=0.000 n=20+18)
    MemmoveUnalignedSrc/512-6   21.0GB/s ± 0%   22.1GB/s ± 0%    +5.48% (p=0.000 n=14+17)
    MemmoveUnalignedSrc/1024-6  24.9GB/s ± 0%   31.9GB/s ± 0%   +28.20% (p=0.000 n=19+20)
    MemmoveUnalignedSrc/2048-6  23.3GB/s ± 0%   33.8GB/s ± 0%   +45.22% (p=0.000 n=17+19)
    MemmoveUnalignedSrc/4096-6  37.3GB/s ± 0%   42.7GB/s ± 0%   +14.30% (p=0.000 n=17+17)

Change-Id: Id66aa3e499ccfb117cb99d623ef326b50d057b64
Reviewed-on: https://go-review.googlesource.com/29590
Run-TryBot: Denis Nagorny <denis.nagorny@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-10-06 10:21:58 +00:00
Ian Lance Taylor e5421e21ef runtime: add threadprof tag for test that starts busy thread
The CgoExternalThreadSIGPROF test starts a thread at constructor time
that does a busy loop. That can throw off some other tests. So only
build that code if testprogcgo is built with the tag threadprof, and
adjust the tests that use that code to pass that build tag.

This revealed that the CgoPprofThread test was not testing what it
should have, as it never actually started the cpuHog thread. It was
passing because of the busy loop thread. Fix it to start the thread as
intended.

Change-Id: I087a9e4fc734a86be16a287456441afac5676beb
Reviewed-on: https://go-review.googlesource.com/30362
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-06 01:23:09 +00:00
Lynn Boger 3107c91e2d runtime: memclr perf improvements on ppc64x
This updates runtime/memclr_ppc64x.s to improve performance,
by unrolling loops for larger clears.

Fixes #17348

benchmark                    old MB/s     new MB/s     speedup
BenchmarkMemclr/5-80         199.71       406.63       2.04x
BenchmarkMemclr/16-80        693.66       1817.41      2.62x
BenchmarkMemclr/64-80        2309.35      5793.34      2.51x
BenchmarkMemclr/256-80       5428.18      14765.81     2.72x
BenchmarkMemclr/4096-80      8611.65      27191.94     3.16x
BenchmarkMemclr/65536-80     8736.69      28604.23     3.27x
BenchmarkMemclr/1M-80        9304.94      27600.09     2.97x
BenchmarkMemclr/4M-80        8705.66      27589.64     3.17x
BenchmarkMemclr/8M-80        8575.74      23631.04     2.76x
BenchmarkMemclr/16M-80       8443.10      19240.68     2.28x
BenchmarkMemclr/64M-80       8390.40      9493.04      1.13x
BenchmarkGoMemclr/5-80       263.05       630.37       2.40x
BenchmarkGoMemclr/16-80      904.33       1148.49      1.27x
BenchmarkGoMemclr/64-80      2830.20      8756.70      3.09x
BenchmarkGoMemclr/256-80     6064.59      20299.46     3.35x

Change-Id: Ic76c9183c8b4129ba3df512ca8b0fe6bd424e088
Reviewed-on: https://go-review.googlesource.com/30373
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Munday <munday@ca.ibm.com>
Reviewed-by: David Chase <drchase@google.com>
2016-10-05 22:18:14 +00:00
Cherry Zhang 4c9a372946 runtime, cmd/internal/obj: get rid of rewindmorestack
In the function prologue, we emit a jump to the beginning of
the function immediately after calling morestack. And in the
runtime stack growing code, it decodes and emulates that jump.
This emulation was necessary before we had per-PC SP deltas,
since the traceback code assumed that the frame size was fixed
for the whole function, except on the first instruction where
it was 0. Since we now have per-PC SP deltas and PCDATA, we
can correctly record that the frame size is 0. This makes the
emulation unnecessary.

This may be helpful for a registerized calling convention, where
there may be unspills of arguments after calling morestack. It
also simplifies the runtime.

Change-Id: I7ebee31eaee81795445b33f521ab6a79624c4ceb
Reviewed-on: https://go-review.googlesource.com/30138
Reviewed-by: David Chase <drchase@google.com>
2016-10-05 18:19:46 +00:00
Michael Munday f15f1ff46f runtime/testdata/testprogcgo: add explicit return value to signalThread
Should fix the clang builder.

Change-Id: I3ee34581b6a7ec902420de72a8a08a2426997782
Reviewed-on: https://go-review.googlesource.com/30363
Run-TryBot: Michael Munday <munday@ca.ibm.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-05 15:36:00 +00:00
Ian Lance Taylor 6c13a1db2e runtime: don't call cgocallback from signal handler
Calling cgocallback from a signal handler can fail when using the race
detector. Calling cgocallback will lead to a call to newextram which
will call oneNewExtraM which will call racegostart. The racegostart
function will set up some race detector data structures, and doing that
will sometimes call the C memory allocator. If we are running the signal
handler from a signal that interrupted the C memory allocator, we will
crash or hang.

Instead, change the signal handler code to call needm and dropm. The
needm function will grab allocated m and g structures and initialize the
g to use the current stack--the signal stack. That is all we need to
safely call code that allocates memory and checks whether it needs to
split the stack. This may temporarily leave us with no m available to
run a cgo callback, but that is OK in this case since the code we call
will quickly either crash or call dropm to return the m.

Implementing this required changing some of the setSignalstackSP
functions to avoid a write barrier. These functions never need a write
barrier but in some cases generated one anyhow because on some systems
the ss_sp field is a pointer.

Change-Id: I3893f47c3a66278f85eab7f94c1ab11d4f3be133
Reviewed-on: https://go-review.googlesource.com/30218
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-10-05 13:21:49 +00:00
Ian Lance Taylor 7faf702396 runtime: avoid endless loop if printing the panic value panics
Change-Id: I56de359a5ccdc0a10925cd372fa86534353c6ca0
Reviewed-on: https://go-review.googlesource.com/30358
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-05 13:13:27 +00:00
Carl Mastrangelo c1e267cc73 runtime: make append only clear uncopied memory
Also add a benchmark that shows off the new behavior.  The
existing benchmarks reuse the same slice, and thus don't ever have
to clear memory.  Running the Append|Grow benchmarks in runtime:

name                              old time/op  new time/op  delta
AppendSliceLarge/1024Bytes-12      265ns ± 1%   265ns ± 3%     ~     (p=0.524 n=17+20)
AppendSliceLarge/4096Bytes-12      807ns ± 3%   772ns ± 1%   -4.38%  (p=0.000 n=20+20)
AppendSliceLarge/16384Bytes-12    3.20µs ± 4%  2.82µs ± 4%  -11.93%  (p=0.000 n=19+20)
AppendSliceLarge/65536Bytes-12    13.0µs ± 4%  11.0µs ± 3%  -15.22%  (p=0.000 n=20+20)
AppendSliceLarge/262144Bytes-12   62.7µs ± 1%  51.6µs ± 1%  -17.67%  (p=0.000 n=19+20)
AppendSliceLarge/1048576Bytes-12   337µs ± 3%   289µs ± 3%  -14.36%  (p=0.000 n=20+20)
GrowSliceBytes-12                 31.2ns ± 4%  31.4ns ±11%     ~     (p=0.308 n=19+18)
GrowSliceInts-12                  53.4ns ±14%  45.0ns ± 6%  -15.74%  (p=0.000 n=20+19)
GrowSlicePtr-12                   87.0ns ± 3%  83.3ns ± 3%   -4.26%  (p=0.000 n=18+17)
GrowSliceStruct24Bytes-12         88.9ns ± 5%  77.8ns ± 2%  -12.45%  (p=0.000 n=20+19)
Append-12                         17.2ns ± 1%  17.3ns ± 2%     ~     (p=0.464 n=18+17)
AppendGrowByte-12                 2.28ms ± 1%  1.92ms ± 2%  -15.65%  (p=0.000 n=20+18)
AppendGrowString-12                255ms ± 3%   253ms ± 4%     ~     (p=0.065 n=19+19)
AppendSlice/1Bytes-12             3.13ns ± 0%  3.11ns ± 1%   -0.65%  (p=0.000 n=17+18)
AppendSlice/4Bytes-12             3.02ns ± 2%  3.11ns ± 1%   +3.27%  (p=0.000 n=18+17)
AppendSlice/7Bytes-12             4.14ns ± 3%  4.13ns ± 2%     ~     (p=0.380 n=19+18)
AppendSlice/8Bytes-12             3.74ns ± 3%  3.68ns ± 1%   -1.76%  (p=0.000 n=19+18)
AppendSlice/15Bytes-12            4.03ns ± 2%  4.04ns ± 2%     ~     (p=0.261 n=19+20)
AppendSlice/16Bytes-12            4.03ns ± 2%  4.03ns ± 0%     ~     (p=0.062 n=18+17)
AppendSlice/32Bytes-12            3.23ns ± 4%  3.43ns ± 1%   +6.10%  (p=0.000 n=17+18)
AppendStr/1Bytes-12               3.51ns ± 1%  3.52ns ± 1%     ~     (p=0.321 n=18+19)
AppendStr/4Bytes-12               3.46ns ± 1%  3.46ns ± 1%     ~     (p=0.977 n=18+20)
AppendStr/8Bytes-12               3.18ns ± 1%  3.19ns ± 1%     ~     (p=0.650 n=16+17)
AppendStr/16Bytes-12              6.08ns ±27%  5.52ns ± 3%   -9.16%  (p=0.002 n=18+19)
AppendStr/32Bytes-12              3.71ns ± 1%  3.53ns ± 1%   -4.73%  (p=0.000 n=20+19)
AppendSpecialCase-12              17.7ns ± 1%  17.8ns ± 3%   +0.86%  (p=0.045 n=17+18)
AppendInPlace/NoGrow/Byte-12       375ns ± 1%   376ns ± 1%   +0.35%  (p=0.021 n=20+18)
AppendInPlace/NoGrow/1Ptr-12      1.01µs ± 1%  1.10µs ± 1%   +9.28%  (p=0.000 n=18+20)
AppendInPlace/NoGrow/2Ptr-12      1.85µs ± 2%  1.71µs ± 1%   -7.51%  (p=0.000 n=19+18)
AppendInPlace/NoGrow/3Ptr-12      2.57µs ± 2%  2.44µs ± 1%   -5.08%  (p=0.000 n=19+19)
AppendInPlace/NoGrow/4Ptr-12      3.52µs ± 2%  3.35µs ± 2%   -4.70%  (p=0.000 n=20+19)
AppendInPlace/Grow/Byte-12         212ns ± 1%   217ns ± 8%   +2.57%  (p=0.000 n=20+20)
AppendInPlace/Grow/1Ptr-12         214ns ± 2%   217ns ± 3%   +1.23%  (p=0.001 n=18+19)
AppendInPlace/Grow/2Ptr-12         298ns ± 2%   300ns ± 2%   +0.55%  (p=0.038 n=19+20)
AppendInPlace/Grow/3Ptr-12         367ns ± 2%   366ns ± 2%     ~     (p=0.452 n=20+18)
AppendInPlace/Grow/4Ptr-12         416ns ± 2%   411ns ± 2%   -1.18%  (p=0.000 n=20+19)
StackGrowth-12                    43.4ns ± 1%  43.4ns ± 0%     ~     (p=1.000 n=16+16)
StackGrowthDeep-12                11.4µs ± 4%  10.3µs ± 4%   -9.65%  (p=0.000 n=20+19)

Change-Id: I69a8afbd942c787c591d95b9d9439bd6db4d1e49
Reviewed-on: https://go-review.googlesource.com/30192
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-04 22:40:20 +00:00
Nick Craig-Wood 6b795e77df runtime: correct function name in throw message
Change-Id: I8fd271066925734c3f7196f64db04f27c4ce27cb
Reviewed-on: https://go-review.googlesource.com/30274
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-04 13:48:07 +00:00
Brad Fitzpatrick ad26bb5e30 all: use sort.Slice where applicable
I avoided anything in the compiler, or things which might be used by
the compiler in the future, since they need to build with Go 1.4.

I also avoided places where there was no benefit to changing it.

I probably missed some.

Updates #16721
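
An example of the sort.Slice form being adopted, for reference:

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        people := []struct {
            Name string
            Age  int
        }{{"Alice", 30}, {"Bob", 25}, {"Carol", 41}}

        // Before: define a named type with Len/Less/Swap and call sort.Sort.
        // After: a single call with an inline less function.
        sort.Slice(people, func(i, j int) bool { return people[i].Age < people[j].Age })
        fmt.Println(people)
    }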

Change-Id: Ib3c895ff475c6dec2d4322393faaf8cb6a6d4956
Reviewed-on: https://go-review.googlesource.com/30250
TryBot-Result: Gobot Gobot <gobot@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2016-10-04 05:10:56 +00:00
Austin Clements 7aab88a31e runtime: fix missing space in error message
Change-Id: I422708d50c3c727246e7991039877660ca034dc9
Reviewed-on: https://go-review.googlesource.com/30144
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-03 22:00:13 +00:00
Austin Clements bf776a988b runtime: document bmap.tophash
In particular, it wasn't obvious that some values are special (unless
you also found those special values), so document that it isn't
necessarily a hash value.

Change-Id: Iff292822b44408239e26cd882dc07be6df2c1d38
Reviewed-on: https://go-review.googlesource.com/30143
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-03 22:00:06 +00:00
Austin Clements 38f1df66ff runtime: make gcDumpObject useful on stack frames
gcDumpObject is often used on a stack pointer (for example, when
checkmark finds an unmarked object on the stack), but since stack
spans don't have an elemsize, it doesn't print any of the memory from
the frame. Make it at least slightly more useful by printing
everything between obj and obj+off (inclusive). While we're here, also
print out the span state.

Change-Id: I51be064ea8791b4a365865bfdc7afa7b5aaecfbd
Reviewed-on: https://go-review.googlesource.com/30142
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-03 21:59:54 +00:00
Austin Clements 6879dbde4e runtime: introduce a type for span states
Currently span states are untyped constants and the field is just a
uint8. Make this more type-safe by introducing a type for the span
state.

Change-Id: I369bf59fe6e8234475f4921611424fceb7d0a6de
Reviewed-on: https://go-review.googlesource.com/30141
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-03 21:59:45 +00:00
Austin Clements 99339dd445 runtime: weaken claim about SetFinalizer panicking
Currently the SetFinalizer documentation makes a strong claim that
SetFinalizer will panic if the pointer is not to an object allocated
by calling new, to a composite literal, or to a local variable. This
is not true. For example, it doesn't panic when passed the address of
a package-level variable. Nor can we practically make it true. For
example, we can't distinguish between passing a pointer to a composite
literal and passing a pointer to its first field.

Hence, weaken the guarantee to say that it "may" panic.

Updates #17311. (Might fix it, depending on what we want to do with
package-level variables.)
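
The package-level-variable case mentioned above looks like this; per the
description, it is accepted rather than panicking:

    package main

    import "runtime"

    var global int // not allocated by new or a composite literal

    func main() {
        // Accepted today (no panic), which is why the documentation can only
        // say that SetFinalizer "may" panic for such pointers.
        runtime.SetFinalizer(&global, func(*int) {})
    }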

Change-Id: I1c68ea9d0a5bbd3dd1b7ce329d92b0f05e2e0877
Reviewed-on: https://go-review.googlesource.com/30137
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-03 16:12:48 +00:00
Mike Appleby 360f2e43b7 runtime: sleep on CLOCK_MONOTONIC in futexsleep1 on freebsd
In FreeBSD 10.0, the _umtx_op syscall was changed to allow sleeping on
any supported clock, but the default clock was switched from a monotonic
clock to CLOCK_REALTIME.

Prior to 10.0, the __umtx_op_wait* functions ignored the fourth argument
to _umtx_op (uaddr1), expected the fifth argument (uaddr2) to be a
struct timespec pointer, and used a monotonic clock (nanouptime(9)) for
timeout calculations.

Since 10.0, if callers want a clock other than CLOCK_REALTIME, they must
call _umtx_op with uaddr1 set to a value greater than sizeof(struct
timespec), and with uaddr2 as pointer to a struct _umtx_time, rather
than a timespec. Callers can set the _clockid field of the struct
_umtx_time to request the clock they want.

The relevant FreeBSD commit:
    https://svnweb.freebsd.org/base?view=revision&revision=232144

Fixes #17168
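
Restating the post-10.0 calling convention from the text as a sketch (the
struct layout and argument shape follow the description above; the runtime's
actual FreeBSD code differs and is partly in assembly):

    package sketch

    import "unsafe"

    type timespec struct {
        tv_sec  int64
        tv_nsec int64
    }

    // _umtx_time as described: a timespec plus flags and the clock to use.
    type umtxTime struct {
        timeout timespec
        flags   uint32
        clockid uint32 // e.g. a monotonic clock instead of CLOCK_REALTIME
    }

    // Shape of the call: uaddr1 carries a size larger than sizeof(timespec)
    // to signal that uaddr2 points at a umtxTime rather than a bare timespec.
    var (
        uaddr1 = uintptr(unsafe.Sizeof(umtxTime{})) // > sizeof(timespec)
        ut     umtxTime
        uaddr2 = unsafe.Pointer(&ut)
    )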

Change-Id: I3dd7b32b683622b8d7b4a6a8f9eb56401bed6bdf
Reviewed-on: https://go-review.googlesource.com/30154
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-01 01:25:21 +00:00
David Crawshaw 441502154f runtime: remove defer from standard cgo call
The endcgo function call is currently deferred in case a cgo
callback into Go panics and unwinds through cgocall. Typical cgo
calls do not have callbacks into Go, and even fewer panic, so we
pay the cost of this defer for no typical benefit.

Amazingly, there is another defer on the cgocallback path also used
to cleanup in case the Go code called by cgo panics. This CL folds
the first defer into the second, to reduce the cost of typical cgo
calls.

This reduces the overhead for a no-op cgo call significantly:

	name       old time/op  new time/op  delta
	CgoNoop-8  93.5ns ± 0%  51.1ns ± 1%  -45.34%  (p=0.016 n=4+5)

The total effect between Go 1.7 and 1.8 is even greater, as CL 29656
reduced the cost of defer recently. Hopefully a future Go release
will drop the cost of defer to nothing, making this optimization
unnecessary. But until then, this is nice.

Change-Id: Id1a5648f687a87001d95bec6842e4054bd20ee4f
Reviewed-on: https://go-review.googlesource.com/30080
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-30 13:31:07 +00:00
Matthew Dempsky 9828b7c468 runtime, syscall: use FP instead of SP for parameters
Consistently access function parameters using the FP pseudo-register
instead of SP (e.g., x+0(FP) instead of x+4(SP) or x+8(SP), depending
on register size). Two reasons: 1) doc/asm says the SP pseudo-register
should use negative offsets in the range [-framesize, 0), and 2)
cmd/vet only validates parameter offsets when indexed from the FP
pseudo-register.

No binary changes to the compiled object files for any of the affected
package/OS/arch combinations.

Change-Id: I0efc6079bc7519fcea588c114ec6a39b245d68b0
Reviewed-on: https://go-review.googlesource.com/30085
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-30 05:40:43 +00:00
Mikio Hara a0d330f1dd runtime, runtime/cgo: revert CL 18814; don't drop signal stack in new thread on dragonfly
This change reverts CL 18814, which was a workaround for older DragonFly
BSD kernels, and fixes #13945 and #13947 in a more general way, the
same as on other platforms except NetBSD.

This is a followup to CL 29491.

Updates #16329.

Change-Id: I771670bc672c827f2b3dbc7fd7417c49897cb991
Reviewed-on: https://go-review.googlesource.com/29971
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-28 14:45:06 +00:00
Ian Lance Taylor eb268cb321 runtime: minor simplifications to signal code
Change setsig, setsigstack, getsig, raise, raiseproc to take uint32 for
the signal number parameter, as that is the type mostly used for signal
numbers.  Same for dieFromSignal, sigInstallGoHandler, raisebadsignal.

Remove setsig restart parameter, as it is always either true or
irrelevant.

Don't check the handler in setsigstack, as the only caller does that
anyhow.

Don't bother to convert the handler from sigtramp to sighandler in
getsig, as it will never be called when the handler is sigtramp or
sighandler.

Don't check the return value from rt_sigaction in the GNU/Linux version
of setsigstack; no other setsigstack checks it, and it never fails.

Change-Id: I6bbd677e048a77eddf974dd3d017bc3c560fbd48
Reviewed-on: https://go-review.googlesource.com/29953
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-28 13:12:47 +00:00
Alex Brainman db82cf4e50 runtime: use RtlGenRandom instead of CryptGenRandom
This change replaces the use of CryptGenRandom with RtlGenRandom in
Windows to generate cryptographically random numbers during process
startup. RtlGenRandom uses the same RNG as CryptGenRandom, but it has many
fewer DLL dependencies and so does not affect process startup time as
much.

This makes running a simple Go program on my computers faster.

Windows XP:
benchmark                      old ns/op     new ns/op     delta
BenchmarkRunningGoProgram-2     47408573      10784148      -77.25%

Windows 7 (VM):
benchmark                    old ns/op     new ns/op     delta
BenchmarkRunningGoProgram     16260390      12792150      -21.33%

Windows 7:
benchmark                      old ns/op     new ns/op     delta
BenchmarkRunningGoProgram-2     13600778      10050574      -26.10%

Fixes #15589

Change-Id: I2816239a2056e3d4a6dcd86a6fa2bb619c6008fe
Reviewed-on: https://go-review.googlesource.com/29700
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-28 05:21:53 +00:00
Ian Lance Taylor fdc167164e runtime: remove sigmask type, use sigset instead
The OS-independent sigmask type was not pulling its weight. Replace it
with the OS-dependent sigset type. This requires adding an OS-specific
sigaddset function, but permits removing the OS-specific sigmaskToSigset
function.

Change-Id: I43307b512b0264ec291baadaea902f05ce212305
Reviewed-on: https://go-review.googlesource.com/29950
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-27 21:33:44 +00:00
Ian Lance Taylor 097a581dc0 runtime: simplify signalstack by dropping nil as argument
Change the two calls to signalstack(nil) to inline the code
instead (it's two lines).

Change-Id: Ie92a05494f924f279e40ac159f1b677fda18f281
Reviewed-on: https://go-review.googlesource.com/29854
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-27 17:08:29 +00:00
Elias Naur 60482d8a8b runtime: relax SetFinalizer documentation to allow &local
The SetFinalizer documentation states that

"The argument obj must be a pointer to an object allocated by calling
new or by taking the address of a composite literal."

which precludes pointers to local variables. According to a comment
on #6591, this case is expected to work. This CL updates the documentation
for SetFinalizer accordingly.

Fixes #6591

Change-Id: Id861b3436bc1c9521361ea2d51c1ce74a121c1af
Reviewed-on: https://go-review.googlesource.com/29592
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-09-27 16:38:47 +00:00
Michael Munday 3436f0776f reflect, runtime: optimize Value.Call on s390x and add benchmark
Use an MVC loop to copy arguments in runtime.call* rather than copying
bytes individually.

I've added the benchmark CallArgCopy to test the speed of Value.Call
for various argument sizes.
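
For context, this is the call path the MVC loop speeds up; every
reflect.Value.Call copies its arguments into a runtime-managed frame before
the call (plain usage example):

    package main

    import (
        "fmt"
        "reflect"
        "strings"
    )

    func main() {
        f := reflect.ValueOf(strings.Repeat)
        args := []reflect.Value{reflect.ValueOf("ab"), reflect.ValueOf(3)}
        out := f.Call(args)          // arguments are copied into the call frame here
        fmt.Println(out[0].String()) // "ababab"
    }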

name                    old speed      new speed       delta
CallArgCopy/size=128     439MB/s ± 1%    582MB/s ± 1%   +32.41%  (p=0.000 n=10+10)
CallArgCopy/size=256     695MB/s ± 1%   1172MB/s ± 1%   +68.67%  (p=0.000 n=10+10)
CallArgCopy/size=1024    573MB/s ± 8%   4175MB/s ± 2%  +628.11%  (p=0.000 n=10+10)
CallArgCopy/size=4096   1.46GB/s ± 2%  10.19GB/s ± 1%  +600.52%  (p=0.000 n=10+10)
CallArgCopy/size=65536  1.51GB/s ± 0%  12.30GB/s ± 1%  +716.30%   (p=0.000 n=9+10)

Change-Id: I87dae4809330e7964f6cb4a9e40e5b3254dd519d
Reviewed-on: https://go-review.googlesource.com/28096
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bill O'Farrell <billotosyr@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-27 16:00:19 +00:00
Cherry Zhang 9d4b40f55d runtime, cmd/compile: implement and use DUFFCOPY on ARM64
Change-Id: I8984eac30e5df78d4b94f19412135d3cc36969f8
Reviewed-on: https://go-review.googlesource.com/29910
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-09-27 15:07:31 +00:00
Austin Clements f8b2314c56 runtime: optimize defer code
This optimizes deferproc and deferreturn in various ways.

The most important optimization is that it more carefully arranges to
prevent preemption or stack growth. Currently we do this by switching
to the system stack on every deferproc and every deferreturn. While we
need to be on the system stack for the slow path of allocating and
freeing defers, in the common case we can fit in the nosplit stack.
Hence, this change pushes the system stack switch down into the slow
paths and makes everything now exposed to the user stack nosplit. This
also eliminates the need for various acquirem/releasem pairs, since we
are now preventing preemption by preventing stack split checks.

As another smaller optimization, we special case the common cases of
zero-sized and pointer-sized defer frames to respectively skip the
copy and perform the copy in line instead of calling memmove.

This speeds up the runtime defer benchmark by 42%:

name           old time/op  new time/op  delta
Defer-4        75.1ns ± 1%  43.3ns ± 1%  -42.31%   (p=0.000 n=8+10)

In reality, this speeds up defer by about 2.2X. The two benchmarks
below compare a Lock/defer Unlock pair (DeferLock) with a Lock/Unlock
pair (NoDeferLock). NoDeferLock establishes a baseline cost, so these
two benchmarks together show that this change reduces the overhead of
defer from 61.4ns to 27.9ns.

name           old time/op  new time/op  delta
DeferLock-4    77.4ns ± 1%  43.9ns ± 1%  -43.31%  (p=0.000 n=10+10)
NoDeferLock-4  16.0ns ± 0%  15.9ns ± 0%   -0.39%    (p=0.000 n=9+8)

This also shaves 34ns off cgo calls:

name       old time/op  new time/op  delta
CgoNoop-4   122ns ± 1%  88.3ns ± 1%  -27.72%  (p=0.000 n=8+9)

Updates #14939, #16051.

Change-Id: I2baa0dea378b7e4efebbee8fca919a97d5e15f38
Reviewed-on: https://go-review.googlesource.com/29656
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-26 22:01:35 +00:00
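
The DeferLock/NoDeferLock comparison above boils down to the following pair of benchmarks; this is an approximation written for illustration, not the runtime's own benchmark code:

    package deferbench

    import (
        "sync"
        "testing"
    )

    var mu sync.Mutex

    // Pays for the defer machinery on every iteration.
    func BenchmarkDeferLock(b *testing.B) {
        for i := 0; i < b.N; i++ {
            func() {
                mu.Lock()
                defer mu.Unlock()
            }()
        }
    }

    // Baseline: the cost of Lock/Unlock alone, without defer.
    func BenchmarkNoDeferLock(b *testing.B) {
        for i := 0; i < b.N; i++ {
            mu.Lock()
            mu.Unlock()
        }
    }
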
Austin Clements d211c2d377 runtime: implement getcallersp in Go
This makes it possible to inline getcallersp. getcallersp is on the
hot path of defers, so this slightly speeds up defer:

name           old time/op  new time/op  delta
Defer-4        78.3ns ± 2%  75.1ns ± 1%  -4.00%   (p=0.000 n=9+8)

Updates #14939.

Change-Id: Icc1cc4cd2f0a81fc4c8344432d0b2e783accacdd
Reviewed-on: https://go-review.googlesource.com/29655
TryBot-Result: Gobot Gobot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-26 22:01:32 +00:00
Austin Clements aaf4099a5c runtime: update malloc.go documentation
The big documentation comment at the top of malloc.go has gotten
woefully out of date. Update it.

Change-Id: Ibdb1bdcfdd707a6dc9db79d0633a36a28882301b
Reviewed-on: https://go-review.googlesource.com/29731
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-26 22:00:53 +00:00
Austin Clements f67c9de656 runtime: document MemStats
This documents all fields in MemStats and more clearly documents where
mstats differs from MemStats.

Fixes #15849.

Change-Id: Ie09374bcdb3a5fdd2d25fe4bba836aaae92cb1dd
Reviewed-on: https://go-review.googlesource.com/28972
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
2016-09-26 22:00:50 +00:00
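
For reference, reading the now-documented MemStats fields only requires runtime.ReadMemStats; the fields printed below are a small, arbitrary selection:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("HeapAlloc   = %d bytes of allocated heap objects\n", m.HeapAlloc)
        fmt.Printf("HeapSys     = %d bytes of heap memory obtained from the OS\n", m.HeapSys)
        fmt.Printf("HeapObjects = %d allocated heap objects\n", m.HeapObjects)
        fmt.Printf("NumGC       = %d completed GC cycles\n", m.NumGC)
    }
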
Austin Clements 2098e5d39a runtime: eliminate memstats.heap_reachable
We used to compute an estimate of the reachable heap size that was
different from the marked heap size. This ultimately caused more
problems than it solved, so we pulled it out, but memstats still has
both heap_reachable and heap_marked, and there are some leftover TODOs
about the problems with this estimate.

Clean this up by eliminating heap_reachable in favor of heap_marked
and deleting the stale TODOs.

Change-Id: I713bc20a7c90683d2b43ff63c0b21a440269cc4d
Reviewed-on: https://go-review.googlesource.com/29271
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-26 22:00:47 +00:00
Austin Clements ec9c84c884 runtime: disentangle next_gc from GC trigger
Back in Go 1.4, memstats.next_gc was both the heap size at which GC
would trigger, and the size GC kept the heap under. When we switched
to concurrent GC in Go 1.5, we got somewhat confused and made this
variable the trigger heap size, while gcController.heapGoal became the
goal heap size.

memstats.next_gc is exposed to the user via MemStats.NextGC, while
gcController.heapGoal is not. This is unfortunate because 1) the heap
goal is far more useful for diagnostics, and 2) the trigger heap size
is just part of the GC trigger heuristic, which means it wouldn't be
useful to an application even if it tried to use it.

We never noticed this mess because MemStats.NextGC is practically
undocumented. Now that we're trying to document MemStats, it became
clear that this field had diverged from its original usefulness.

Clean up this mess by shuffling things back around so that next_gc is
the goal heap size and the new (unexposed) memstats.gc_trigger field
is the trigger heap size. This eliminates gcController.heapGoal.

Updates #15849.

Change-Id: I2cbbd43b1d78bdf613cb43f53488bd63913189b7
Reviewed-on: https://go-review.googlesource.com/29270
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-26 22:00:44 +00:00
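
With this change MemStats.NextGC reports the goal heap size. A rough way to observe that, assuming the default GOGC=100 (so the goal is roughly twice the live heap measured at the end of the previous cycle):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        ballast := make([]byte, 64<<20) // keep roughly 64 MB live
        runtime.GC()                    // finish a cycle so the stats are fresh

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // With GOGC=100 the goal is about 2x the live heap, so NextGC
        // should be in the neighborhood of 128 MB here.
        fmt.Printf("HeapAlloc = %d MB, NextGC goal = %d MB\n",
            m.HeapAlloc>>20, m.NextGC>>20)
        runtime.KeepAlive(ballast)
    }
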
Ian Lance Taylor e2e11f02a4 runtime: unify Unix implementations of unminit
Change-Id: I2cbb13eb85876ad05a52cbd498a9b86e7a28899c
Reviewed-on: https://go-review.googlesource.com/29772
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-26 19:33:26 +00:00
Ian Lance Taylor ac24388e5e runtime: merge setting new signal mask in minit
All the variants that set the new signal mask in minit do the same
thing, so merge them. This requires an OS-specific sigdelset function;
the function already exists for linux, and is now added for other OS's.

Change-Id: Ie96f6f02e2cf09c43005085985a078bd9581f670
Reviewed-on: https://go-review.googlesource.com/29771
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-26 18:29:32 +00:00
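
The sigdelset mentioned above clears a single signal's bit in a signal mask. A self-contained, hypothetical version of that operation (the runtime's own helpers are unexported and OS-specific, so this only illustrates the bit manipulation):

    package main

    import "fmt"

    // sigset is a hypothetical 64-bit signal mask; signal numbers start at 1.
    type sigset uint64

    func sigaddset(mask *sigset, sig int) { *mask |= 1 << (uint(sig) - 1) }
    func sigdelset(mask *sigset, sig int) { *mask &^= 1 << (uint(sig) - 1) }

    func main() {
        var mask sigset
        sigaddset(&mask, 17) // 17 is SIGCHLD on Linux
        sigdelset(&mask, 17)
        fmt.Printf("mask = %#x\n", mask) // back to 0
    }
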
Ian Lance Taylor c2735039f3 runtime: unify sigtrampgo
Combine the various versions of sigtrampgo into a single function in
signal_unix.go. This requires defining a fixsigcode method on sigctxt
for all operating systems; it only does something on Darwin. This also
requires changing the darwin/amd64 signal handler to call sigreturn
itself, rather than relying on sigtrampgo to call sigreturn for it. We
can then drop the Darwin sigreturn function, as it is no longer used.

Change-Id: I5a0b9d2d2c141957e151b41e694efeb20e4b4b9a
Reviewed-on: https://go-review.googlesource.com/29761
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-26 17:22:42 +00:00
Ian Lance Taylor d15295c679 runtime: unify handling of alternate signal stack
Change all Unix systems to use stackt for the alternate signal
stack (some were using sigaltstackt). Add OS-specific setSignalstackSP
function to handle different types for ss_sp field, and unify all
OS-specific signalstack functions into one. Unify handling of alternate
signal stack in OS-specific minit and sigtrampgo functions via new
functions minitSignalstack and setGsignalStack.

Change-Id: Idc316dc69b1dd725717acdf61a1cd8b9f33ed174
Reviewed-on: https://go-review.googlesource.com/29757
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-26 04:07:31 +00:00
Ian Lance Taylor f05cd4cde5 runtime: simplify conditions testing g.paniconfault
Implement a comment by Ralph Corderoy on CL 29754.

Change-Id: I22bbede211ddcb8a057f16b4f47d335a156cc8d2
Reviewed-on: https://go-review.googlesource.com/29756
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-25 20:29:50 +00:00
Ian Lance Taylor 343bec53c7 runtime: merge sigpanic_unix.go into signal_unix.go
Change-Id: Iba541045b4878405834c637095627631b6559a35
Reviewed-on: https://go-review.googlesource.com/29754
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-25 19:27:05 +00:00
Dmitry Vyukov 38765eba73 runtime/race: don't crash on invalid PCs
Currently raceSymbolizeCode uses funcline, which is an internal runtime
function that crashes on incorrect PCs. Use FileLine instead; it is
public and does not crash on invalid data.

Note: FileLine returns the file "?" on failure. That string is not
NUL-terminated, so we need to additionally check what FileLine returns.

Fixes #17190

Change-Id: Ic6fbd4f0e68ddd52e9b2dd25e625b50adcb69a98
Reviewed-on: https://go-review.googlesource.com/29714
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-25 12:22:04 +00:00
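
FileLine here is the exported (*runtime.Func).FileLine. The defensive pattern described above, sketched against the public API rather than the race runtime's internal glue:

    package main

    import (
        "fmt"
        "runtime"
    )

    // symbolize resolves pc to file:line, tolerating PCs the runtime does
    // not know about instead of crashing on them.
    func symbolize(pc uintptr) string {
        fn := runtime.FuncForPC(pc)
        if fn == nil {
            return "??:0"
        }
        file, line := fn.FileLine(pc)
        if file == "?" { // FileLine reports "?" when it cannot resolve the PC
            return "??:0"
        }
        return fmt.Sprintf("%s:%d", file, line)
    }

    func main() {
        pc, _, _, _ := runtime.Caller(0)
        fmt.Println(symbolize(pc))
        fmt.Println(symbolize(0)) // an obviously invalid PC
    }
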
Dmitry Vyukov c14050646f runtime: fix newextram PC passed to race detector
The PC passed to racegostart is expected to be the return PC
of the go statement. The race runtime subtracts 1 from the PC
before symbolization, so passing the start PC of a function is wrong.
Add sys.PCQuantum to the function's start PC.

Update #17190

Change-Id: Ia504c49e79af84ed4ea360c2aea472b370ea8bf5
Reviewed-on: https://go-review.googlesource.com/29712
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-25 12:15:40 +00:00
Ian Lance Taylor 159a90b93a runtime: merge Unix sighandler functions
Replace all the Unix sighandler functions with a single instance.
Push the relatively small amount of processor-specific code into five
methods on sigctxt: sigpc, sigsp, siglr, fault, preparePanic.
(Some processors already had a fault method.)

Change-Id: Ib459412ff8f7e0f5ad06bfd43eb827c8b196fc32
Reviewed-on: https://go-review.googlesource.com/29752
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-09-25 03:55:33 +00:00
Keith Randall 60074b0fd3 runtime: remove TestCollisions from -short
It takes a bit too long to run all the time.

Fixes #17217
Update #17104

Change-Id: I4802190ea16ee0f436a7f95b093ea0f995f5b11d
Reviewed-on: https://go-review.googlesource.com/29751
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-24 03:10:13 +00:00
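
Presumably the change uses the usual testing.Short gate; a generic sketch of that pattern (the real TestCollisions body lives in the runtime tests and is not reproduced here):

    package hash_test

    import "testing"

    func TestCollisions(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping in -short mode: the full collision sweep is slow")
        }
        // ... exhaustive collision checks run only in full test mode ...
    }
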
Ian Lance Taylor ab552aa3b6 runtime: unify some signal handling functions
Unify the OS-specific versions of msigsave, msigrestore, sigblock,
updatesigmask, and unblocksig into single versions in signal_unix.go.
To do this, make sigprocmask work the same way on all systems, which
required adding a definition of sigprocmask for linux and openbsd.
Also add a single OS-specific function sigmaskToSigset.

Change-Id: I7cbf75131dddb57eeefe648ef845b0791404f785
Reviewed-on: https://go-review.googlesource.com/29689
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2016-09-24 01:39:48 +00:00
David Crawshaw ed915ad421 runtime: use sched_yield instead of pthread_yield
Attempt to fix the linux-amd64-clang builder, which broke
with CL 29472.

It turns out that pthread_yield is a non-portable Linux function and
requires #define _GNU_SOURCE before #include <pthread.h>.
GCC doesn't complain about this, but Clang does:

	./raceprof.go:44:3: warning: implicit declaration of function 'pthread_yield' is invalid in C99 [-Wimplicit-function-declaration]

(Though the error, while explicable, certainly could be clearer.)

There is a portable POSIX equivalent, sched_yield, so this
CL uses it instead.

Change-Id: I58ca7a3f73a2b3697712fdb02e72a8027c391169
Reviewed-on: https://go-review.googlesource.com/29675
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-23 04:32:38 +00:00
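
A minimal cgo sketch of the portable call the CL switches to; the wrapper name is made up, and only the sched_yield vs pthread_yield distinction comes from the entry above:

    package main

    /*
    #include <sched.h>

    static void yieldOnce(void) {
        // sched_yield is POSIX; pthread_yield is Linux-only and needs
        // _GNU_SOURCE for <pthread.h> to declare it.
        sched_yield();
    }
    */
    import "C"

    func main() {
        C.yieldOnce()
    }
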
David Crawshaw b4c9829c22 runtime: check plugin-loaded moduledata addresses
Inspired by difficulties with plugin support on darwin.

Change-Id: I2cef8410837946454e75d00e94e46791f03f2267
Reviewed-on: https://go-review.googlesource.com/29391
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-23 02:16:17 +00:00
Ian Lance Taylor 9d8522fdc7 cmd/compile: don't instrument copy and append in runtime
Instrumenting copy and append for the race detector changes them to call
different functions. In the runtime package the alternate functions are
not marked as nosplit. This caused a crash in the SIGPROF handler when
invoked on a non-Go thread in a program built with the race detector. In
some cases the handler can call copy; the race detector changed that into
a call to a non-nosplit function, which then tried to check the stack
guard and crashed because it was running on a non-Go thread. The
SIGPROF handler is written carefully to avoid such problems, but hidden
function calls are difficult to avoid.

Fix this by changing the compiler to not instrument copy and append when
compiling the runtime package. Change the runtime package to add
explicit race checks for the only code I could find where copy is used
to write to user data (append is never used).

Change-Id: I11078a66c0aaa459a7d2b827b49f4147922050af
Reviewed-on: https://go-review.googlesource.com/29472
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2016-09-22 23:37:17 +00:00
Ian Lance Taylor 9e028b70ea runtime: merge signal[12]_unix.go into signal_unix.go
Requires adding a sigfwd function for Solaris, as previously
signal2_unix.go was not built for Solaris.

Change-Id: Iea3ff0ddfa15af573813eb075bead532b324a3fc
Reviewed-on: https://go-review.googlesource.com/29550
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-21 23:04:34 +00:00
Mikio Hara 5db80c30e6 runtime: revert CL 18835; don't install new signal stack unconditionally on dragonfly
This change reverts CL 18835, which was a workaround for older DragonFly
BSD kernels, and fixes #14051, #14052 and #14067 in a more general way,
the same as on other platforms except NetBSD.

This change also bumps the minimum required version of DragonFly BSD
kernel to 4.4.4.

Fixes #16329.

Change-Id: I0b44b6afa675f5ed9523914226bd9ec4809ba5ae
Reviewed-on: https://go-review.googlesource.com/29491
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-21 22:18:06 +00:00
Lynn Boger b4efd09d18 cmd/link: split large elf text sections on ppc64x
Some applications built with Go on ppc64x with external linking
can fail to link with relocation truncation errors if the elf
text section that is generated is larger than 2^26 bytes and that
section contains a call instruction (bl) which calls a function
beyond the limit addressable by the 24-bit field in the
instruction.

This solution consists of generating multiple text sections where
each is small enough to allow the GNU linker to resolve the calls
by generating long branch code where needed.  Other changes were added
to handle differences in processing when multiple text sections exist.

Some adjustments were required to the computation of a method's address
when using the method offset table when there are multiple text sections.

The number of possible section headers was increased to allow for up
to 128 text sections.  A test case was also added.

Fixes #15823.

Change-Id: If8117b0e0afb058cbc072258425a35aef2363c92
Reviewed-on: https://go-review.googlesource.com/27790
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-21 20:23:49 +00:00
Austin Clements c03925edd3 runtime: remove unnecessary atomics from heapBitSetType
These used to be necessary when racing with updates to the mark bit,
but since the mark bit is no longer in the bitmap and the checkmark is
only updated with the world stopped, we can now always use regular
writes to update the type information in the heap bitmap.

Somewhat surprisingly, this has basically no overall performance
effect beyond the usual noise, but it does clean up the code.

Change-Id: I3933d0b4c0bc1c9bcf6313613515c0b496212105
Reviewed-on: https://go-review.googlesource.com/29277
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-21 15:08:16 +00:00
Austin Clements ab59235729 runtime: consistency check for G rescan position
Issue #17099 shows a failure that indicates we rescanned a stack twice
concurrently during mark termination, which suggests that the rescan
list became inconsistent. Add a simple check when we dequeue something
from the rescan list that it claims to be at the index where we found
it.

Change-Id: I6a267da4154a2e7b7d430cb4056e6bae978eaf62
Reviewed-on: https://go-review.googlesource.com/29280
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-20 18:37:32 +00:00
Austin Clements 39ce6eb9ec runtime: report GCSys and OtherSys in heap profile
The comment block at the end of the heap profile includes *almost*
everything from MemStats. Add the missing fields. These are useful for
debugging RSS that has gone to GC-internal data structures.

Change-Id: I0ee8a918d49629e28fd8fd2bf6861c4529461c24
Reviewed-on: https://go-review.googlesource.com/29276
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-09-20 18:37:29 +00:00
Cherry Zhang 38cd79889e cmd/compile: simplify div/mod on ARM
On ARM, DIV, DIVU, MOD, and MODU are pseudo-instructions that make
runtime calls to _div/_udiv/_mod/_umod, which themselves are wrappers
around udiv. The udiv function does the real work.

Instead of generating these pseudo-instructions, call udiv
directly. This removes one layer of wrappers (which had an awkward
way of passing arguments), and also allows combining DIV and MOD
when both results are needed.

Change-Id: I118afc3986db3a1daabb5c1e6e57430888c91817
Reviewed-on: https://go-review.googlesource.com/29390
Reviewed-by: David Chase <drchase@google.com>
2016-09-20 13:40:48 +00:00
Keith Randall 6129f37367 cmd/compile: inline convT2{I,E} when result doesn't escape
No point in calling a function when we can build the interface
using a known type (or itab) and the address of a local.

Get rid of third arg (preallocated stack space) to convT2{I,E}.

Makes go binary smaller by 0.2%

benchmark                   old ns/op     new ns/op     delta
BenchmarkEfaceInteger-8     16.7          10.1          -39.52%

Update #17118
Update #15375

Change-Id: I9724a1f802bfa1e3957bf1856b55558278e198a2
Reviewed-on: https://go-review.googlesource.com/29373
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2016-09-19 02:37:08 +00:00
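
The conversions this helps are concrete values placed into interfaces that do not escape, as in the BenchmarkEfaceInteger numbers above. An illustrative benchmark in that spirit (not the CL's own code):

    package conv_test

    import "testing"

    var sink int

    func BenchmarkEfaceInteger(b *testing.B) {
        for i := 0; i < b.N; i++ {
            // The interface value e does not escape, so after this CL the
            // compiler can build it from the known type and a local copy
            // of i instead of calling convT2E.
            var e interface{} = i
            if v, ok := e.(int); ok {
                sink += v
            }
        }
    }
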
David Crawshaw 0cbb12f0bb plugin: new package for loading plugins
Includes a linux implementation.

Change-Id: Iacc2ed7da760ae9deebc928adf2b334b043b07ec
Reviewed-on: https://go-review.googlesource.com/27823
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-16 17:54:40 +00:00
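
The package's public surface is small: Open a shared object built with -buildmode=plugin and Lookup its exported symbols. A hedged usage sketch; the plugin path and the Greet symbol are hypothetical:

    package main

    import (
        "fmt"
        "log"
        "plugin"
    )

    func main() {
        // Built beforehand with, e.g.:
        //   go build -buildmode=plugin -o greeter.so ./greeter
        p, err := plugin.Open("greeter.so")
        if err != nil {
            log.Fatal(err)
        }
        sym, err := p.Lookup("Greet") // hypothetical exported func in the plugin
        if err != nil {
            log.Fatal(err)
        }
        greet, ok := sym.(func(string) string)
        if !ok {
            log.Fatal("Greet has an unexpected type")
        }
        fmt.Println(greet("gopher"))
    }
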
David Crawshaw 8607bed744 runtime: avoid dependence on main symbol
For -buildmode=plugin, this lets the linker drop the main.main symbol
out of the binary while including most of the runtime.

(In the future it should be possible to drop the entire runtime
package from plugins.)

Change-Id: I3e7a024ddf5cc945e3d8b84bf37a0b7cb2a00eb6
Reviewed-on: https://go-review.googlesource.com/27821
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-16 14:49:27 +00:00
David Crawshaw eced6754c2 cmd/link: -buildmode=plugin support for linux
This CL contains several linker changes to support creating plugins.

It collects the exported plugin symbols provided by the compiler and
includes them in the moduledata.

It treats a binary as being dynamically linked if it imports the plugin
package. This lets the dynamic linker de-duplicate symbols.

Change-Id: I099b6f38dda26306eba5c41dbe7862f5a5918d95
Reviewed-on: https://go-review.googlesource.com/27820
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-09-16 14:49:13 +00:00
Matthew Dempsky 27eebbabc2 cmd/compile, runtime: remove throwreturn
Change-Id: If8d27cf1cd8d650ed0ba332448d3174d80b6b0ca
Reviewed-on: https://go-review.googlesource.com/29217
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-15 12:13:34 +00:00
Martin Möhrmann 150de948ee cmd/compile: intrinsify slicebytetostringtmp when not instrumenting
when not instrumenting:
- Intrinsify uses of slicebytetostringtmp within the runtime package
  in the ssa backend.
- Pass OARRAYBYTESTRTMP nodes to the compiler backends for lowering
  instead of generating calls to slicebytetostringtmp.

name                    old time/op  new time/op  delta
ConcatStringAndBytes-4  27.9ns ± 2%  24.7ns ± 2%  -11.52%  (p=0.000 n=43+43)

Fixes #17044

Change-Id: I51ce9c3b93284ce526edd0234f094e98580faf2d
Reviewed-on: https://go-review.googlesource.com/29017
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-14 21:58:14 +00:00
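
slicebytetostringtmp backs temporary []byte-to-string conversions such as the one in the ConcatStringAndBytes benchmark quoted above. A sketch of code whose conversion can be lowered this way (names are illustrative):

    package concat_test

    import "testing"

    var (
        prefix = "id="
        raw    = []byte("0123456789abcdef")
        sink   string
    )

    func BenchmarkConcatStringAndBytes(b *testing.B) {
        for i := 0; i < b.N; i++ {
            // string(raw) is only read while building the concatenation,
            // so it qualifies as a temporary and needs no separate copy.
            sink = prefix + string(raw)
        }
    }
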
Josh Bleecher Snyder 9980b70cb4 runtime: limit the number of map overflow buckets
Consider repeatedly adding many items to a map
and then deleting them all, as in #16070. The map
itself doesn't need to grow above the high-water
mark of the number of items. However, due to random
collisions, the map can accumulate overflow
buckets.

Prior to this CL, those overflow buckets were
never removed, which led to a slow memory leak.

The problem with removing overflow buckets is
iterators. The obvious approach is to repack
keys and values and eliminate unused overflow
buckets. However, keys, values, and overflow
buckets cannot be manipulated without disrupting
iterators.

This CL takes a different approach, which is to
reuse the existing map growth mechanism,
which is well established, well tested, and
safe in the presence of iterators.
When a map has accumulated enough overflow buckets
we trigger map growth, but grow into a map of the
same size as before. The old overflow buckets will
be left behind for garbage collection.

For the code in #16070, instead of climbing
(very slowly) forever, memory usage now cycles
between 264mb and 483mb every 15 minutes or so.

To avoid increasing the size of maps,
the overflow bucket counter is only 16 bits.
For large maps, the counter is incremented
stochastically.

Fixes #16070

Change-Id: If551d77613ec6836907efca58bda3deee304297e
Reviewed-on: https://go-review.googlesource.com/25049
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-13 17:53:32 +00:00
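
The access pattern from #16070 that this addresses, reduced to a sketch: the map's live size stays bounded while repeated insert/delete cycles keep colliding into overflow buckets.

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        const n = 1 << 20
        m := make(map[int]int)
        var ms runtime.MemStats
        for round := 0; round < 10; round++ {
            for i := 0; i < n; i++ {
                m[i] = i
            }
            for i := 0; i < n; i++ {
                delete(m, i)
            }
            // The map is empty again, but before this CL its accumulated
            // overflow buckets were never reclaimed, so heap usage crept up.
            runtime.ReadMemStats(&ms)
            fmt.Printf("round %d: len(m)=%d HeapAlloc=%d MB\n",
                round, len(m), ms.HeapAlloc>>20)
        }
    }
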
Cherry Zhang 38d35e714a cmd/compile, runtime/internal/atomic: intrinsify And8, Or8 on ARM64
Also add assembly implementation, in case intrinsics is disabled.

Change-Id: Iff0a8a8ce326651bd29f6c403f5ec08dd3629993
Reviewed-on: https://go-review.googlesource.com/28979
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-13 02:09:15 +00:00
Keith Randall d00a3cead8 runtime: make gdb test resilient to line numbering
Don't break on a line number; instead, break on the actual call.
This makes the test more robust to line numbering changes in the backend.

A CL (28950) changed the generated code line numbering slightly.  A MOVW
$0, R0 instruction at the start of the function changed to line
10 (because several constant zero instructions got CSEd, and one gets
picked arbitrarily).  That's too fragile for a test.

Change-Id: I5d6a8ef0603de7d727585004142780a527e70496
Reviewed-on: https://go-review.googlesource.com/29085
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-12 23:13:12 +00:00
Michael Munday f1515a01fd runtime, math/big: allow R0 on s390x to contain values other than 0
The new SSA backend for s390x can use R0 as a general purpose register.
This change modifies assembly code to either avoid using R0 entirely
or explicitly set R0 to 0.

R0 can still be safely used as 0 in address calculations.

Change-Id: I3efa723e9ef322a91a408bd8c31768d7858526c8
Reviewed-on: https://go-review.googlesource.com/28976
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-12 18:06:01 +00:00
Michael Munday d38d59ffb5 runtime: fix SIGILL in checkvectorfacility on s390x
STFLE does not necessarily write to all the double-words that are
requested. It is therefore necessary to clear the target memory
before calling STFLE in order to ensure that the facility list does
not contain false positives.

Fixes #17032.

Change-Id: I7bec9ade7103e747b72f08562fe57e6f091bd89f
Reviewed-on: https://go-review.googlesource.com/28850
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-09-09 00:34:22 +00:00
Cherry Zhang 14ff7cc94c runtime/cgo: fix callback on big-endian MIPS64
Use MOVW, instead of MOVV, to pass an int32 arg. There is also no need
to restore the argument registers.

Fix big-endian MIPS64 build.

Change-Id: Ib43c71075c988153e5e5c5c6e7297b3fee28652a
Reviewed-on: https://go-review.googlesource.com/28830
Reviewed-by: Minux Ma <minux@golang.org>
2016-09-09 00:30:28 +00:00
Josh Bleecher Snyder 07bcc16547 runtime: simplify getargp
Change-Id: I9ed62e8a6d8b9204c18748efd7845adabf3460b9
Reviewed-on: https://go-review.googlesource.com/28775
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-08 17:10:22 +00:00
Martin Möhrmann 252093f120 runtime: remove maxstring
Before this CL the runtime prevented the print function from printing
overlong strings whose length it judged to be corrupted. Corruption was
detected by comparing the string size against the limit stored in
maxstring.

However, maxstring was not updated everywhere Go strings were created,
e.g. for string constants at compile time. As a result, the check for
the maximum string length prevented the printing of some valid strings.

The protection maxstring provided did not justify the bookkeeping
and global synchronization needed to keep maxstring updated to the
correct limit everywhere.

Fixes #16999

Change-Id: I62cc2f4362f333f75b77f199ce1a71aac0ff7aeb
Reviewed-on: https://go-review.googlesource.com/28813
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-09-08 15:57:01 +00:00
Ilya Tocar 0cff219c12 strings: use AVX2 for Index if available
IndexHard4-4      1.50ms ± 2%  0.71ms ± 0%  -52.36%  (p=0.000 n=20+19)

This also fixes a bug that caused a string of length 16 to use
two 8-byte comparisons instead of one 16-byte comparison, and adds
a test for cases where partial_match fails.

Change-Id: I1ee8fc4e068bb36c95c45de78f067c822c0d9df0
Reviewed-on: https://go-review.googlesource.com/22551
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-07 10:43:13 +00:00