Commit Graph

3153 Commits

Author SHA1 Message Date
Josh Bleecher Snyder 6cb064c9c4 Revert "runtime: convert g.waitreason from string to uint8"
This reverts commit 4eea887fd4.

Reason for revert: broke s390x build

Change-Id: Id6c2b6a7319273c4d21f613d4cdd38b00d49f847
Reviewed-on: https://go-review.googlesource.com/100375
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2018-03-13 15:21:21 +00:00
Josh Bleecher Snyder 4eea887fd4 runtime: convert g.waitreason from string to uint8
Every time I poke at #14921, the g.waitreason string
pointer writes show up.

They're not particularly important performance-wise,
but it'd be nice to clear the noise away.

And it does open up a few extra bytes in the g struct
for some future use.
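
For context, a sketch of the direction such a conversion takes (the
names and values here are illustrative, not the runtime's final list):
the string field becomes a one-byte enum plus a lookup table that is
consulted only when the reason actually has to be printed.

    type waitReason uint8

    const (
        waitReasonZero            waitReason = iota // ""
        waitReasonGCAssistMarking                   // "GC assist marking"
        waitReasonIOWait                            // "IO wait"
    )

    var waitReasonStrings = [...]string{
        waitReasonZero:            "",
        waitReasonGCAssistMarking: "GC assist marking",
        waitReasonIOWait:          "IO wait",
    }

    // String is only needed when printing, so the hot path stores a
    // single byte instead of a two-word string header.
    func (w waitReason) String() string {
        if int(w) >= len(waitReasonStrings) {
            return "unknown wait reason"
        }
        return waitReasonStrings[w]
    }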

Change-Id: I7ffbd52fbc2a286931a2218038fda52ed6473cc9
Reviewed-on: https://go-review.googlesource.com/99078
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-03-12 21:56:50 +00:00
Tobias Klauser 025134b0d1 runtime: simplify range expressions in tests
Generated by running gofmt -s on the files in question.

Change-Id: If6578b150e1bfced8657196d2af01f5d36879f93
Reviewed-on: https://go-review.googlesource.com/100135
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-03-12 19:58:42 +00:00
Vladimir Kuzmin 7395083136 cmd/compile: avoid extra mapaccess in "m[k] op= r"
Currently, order desugars map assignment operations like

    m[k] op= r

into

    m[k] = m[k] op r

which in turn is transformed during walk into:

    tmp := *mapaccess(m, k)
    tmp = tmp op r
    *mapassign(m, k) = tmp

However, this is suboptimal, as we could instead produce just:

    *mapassign(m, k) op= r

One complication though is if "r == 0", then "m[k] /= r" and "m[k] %=
r" will panic, and they need to do so *before* calling mapassign,
otherwise we may insert a new zero-value element into the map.

It would be spec compliant to just emit the "r != 0" check before
calling mapassign (see #23735), but currently these checks aren't
generated until SSA construction. For now, it's simpler to continue
desugaring /= and %= into two map indexing operations.
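
As a concrete example, a counting loop like the sketch below now
compiles to a single mapassign per iteration instead of a mapaccess
followed by a mapassign (division and modulus assignments keep the
two-operation form, per the above):

    func countWords(words []string) map[string]int {
        counts := make(map[string]int)
        for _, w := range words {
            counts[w]++ // one mapassign; no separate mapaccess
        }
        return counts
    }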

Fixes #23661.

Change-Id: I46e3739d9adef10e92b46fdd78b88d5aabe68952
Reviewed-on: https://go-review.googlesource.com/91557
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-03-12 19:27:44 +00:00
Austin Clements 0def0f2e99 runtime: fix abort handling on arm64
The implementation of runtime.abort on arm64 currently branches to
address 0, which results in a signal from PC 0, rather than from
runtime.abort, so the runtime fails to recognize it as an abort.

Fix runtime.abort on arm64 to read from address 0, as the other
architectures do, and recognize this in the signal handler.

Should fix the linux/arm64 build.

Change-Id: I960ab630daaeadc9190287604d4d8337b1ea3853
Reviewed-on: https://go-review.googlesource.com/99895
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2018-03-09 22:17:04 +00:00
Ilya Tocar 91102bf723 runtime: use bytes.IndexByte in findnull
bytes.IndexByte is heavily optimized. Use it in findnull.
This is a second attempt, similar to CL 97523.
In this version we never call IndexByte on a region of memory
that crosses a page boundary. A bit slower than CL 97523,
but still fast:

name        old time/op  new time/op  delta
GoString-6   164ns ± 2%   118ns ± 0%  -28.00%  (p=0.000 n=10+6)

findnull is also used in gostringnocopy,
which is used in many hot spots in the runtime.
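
For illustration, a simplified sketch of the page-boundary-safe
approach using bytes.IndexByte (not the runtime's actual code, which
works on a raw *byte with no known length):

    // findnullSketch returns the index of the first NUL byte in mem,
    // where base is the address at which mem starts. It never lets
    // IndexByte look past the end of the current page, so wide reads
    // cannot touch an unmapped page.
    func findnullSketch(mem []byte, base uintptr) int {
        const pageSize = 4096
        offset := 0
        for offset < len(mem) {
            limit := int(pageSize - (base+uintptr(offset))%pageSize)
            if limit > len(mem)-offset {
                limit = len(mem) - offset
            }
            if i := bytes.IndexByte(mem[offset:offset+limit], 0); i >= 0 {
                return offset + i
            }
            offset += limit
        }
        return len(mem)
    }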

Fixes #23830

Change-Id: Id843dd4f65a34309d92bdd8df229e484d26b0cb2
Reviewed-on: https://go-review.googlesource.com/98015
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-09 19:37:39 +00:00
Ian Lance Taylor 4902778607 runtime: set libcall values for Solaris system calls
This lets SIGPROF signals get a useful traceback.
Without it we just see sysvicallN calling asmcgocall.

Updates #24142

Change-Id: I5dfe3add51f0c3a4cb1c98acb7738be6396214bc
Reviewed-on: https://go-review.googlesource.com/99617
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-03-09 19:27:20 +00:00
Josh Bleecher Snyder 031f71efdf runtime: add TestSizeof
Borrowed from cmd/compile, TestSizeof ensures
that the size of important types doesn't change unexpectedly.
It also helps reviewers see the impact of intended changes.
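
The pattern is essentially this self-contained sketch (using reflect,
testing, and unsafe; the real table lists key runtime types with their
expected 32- and 64-bit sizes):

    func TestSizeof(t *testing.T) {
        const _64bit = unsafe.Sizeof(uintptr(0)) == 8
        tests := []struct {
            val    interface{} // type under test, as a value
            _32bit uintptr     // expected size on 32-bit platforms
            _64bit uintptr     // expected size on 64-bit platforms
        }{
            {struct{ x, y int }{}, 8, 16},
        }
        for _, tt := range tests {
            want := tt._32bit
            if _64bit {
                want = tt._64bit
            }
            if got := reflect.TypeOf(tt.val).Size(); got != want {
                t.Errorf("Sizeof(%T) = %d, want %d", tt.val, got, want)
            }
        }
    }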

Change-Id: If57955f0c3e66054de3f40c6bba585b88694c7be
Reviewed-on: https://go-review.googlesource.com/99837
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-09 17:03:25 +00:00
Tobias Klauser 91f74069ef runtime: fix comment for hwcap on linux/arm
hwcap is set in archauxv; setup_auxv no longer exists.

Change-Id: I0fc9393e0c1c45192e0eff4715e9bdd69fab2653
Reviewed-on: https://go-review.googlesource.com/99779
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-03-09 16:02:38 +00:00
Austin Clements 5d22cebb12 runtime: explain and enforce that _panic values live on the stack
It's a bit mysterious that _defer.sp is a uintptr that gets
stack-adjusted explicitly while _panic.argp is an unsafe.Pointer that
doesn't, but turns out to be critically important when a deferred
function grows the stack before doing a recover.

Add a comment explaining that this works because _panic values live on
the stack. Enforce this by marking _panic go:notinheap.
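
Sketched, as it would appear inside package runtime (fields abridged):

    //go:notinheap
    type _panic struct {
        argp unsafe.Pointer // pointer to arguments of the deferred call run during panic
        link *_panic        // link to an earlier panic
        // ...
    }

go:notinheap makes it a compile-time error for these values to be
heap-allocated, so every non-nil *_panic necessarily points into a
stack and is adjusted by the normal stack-copying machinery.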

Change-Id: I9ca49e84ee1f86d881552c55dccd0662b530836b
Reviewed-on: https://go-review.googlesource.com/99735
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2018-03-08 23:35:46 +00:00
Austin Clements 60a9e5d613 runtime: ensure abort actually crashes the process
On all non-x86 arches, runtime.abort simply reads from nil.
Unfortunately, if this happens on a user stack, the signal handler
will dutifully turn this into a panicmem, which lets user defers run
and which user code can even recover from.

To fix this, add an explicit check to the signal handler that turns
faults in abort into hard crashes directly in the signal handler. This
has the added benefit of giving a register dump at the abort point.
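
A hedged sketch of the check (helper and field names assumed here, not
the exact runtime code):

    // In the signal handler: a memory fault whose PC is inside
    // runtime.abort is never delivered as a recoverable panicmem.
    if (sig == _SIGSEGV || sig == _SIGBUS) && isAbortPC(c.sigpc()) {
        dieFromSignal(sig) // crash hard, with a register dump
    }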

Change-Id: If26a7f13790745ee3867db7f53b72d8281176d70
Reviewed-on: https://go-review.googlesource.com/93661
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-08 22:55:55 +00:00
Austin Clements c950a90d72 runtime: call abort instead of raw INT $3 or bad MOV
Everything except for amd64, amd64p32, and 386 currently defines and
uses an abort function. This CL makes these match. The next CL will
recognize the abort function to make this more useful.

Change-Id: I7c155871ea48919a9220417df0630005b444f488
Reviewed-on: https://go-review.googlesource.com/93660
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-08 22:55:54 +00:00
Austin Clements 7f1b2738bb runtime: make throw safer to call
Currently, throw may grow the stack, which means whenever we call it
from a context where it's not safe to grow the stack, we first have to
switch to the system stack. This is pretty easy to get wrong.

Fix this by making throw switch to the system stack so it doesn't grow
the stack and is hence safe to call without a system stack switch at
the call site.
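
A simplified sketch of the new shape (the rest of the crash sequence
is omitted):

    //go:nosplit
    func throw(s string) {
        // The print runs on the system stack, so callers no longer
        // need their own systemstack switch just to throw.
        systemstack(func() {
            print("fatal error: ", s, "\n")
        })
        // ... mark the m as throwing and enter the unrecoverable-panic
        // path ...
    }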

The only thing this complicates is badsystemstack itself, which would
now go into an infinite loop before printing anything (previously it
would also go into an infinite loop, but would at least print the
error first). Fix this by making badsystemstack do a direct write and
then crash hard.

Change-Id: Ic5b4a610df265e47962dcfa341cabac03c31c049
Reviewed-on: https://go-review.googlesource.com/93659
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-08 22:55:52 +00:00
Austin Clements 9d59234cbe runtime: move unrecoverable panic handling to the system stack
Currently parts of unrecoverable panic handling (notably, printing
panic messages) can happen on the user stack. This may grow the stack,
which is generally fine, but if we're handling a runtime panic, it's
better to do as little as possible in case the runtime is in an
inconsistent state.

Hence, this commit rearranges the handling of unrecoverable panics so
that it's done entirely on the system stack.

This is mostly a matter of shuffling code a bit so everything can move
into a systemstack block. The one slight subtlety is in the "panic
during panic" case, where we now depend on startpanic_m's caller to
print the stack rather than startpanic_m itself. To make this work,
startpanic_m now returns a boolean indicating that the caller should
avoid trying to print any panic messages and get right to the stack
trace. Since the caller is already in a position to do this, this
actually simplifies things a little.

Change-Id: Id72febe8c0a9fb31d9369b600a1816d65a49bfed
Reviewed-on: https://go-review.googlesource.com/93658
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-08 22:55:51 +00:00
Ian Lance Taylor 3d69ef37b8 runtime: use systemstack around throw in sysSigaction
Try to fix the build on ppc64-linux and ppc64le-linux, avoiding:

--- FAIL: TestInlinedRoutineRecords (2.12s)
	dwarf_test.go:97: build: # command-line-arguments
		runtime.systemstack: nosplit stack overflow
			752	assumed on entry to runtime.sigtrampgo (nosplit)
			480	after runtime.sigtrampgo (nosplit) uses 272
			400	after runtime.sigfwdgo (nosplit) uses 80
			264	after runtime.setsig (nosplit) uses 136
			208	after runtime.sigaction (nosplit) uses 56
			136	after runtime.sysSigaction (nosplit) uses 72
			88	after runtime.throw (nosplit) uses 48
			16	after runtime.dopanic (nosplit) uses 72
			-16	after runtime.systemstack (nosplit) uses 32

	dwarf_test.go:98: build error: exit status 2
--- FAIL: TestAbstractOriginSanity (10.22s)
	dwarf_test.go:97: build: # command-line-arguments
		runtime.systemstack: nosplit stack overflow
			752	assumed on entry to runtime.sigtrampgo (nosplit)
			480	after runtime.sigtrampgo (nosplit) uses 272
			400	after runtime.sigfwdgo (nosplit) uses 80
			264	after runtime.setsig (nosplit) uses 136
			208	after runtime.sigaction (nosplit) uses 56
			136	after runtime.sysSigaction (nosplit) uses 72
			88	after runtime.throw (nosplit) uses 48
			16	after runtime.dopanic (nosplit) uses 72
			-16	after runtime.systemstack (nosplit) uses 32

	dwarf_test.go:98: build error: exit status 2
FAIL
FAIL	cmd/link/internal/ld	13.404s

Change-Id: I4840604adb0e9f68a8d8e24f2f2a1a17d1634a58
Reviewed-on: https://go-review.googlesource.com/99415
Reviewed-by: Austin Clements <austin@google.com>
2018-03-08 16:35:53 +00:00
Ian Lance Taylor 419c06455a runtime: get traceback from VDSO code
Currently if a profiling signal arrives while executing within a VDSO
the profiler will report _ExternalCode, which is needlessly confusing
for a pure Go program. Change the VDSO calling code to record the
caller's PC/SP, so that we can do a traceback from that point. If that
fails for some reason, report _VDSO rather than _ExternalCode, which
should at least point in the right direction.
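
A Go-level sketch of the bookkeeping (the real saves happen in the
assembly wrappers around each vDSO call; vdsoPC/vdsoSP are the new m
fields, the other names here are illustrative):

    mp := getg().m
    mp.vdsoPC = callerPC        // PC in the calling Go code, saved on entry
    mp.vdsoSP = callerSP        // SP in the calling Go code, saved on entry
    ret := vdsoClockGettime(ts) // the actual vDSO call
    mp.vdsoSP = 0               // zero tells the profiler we are back in Go code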

This adds some instructions to the code that calls the VDSO, but the
slowdown is reasonably negligible:

name                                  old time/op  new time/op  delta
ClockVDSOAndFallbackPaths/vDSO-8      40.5ns ± 2%  41.3ns ± 1%  +1.85%  (p=0.002 n=10+10)
ClockVDSOAndFallbackPaths/Fallback-8  41.9ns ± 1%  43.5ns ± 1%  +3.84%  (p=0.000 n=9+9)
TimeNow-8                             41.5ns ± 3%  41.5ns ± 2%    ~     (p=0.723 n=10+10)

Fixes #24142

Change-Id: Iacd935db3c4c782150b3809aaa675a71799b1c9c
Reviewed-on: https://go-review.googlesource.com/97315
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-03-07 23:35:25 +00:00
Ian Lance Taylor c2f28de732 runtime: change from rt_sigaction to sigaction
This normalizes the Linux code to act like other targets. The size
argument to the rt_sigaction system call is pushed to a single
function, sysSigaction.
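
Roughly (error handling trimmed; treat this as a sketch rather than
the exact code):

    //go:nosplit
    func sysSigaction(sig uint32, new, old *sigactiont) {
        // Only this function knows that Linux's rt_sigaction wants the
        // size of the signal mask as an extra argument.
        if rt_sigaction(uintptr(sig), new, old, unsafe.Sizeof(sigactiont{}.sa_mask)) != 0 {
            throw("sigaction failed")
        }
    }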

This is intended as a simplification step for CL 93875 for #14327.

Change-Id: I594788e235f0da20e16e8a028e27ac8c883907c4
Reviewed-on: https://go-review.googlesource.com/99077
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-03-07 23:30:02 +00:00
Elias Naur 7a2a96d6ad runtime/cgo: make sure nil is undefined before defining it
While working on standalone builds of gomobile bindings, I ran into
errors of the form:

gcc_darwin_arm.c:30:31: error: ambiguous expansion of macro 'nil' [-Werror,-Wambiguous-macro]
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS11.2.sdk/usr/include/MacTypes.h:94:15: note: expanding this definition of 'nil'

Fix it by undefining nil before defining it in libcgo.h.

Change-Id: I8e9660a68c6c351e592684d03d529f0d182c0493
Reviewed-on: https://go-review.googlesource.com/99215
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-03-07 21:08:19 +00:00
Yuval Pavel Zholkover 083f3957b8 runtime: add missing build constraints to os_linux_{be64,noauxv,novdso,ppc64x}.go files
They do not match the file name patterns of
  *_GOOS
  *_GOARCH
  *_GOOS_GOARCH
therefore the implicit linux constraint was not being added.
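
For example, os_linux_ppc64x.go now carries explicit constraints along
these lines (shown as an illustration of the fix):

    // +build linux
    // +build ppc64 ppc64le

    package runtime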

Change-Id: Ie506c51cee6818db445516f96fffaa351df62cf5
Reviewed-on: https://go-review.googlesource.com/99116
Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-03-07 14:26:19 +00:00
Matthew Dempsky 2c0c68d621 cmd/compile: fix miscompilation of "defer delete(m, k)"
Previously, for slow map key types (i.e., any type other than a 32-bit
or 64-bit plain memory type), we would rewrite

    defer delete(m, k)

into

    ktmp := k
    defer delete(m, &ktmp)

However, if the defer statement was inside a loop, we would end up
reusing the same ktmp value for all of the deferred deletes.
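
Code of this shape was affected (illustrative; the array key makes this
a "slow" key type that is passed to delete by address):

    func drain(m map[[2]string]int) {
        for k := range m {
            // Previously all of these defers shared a single ktmp, so
            // only the last key seen was actually deleted.
            defer delete(m, k)
        }
    }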

We already rewrite

    defer print(x, y, z)

into

    defer func(a1, a2, a3) {
        print(a1, a2, a3)
    }(x, y, z)

This CL generalizes this rewrite to also apply for slow map deletes.

This could be extended to apply even more generally to other builtins,
but as discussed on #24259, there are cases where we must *not* do
this (e.g., "defer recover()"). However, if we elect to do this more
generally, this CL should still make that easier.

Lastly, while here, fix a few issues in wrapCall (née walkprintfunc):

1) lookupN appends the generation number to the symbol anyway, so "%d"
was being literally included in the generated function names.

2) walkstmt will be called when the function is compiled later anyway,
so no need to do it now.

Fixes #24259.

Change-Id: I70286867c64c69c18e9552f69e3f4154a0fc8b04
Reviewed-on: https://go-review.googlesource.com/99017
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-06 23:33:28 +00:00
Josh Bleecher Snyder f7739c07c8 runtime: skip pointless writes in freedefer
Change-Id: I501a0e5c87ec88616c7dcdf1b723758b6df6c088
Reviewed-on: https://go-review.googlesource.com/98758
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-06 18:58:57 +00:00
Tobias Klauser 9745397e1d runtime: fix stack switch check in walltime/nanotime on linux/arm
CL 98095 got the check wrong. We should be testing
'getg() == getg().m.curg', not 'getg().m == getg().m.curg'.

Change-Id: I32f6238b00409b67afa8efe732513d542aec5bc7
Reviewed-on: https://go-review.googlesource.com/98855
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-03-06 14:24:19 +00:00
Meng Zhuo 8916773a3d runtime, cmd/compile: use ldp for DUFFCOPY on ARM64
name         old time/op  new time/op  delta
CopyFat8     2.15ns ± 1%  2.19ns ± 6%     ~     (p=0.171 n=8+9)
CopyFat12    2.15ns ± 0%  2.17ns ± 2%     ~     (p=0.137 n=8+10)
CopyFat16    2.17ns ± 3%  2.15ns ± 0%     ~     (p=0.211 n=10+10)
CopyFat24    2.16ns ± 1%  2.15ns ± 0%     ~     (p=0.087 n=10+10)
CopyFat32    11.5ns ± 0%  12.8ns ± 2%  +10.87%  (p=0.000 n=8+10)
CopyFat64    20.2ns ± 2%  12.9ns ± 0%  -36.11%  (p=0.000 n=10+10)
CopyFat128   37.2ns ± 0%  21.5ns ± 0%  -42.20%  (p=0.000 n=10+10)
CopyFat256   71.6ns ± 0%  38.7ns ± 0%  -45.95%  (p=0.000 n=10+10)
CopyFat512    140ns ± 0%    73ns ± 0%  -47.86%  (p=0.000 n=10+9)
CopyFat520    142ns ± 0%    74ns ± 0%  -47.54%  (p=0.000 n=10+10)
CopyFat1024   277ns ± 0%   141ns ± 0%  -49.10%  (p=0.000 n=10+10)

Change-Id: If54bc571add5db674d5e081579c87e80153d0a5a
Reviewed-on: https://go-review.googlesource.com/97395
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2018-03-06 04:14:59 +00:00
Hana Kim d3946f75d3 internal/trace: remove backlinks from span/task end to start
This is an updated version of golang.org/cl/96395, with the fix to
TestUserSpan.

This reverts commit 7b6f6267e90a8e4eab37a3f2164ba882e6222adb.

Change-Id: I31eec8ba0997f9178dffef8dac608e731ab70872
Reviewed-on: https://go-review.googlesource.com/98236
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
2018-03-05 20:10:22 +00:00
Ian Lance Taylor 7178267b59 runtime: rename vdso symbols to use camel case
This was originally C code using names with underscores, which were
retained when the code was rewritten into Go. Change the code to use
Go-like camel case names.

The names that come from the ELF ABI are left unchanged.

Change-Id: I181bc5dd81284c07bc67b7df4635f4734b41d646
Reviewed-on: https://go-review.googlesource.com/98520
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-05 19:12:32 +00:00
Tobias Klauser 5f80e70912 runtime: remove unused SYS_* definitions on Linux
Also fix the indentation of the SYS_* definitions in sys_linux_mipsx.s
and order them numerically.

Change-Id: I0c454301c329a163e7db09dcb25d4e825149858c
Reviewed-on: https://go-review.googlesource.com/98448
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-05 18:32:08 +00:00
Keith Randall ee58eccc56 internal/bytealg: move short string Index implementations into bytealg
Also move the arm64 CountByte implementation while we're here.

Fixes #19792

Change-Id: I1e0fdf1e03e3135af84150a2703b58dad1b0d57e
Reviewed-on: https://go-review.googlesource.com/98518
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-04 19:49:44 +00:00
Keith Randall f6332bb84a internal/bytealg: move compare functions to bytealg
Move bytes.Compare and runtime·cmpstring to bytealg.

Update #19792

Change-Id: I139e6d7c59686bef7a3017e3dec99eba5fd10447
Reviewed-on: https://go-review.googlesource.com/98515
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-04 17:49:39 +00:00
Keith Randall 45964e4f9c internal/bytealg: move Count to bytealg
Move bytes.Count and strings.Count to bytealg.

Update #19792

Change-Id: I3e4e14b504a0b71758885bb131e5656e342cf8cb
Reviewed-on: https://go-review.googlesource.com/98495
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-04 17:49:25 +00:00
Tobias Klauser 51b027116c runtime: use vDSO for clock_gettime on linux/arm
Use the __vdso_clock_gettime fast path via the vDSO on linux/arm to
speed up nanotime and walltime. This results in the following
performance improvement for time.Now on a RaspberryPi 3 (running
32bit Raspbian, i.e. GOOS=linux/GOARCH=arm):

name     old time/op  new time/op  delta
TimeNow  0.99µs ± 0%  0.39µs ± 1%  -60.74%  (p=0.000 n=12+20)

Change-Id: I3598278a6c88d7f6a6ce66c56b9d25f9dd2f4c9a
Reviewed-on: https://go-review.googlesource.com/98095
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-03-03 12:12:58 +00:00
Tobias Klauser c69f60d071 runtime: remove unused __vdso_time_sym
It's unused since https://golang.org/cl/99320043

Change-Id: I74d69ff894aa2fb556f1c2083406c118c559d91b
Reviewed-on: https://go-review.googlesource.com/98195
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-03-03 12:11:38 +00:00
Keith Randall 1dfa380e3d internal/bytealg: move equal functions to bytealg
Move bytes.Equal, runtime.memequal, and runtime.memequal_varlen
to the bytealg package.

Update #19792

Change-Id: Ic4175e952936016ea0bda6c7c3dbb33afdc8e4ac
Reviewed-on: https://go-review.googlesource.com/98355
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-03 04:18:27 +00:00
Keith Randall 403ab0f221 internal/bytealg: move IndexByte assembly to the new bytealg package
Move the IndexByte function from the runtime to a new bytealg package.
The new package will eventually hold all the optimized assembly for
groveling through byte slices and strings. It seems a better home for
this code than randomly keeping it in runtime.
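
After the move, the Go-level entry points become thin wrappers around
the assembly; roughly:

    // In package bytes (internal/bytealg is importable only from
    // within the standard library):
    func IndexByte(b []byte, c byte) int {
        return bytealg.IndexByte(b, c)
    }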

Once this is in, the next step is to move the other functions
(Compare, Equal, ...).

Update #19792

This change seems complicated enough that we might just declare
"not worth it" and abandon.  Opinions welcome.

The core assembly is all unchanged, except minor modifications where
the code reads cpu feature bits.

The wrapper functions have been cleaned up as they are now actually
checked by vet.

Change-Id: I9fa75bee5d85db3a65b3fd3b7997e60367523796
Reviewed-on: https://go-review.googlesource.com/98016
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-02 22:46:15 +00:00
Zhou Peng b77aad0891 runtime: fix typo, func comments should start with function name
Change-Id: I289af4884583537639800e37928c22814d38cba9
Reviewed-on: https://go-review.googlesource.com/98115
Reviewed-by: Alberto Donizetti <alb.donizetti@gmail.com>
2018-03-02 12:03:30 +00:00
Brad Fitzpatrick 1fadbc1a76 Revert "runtime: use bytes.IndexByte in findnull"
This reverts commit 7365fac2db.

Reason for revert: breaks the build on some architectures, reading unmapped pages?

Change-Id: I3a8c02dc0b649269faacea79ecd8213defa97c54
Reviewed-on: https://go-review.googlesource.com/97995
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-01 22:22:51 +00:00
Balaram Makam 213a75171d runtime: improve arm64 memmove implementation
Improve runtime memmove_arm64.s by specializing for small copies and
processing 32 bytes per iteration for copies of 32 bytes or more.

Benchmark results of runtime/Memmove on Amberwing:
name                      old time/op    new time/op     delta
Memmove/0                   7.61ns ± 0%     7.20ns ± 0%     ~     (p=0.053 n=5+7)
Memmove/1                   9.28ns ± 0%     8.80ns ± 0%   -5.17%  (p=0.000 n=4+8)
Memmove/2                   9.65ns ± 0%     9.20ns ± 0%   -4.68%  (p=0.000 n=5+8)
Memmove/3                   10.0ns ± 0%      9.2ns ± 0%   -7.83%  (p=0.000 n=5+8)
Memmove/4                   10.6ns ± 0%      9.2ns ± 0%  -13.21%  (p=0.000 n=5+8)
Memmove/5                   11.0ns ± 0%      9.2ns ± 0%  -16.36%  (p=0.000 n=5+8)
Memmove/6                   12.4ns ± 0%      9.2ns ± 0%  -25.81%  (p=0.000 n=5+8)
Memmove/7                   13.1ns ± 0%      9.2ns ± 0%  -29.56%  (p=0.000 n=5+8)
Memmove/8                   9.10ns ± 1%     9.20ns ± 0%   +1.08%  (p=0.002 n=5+8)
Memmove/9                   9.67ns ± 0%     9.20ns ± 0%   -4.88%  (p=0.000 n=5+8)
Memmove/10                  10.4ns ± 0%      9.2ns ± 0%  -11.54%  (p=0.000 n=5+8)
Memmove/11                  10.9ns ± 0%      9.2ns ± 0%  -15.60%  (p=0.000 n=5+8)
Memmove/12                  11.5ns ± 0%      9.2ns ± 0%  -20.00%  (p=0.000 n=5+8)
Memmove/13                  12.4ns ± 0%      9.2ns ± 0%  -25.81%  (p=0.000 n=5+8)
Memmove/14                  13.1ns ± 0%      9.2ns ± 0%  -29.77%  (p=0.000 n=5+8)
Memmove/15                  13.8ns ± 0%      9.2ns ± 0%  -33.33%  (p=0.000 n=5+8)
Memmove/16                  9.70ns ± 0%     9.20ns ± 0%   -5.19%  (p=0.000 n=5+8)
Memmove/32                  10.6ns ± 0%      9.2ns ± 0%  -13.21%  (p=0.000 n=4+8)
Memmove/64                  13.4ns ± 0%     10.2ns ± 0%  -23.88%  (p=0.000 n=4+8)
Memmove/128                 18.1ns ± 1%     13.2ns ± 0%  -26.99%  (p=0.000 n=5+8)
Memmove/256                 25.2ns ± 0%     16.4ns ± 0%  -34.92%  (p=0.000 n=5+8)
Memmove/512                 36.4ns ± 0%     22.8ns ± 0%  -37.36%  (p=0.000 n=5+8)
Memmove/1024                70.1ns ± 0%     36.8ns ±11%  -47.49%  (p=0.002 n=5+8)
Memmove/2048                 121ns ± 0%       61ns ± 0%     ~     (p=0.053 n=5+7)
Memmove/4096                 224ns ± 0%      120ns ± 0%  -46.43%  (p=0.000 n=5+8)
MemmoveUnalignedDst/0       8.40ns ± 0%     8.00ns ± 0%   -4.76%  (p=0.000 n=5+8)
MemmoveUnalignedDst/1       9.87ns ± 1%    10.00ns ± 0%     ~     (p=0.070 n=5+8)
MemmoveUnalignedDst/2       10.6ns ± 0%     10.4ns ± 0%   -1.89%  (p=0.000 n=5+8)
MemmoveUnalignedDst/3       10.8ns ± 0%     10.4ns ± 0%   -3.70%  (p=0.000 n=5+8)
MemmoveUnalignedDst/4       10.9ns ± 0%     10.3ns ± 0%     ~     (p=0.053 n=5+7)
MemmoveUnalignedDst/5       11.5ns ± 0%     10.3ns ± 1%  -10.22%  (p=0.000 n=4+8)
MemmoveUnalignedDst/6       13.2ns ± 0%     10.4ns ± 1%  -21.50%  (p=0.000 n=5+8)
MemmoveUnalignedDst/7       13.7ns ± 0%     10.3ns ± 1%  -24.64%  (p=0.000 n=4+8)
MemmoveUnalignedDst/8       10.1ns ± 0%     10.4ns ± 0%   +2.97%  (p=0.002 n=5+8)
MemmoveUnalignedDst/9       10.7ns ± 0%     10.4ns ± 0%   -2.80%  (p=0.000 n=5+8)
MemmoveUnalignedDst/10      11.2ns ± 1%     10.4ns ± 0%   -6.81%  (p=0.000 n=5+8)
MemmoveUnalignedDst/11      11.6ns ± 0%     10.4ns ± 0%  -10.34%  (p=0.000 n=5+8)
MemmoveUnalignedDst/12      12.5ns ± 2%     10.4ns ± 0%  -16.53%  (p=0.000 n=5+8)
MemmoveUnalignedDst/13      13.7ns ± 0%     10.4ns ± 0%  -24.09%  (p=0.000 n=5+8)
MemmoveUnalignedDst/14      14.0ns ± 0%     10.4ns ± 0%  -25.71%  (p=0.000 n=5+8)
MemmoveUnalignedDst/15      14.6ns ± 0%     10.4ns ± 0%  -28.77%  (p=0.000 n=5+8)
MemmoveUnalignedDst/16      10.5ns ± 0%     10.4ns ± 0%   -0.95%  (p=0.000 n=5+8)
MemmoveUnalignedDst/32      12.4ns ± 0%     11.6ns ± 0%   -6.05%  (p=0.000 n=5+8)
MemmoveUnalignedDst/64      15.2ns ± 0%     12.3ns ± 0%  -19.08%  (p=0.000 n=5+8)
MemmoveUnalignedDst/128     18.7ns ± 0%     15.2ns ± 0%  -18.72%  (p=0.000 n=5+8)
MemmoveUnalignedDst/256     25.1ns ± 0%     18.6ns ± 0%  -25.90%  (p=0.000 n=5+8)
MemmoveUnalignedDst/512     37.8ns ± 0%     24.4ns ± 0%  -35.45%  (p=0.000 n=5+8)
MemmoveUnalignedDst/1024    74.6ns ± 0%     40.4ns ± 0%     ~     (p=0.053 n=5+7)
MemmoveUnalignedDst/2048     133ns ± 0%       75ns ± 0%  -43.91%  (p=0.000 n=5+8)
MemmoveUnalignedDst/4096     247ns ± 0%      141ns ± 0%  -42.91%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/0       8.40ns ± 0%     8.00ns ± 0%   -4.76%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/1       9.81ns ± 0%    10.00ns ± 0%   +1.98%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/2       10.5ns ± 0%     10.0ns ± 0%   -4.76%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/3       10.7ns ± 1%     10.0ns ± 0%   -6.89%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/4       11.3ns ± 0%     10.0ns ± 0%  -11.50%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/5       11.6ns ± 0%     10.0ns ± 0%  -13.79%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/6       13.6ns ± 0%     10.0ns ± 0%  -26.47%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/7       14.4ns ± 0%     10.0ns ± 0%  -30.75%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/8       9.87ns ± 1%    10.00ns ± 0%     ~     (p=0.070 n=5+8)
MemmoveUnalignedSrc/9       10.4ns ± 0%     10.0ns ± 0%   -3.85%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/10      11.2ns ± 0%     10.0ns ± 0%  -10.71%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/11      11.8ns ± 0%     10.0ns ± 0%  -15.25%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/12      12.1ns ± 0%     10.0ns ± 0%  -17.36%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/13      13.6ns ± 0%     10.0ns ± 0%  -26.47%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/14      14.7ns ± 0%     10.0ns ± 0%  -31.79%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/15      14.4ns ± 0%     10.0ns ± 0%  -30.56%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/16      11.0ns ± 0%     10.0ns ± 0%   -9.09%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/32      11.5ns ± 0%     10.0ns ± 0%  -13.04%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/64      14.9ns ± 0%     11.2ns ± 0%  -24.83%  (p=0.000 n=4+8)
MemmoveUnalignedSrc/128     19.5ns ± 0%     15.2ns ± 0%  -22.05%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/256     27.3ns ± 2%     19.2ns ± 0%  -29.62%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/512     40.4ns ± 0%     27.2ns ± 0%  -32.67%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/1024    75.4ns ± 0%     44.4ns ± 0%  -41.15%  (p=0.000 n=5+8)
MemmoveUnalignedSrc/2048     131ns ± 0%       77ns ± 3%  -41.56%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/4096     248ns ± 0%      145ns ± 0%  -41.53%  (p=0.000 n=5+8)

name                      old speed      new speed       delta
Memmove/1                  108MB/s ± 0%    114MB/s ± 0%   +5.37%  (p=0.004 n=4+8)
Memmove/2                  207MB/s ± 0%    217MB/s ± 0%   +4.85%  (p=0.002 n=5+8)
Memmove/3                  301MB/s ± 0%    326MB/s ± 0%   +8.45%  (p=0.002 n=5+8)
Memmove/4                  377MB/s ± 0%    435MB/s ± 0%  +15.31%  (p=0.004 n=4+8)
Memmove/5                  455MB/s ± 0%    543MB/s ± 0%  +19.46%  (p=0.002 n=5+8)
Memmove/6                  483MB/s ± 0%    652MB/s ± 0%  +34.88%  (p=0.003 n=5+7)
Memmove/7                  537MB/s ± 0%    761MB/s ± 0%  +41.71%  (p=0.002 n=5+8)
Memmove/8                  879MB/s ± 1%    869MB/s ± 0%   -1.15%  (p=0.000 n=5+7)
Memmove/9                  931MB/s ± 0%    978MB/s ± 0%   +5.05%  (p=0.002 n=5+8)
Memmove/10                 960MB/s ± 0%   1086MB/s ± 0%  +13.13%  (p=0.002 n=5+8)
Memmove/11                1.00GB/s ± 0%   1.20GB/s ± 0%  +18.92%  (p=0.003 n=5+7)
Memmove/12                1.04GB/s ± 0%   1.30GB/s ± 0%  +25.40%  (p=0.002 n=5+8)
Memmove/13                1.05GB/s ± 0%   1.41GB/s ± 0%  +34.87%  (p=0.002 n=5+8)
Memmove/14                1.07GB/s ± 0%   1.52GB/s ± 0%  +42.14%  (p=0.002 n=5+8)
Memmove/15                1.09GB/s ± 0%   1.63GB/s ± 0%  +49.91%  (p=0.002 n=5+8)
Memmove/16                1.65GB/s ± 0%   1.74GB/s ± 0%   +5.40%  (p=0.003 n=5+7)
Memmove/32                3.01GB/s ± 0%   3.48GB/s ± 0%  +15.58%  (p=0.003 n=5+7)
Memmove/64                4.76GB/s ± 0%   6.27GB/s ± 0%  +31.75%  (p=0.003 n=5+7)
Memmove/128               7.08GB/s ± 1%   9.69GB/s ± 0%  +36.96%  (p=0.002 n=5+8)
Memmove/256               10.2GB/s ± 0%   15.6GB/s ± 0%  +53.58%  (p=0.002 n=5+8)
Memmove/512               14.1GB/s ± 0%   22.4GB/s ± 0%  +59.57%  (p=0.003 n=5+7)
Memmove/1024              14.6GB/s ± 0%   27.9GB/s ±10%  +91.00%  (p=0.002 n=5+8)
Memmove/2048              16.9GB/s ± 0%   33.4GB/s ± 0%  +98.32%  (p=0.003 n=5+7)
Memmove/4096              18.3GB/s ± 0%   33.9GB/s ± 0%  +85.80%  (p=0.002 n=5+8)
MemmoveUnalignedDst/1      101MB/s ± 1%    100MB/s ± 0%     ~     (p=0.586 n=5+8)
MemmoveUnalignedDst/2      189MB/s ± 0%    192MB/s ± 0%   +1.82%  (p=0.002 n=5+8)
MemmoveUnalignedDst/3      278MB/s ± 0%    288MB/s ± 0%   +3.88%  (p=0.003 n=5+7)
MemmoveUnalignedDst/4      368MB/s ± 0%    387MB/s ± 0%   +5.41%  (p=0.003 n=5+7)
MemmoveUnalignedDst/5      434MB/s ± 0%    484MB/s ± 0%  +11.52%  (p=0.002 n=5+8)
MemmoveUnalignedDst/6      454MB/s ± 0%    580MB/s ± 0%  +27.62%  (p=0.002 n=5+8)
MemmoveUnalignedDst/7      509MB/s ± 0%    677MB/s ± 0%  +33.01%  (p=0.002 n=5+8)
MemmoveUnalignedDst/8      792MB/s ± 0%    770MB/s ± 0%   -2.77%  (p=0.002 n=5+8)
MemmoveUnalignedDst/9      841MB/s ± 0%    866MB/s ± 0%   +2.92%  (p=0.002 n=5+8)
MemmoveUnalignedDst/10     896MB/s ± 0%    962MB/s ± 0%   +7.35%  (p=0.003 n=5+7)
MemmoveUnalignedDst/11     947MB/s ± 0%   1058MB/s ± 0%  +11.80%  (p=0.002 n=5+8)
MemmoveUnalignedDst/12     962MB/s ± 2%   1154MB/s ± 0%  +19.97%  (p=0.002 n=5+8)
MemmoveUnalignedDst/13     947MB/s ± 0%   1251MB/s ± 0%  +32.08%  (p=0.002 n=5+8)
MemmoveUnalignedDst/14    1.00GB/s ± 0%   1.35GB/s ± 0%  +34.55%  (p=0.002 n=5+8)
MemmoveUnalignedDst/15    1.03GB/s ± 0%   1.44GB/s ± 0%  +40.50%  (p=0.002 n=5+8)
MemmoveUnalignedDst/16    1.53GB/s ± 0%   1.54GB/s ± 0%   +0.77%  (p=0.002 n=5+8)
MemmoveUnalignedDst/32    2.58GB/s ± 0%   2.75GB/s ± 0%   +6.52%  (p=0.003 n=5+7)
MemmoveUnalignedDst/64    4.21GB/s ± 0%   5.19GB/s ± 0%  +23.40%  (p=0.004 n=5+6)
MemmoveUnalignedDst/128   6.86GB/s ± 0%   8.42GB/s ± 0%  +22.78%  (p=0.003 n=5+7)
MemmoveUnalignedDst/256   10.2GB/s ± 0%   13.8GB/s ± 0%  +35.15%  (p=0.002 n=5+8)
MemmoveUnalignedDst/512   13.5GB/s ± 0%   21.0GB/s ± 0%  +54.90%  (p=0.002 n=5+8)
MemmoveUnalignedDst/1024  13.7GB/s ± 0%   25.3GB/s ± 0%  +84.61%  (p=0.003 n=5+7)
MemmoveUnalignedDst/2048  15.3GB/s ± 0%   27.5GB/s ± 0%  +79.52%  (p=0.002 n=5+8)
MemmoveUnalignedDst/4096  16.5GB/s ± 0%   28.9GB/s ± 0%  +74.74%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/1      102MB/s ± 0%    100MB/s ± 0%   -2.02%  (p=0.000 n=5+7)
MemmoveUnalignedSrc/2      191MB/s ± 0%    200MB/s ± 0%   +4.78%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/3      279MB/s ± 0%    300MB/s ± 0%   +7.45%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/4      354MB/s ± 0%    400MB/s ± 0%  +13.10%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/5      431MB/s ± 0%    500MB/s ± 0%  +16.02%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/6      441MB/s ± 0%    600MB/s ± 0%  +36.03%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/7      485MB/s ± 0%    700MB/s ± 0%  +44.29%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/8      811MB/s ± 1%    800MB/s ± 0%   -1.36%  (p=0.016 n=5+8)
MemmoveUnalignedSrc/9      864MB/s ± 0%    900MB/s ± 0%   +4.07%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/10     893MB/s ± 0%    999MB/s ± 0%  +11.97%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/11     932MB/s ± 0%   1099MB/s ± 0%  +18.01%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/12     988MB/s ± 0%   1199MB/s ± 0%  +21.35%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/13     955MB/s ± 0%   1299MB/s ± 0%  +36.02%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/14     955MB/s ± 0%   1399MB/s ± 0%  +46.52%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/15    1.04GB/s ± 0%   1.50GB/s ± 0%  +44.18%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/16    1.45GB/s ± 0%   1.60GB/s ± 0%  +10.14%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/32    2.78GB/s ± 0%   3.20GB/s ± 0%  +15.16%  (p=0.003 n=5+7)
MemmoveUnalignedSrc/64    4.30GB/s ± 0%   5.72GB/s ± 0%  +32.90%  (p=0.003 n=5+7)
MemmoveUnalignedSrc/128   6.57GB/s ± 0%   8.42GB/s ± 0%  +28.06%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/256   9.39GB/s ± 1%  13.33GB/s ± 0%  +41.96%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/512   12.7GB/s ± 0%   18.8GB/s ± 0%  +48.53%  (p=0.003 n=5+7)
MemmoveUnalignedSrc/1024  13.6GB/s ± 0%   23.0GB/s ± 0%  +69.82%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/2048  15.6GB/s ± 0%   26.8GB/s ± 3%  +71.37%  (p=0.002 n=5+8)
MemmoveUnalignedSrc/4096  16.5GB/s ± 0%   28.2GB/s ± 0%  +71.40%  (p=0.002 n=5+8)

Fixes #22925

Change-Id: I38c1a9ad5c6e3f4f95fc521c4b7e3140b58b4737
Reviewed-on: https://go-review.googlesource.com/83799
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2018-03-01 20:34:11 +00:00
Josh Bleecher Snyder 7365fac2db runtime: use bytes.IndexByte in findnull
bytes.IndexByte is heavily optimized.
Use it in findnull.

name        old time/op  new time/op  delta
GoString-8  65.5ns ± 1%  40.2ns ± 1%  -38.62%  (p=0.000 n=19+19)

findnull is also used in gostringnocopy,
which is used in many hot spots in the runtime.

Fixes #23830

Change-Id: I2e6cb279c7d8078f8844065de684cc3567fe89d7
Reviewed-on: https://go-review.googlesource.com/97523
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-03-01 20:34:07 +00:00
Hana Kim e75f805e6f runtime/trace: skip TestUserTaskSpan upon timestamp error
Change-Id: I030baaa0a0abf1e43449faaf676d389a28a868a3
Reviewed-on: https://go-review.googlesource.com/97857
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Peter Weinberger <pjw@google.com>
2018-03-01 18:38:49 +00:00
Josh Bleecher Snyder 9372e3f5ef runtime: don't allocate to build strings of length 1
Use staticbytes instead.
Instrumenting make.bash shows that approximately 0.5%
of all slicebytetostring calls have a buffer of length 1.
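
A sketch of the fast path this adds to slicebytetostring (simplified;
staticbytes is a read-only [256]byte runtime array holding every
possible single-byte value):

    if len(b) == 1 {
        stringStructOf(&str).str = unsafe.Pointer(&staticbytes[b[0]])
        stringStructOf(&str).len = 1
        return
    }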

name                     old time/op  new time/op  delta
SliceByteToString/1-8    14.1ns ± 1%   4.1ns ± 1%  -71.13%  (p=0.000 n=17+20)
SliceByteToString/2-8    15.5ns ± 2%  15.5ns ± 1%     ~     (p=0.061 n=20+18)
SliceByteToString/4-8    14.9ns ± 1%  15.0ns ± 2%   +1.25%  (p=0.000 n=20+20)
SliceByteToString/8-8    17.1ns ± 1%  17.5ns ± 1%   +2.16%  (p=0.000 n=19+19)
SliceByteToString/16-8   23.6ns ± 1%  23.9ns ± 1%   +1.41%  (p=0.000 n=20+18)
SliceByteToString/32-8   26.0ns ± 1%  25.8ns ± 0%   -1.05%  (p=0.000 n=19+16)
SliceByteToString/64-8   30.0ns ± 0%  30.2ns ± 0%   +0.56%  (p=0.000 n=16+18)
SliceByteToString/128-8  38.9ns ± 0%  39.0ns ± 0%   +0.23%  (p=0.019 n=19+15)

Fixes #24172

Change-Id: I3dfa14eefbf9fb4387114e20c9cb40e186abe962
Reviewed-on: https://go-review.googlesource.com/97717
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-01 17:38:06 +00:00
Josh Bleecher Snyder aa9c1a8f80 runtime: fix amd64p32 indexbytes in presence of overflow
When the slice/string length is very large,
probably artificially large, as in CL 97523,
adding BX (length) to R11 (pointer) overflows.
As a result, checking DI < R11 yields the wrong result.
Since they will be equal when the loop is done,
just check DI != R11 instead.
Yes, the pointer itself could overflow, but if that happens,
something else has gone pretty wrong; not our concern here.
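
Illustrative only (uint32 stands in for amd64p32's 32-bit pointers;
unsigned arithmetic wraps in Go just as it does in the assembly):

    func scanSketch() {
        cur := uint32(0xfffffff0) // "pointer" near the top of the address space
        end := cur + 0x40         // wraps around to a small value
        for ; cur != end; cur++ { // with cur < end the body would never run
            // ... examine the byte at "address" cur ...
        }
    }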

Fixes #24187

Change-Id: I2f60fc6ccae739345d01bc80528560726ad4f8c6
Reviewed-on: https://go-review.googlesource.com/97802
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2018-03-01 16:53:33 +00:00
Tobias Klauser c7c01efd96 runtime: clean up libc_* definitions on Solaris
All functions defined in syscall2_solaris.go have a corresponding libc_*
var in syscall_solaris.go, except for libc_close. Move it there from
os3_solaris.go.

Remove unused libc_fstat.

Order go:cgo_import_dynamic and go:linkname lists in
syscall2_solaris.go alphabetically.

Change-Id: I9f12fa473cf1ae351448ac45597c82a67d799c31
Reviewed-on: https://go-review.googlesource.com/97736
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-03-01 07:31:53 +00:00
Richard Miller c2cdfbd1a7 runtime: don't try to shrink address space with brk in Plan 9
Plan 9 won't let brk shrink the data segment if it's shared with
other processes (which it is in the Go runtime). So we keep track
of the notional end of the segment as it moves up and down, and
call brk only when it grows.
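
A hedged sketch of the bookkeeping this implies (names illustrative;
the real code lives in the Plan 9 memory management glue):

    var bloc uintptr    // notional end of the data segment, moves up and down
    var blocMax uintptr // high-water mark actually set via brk

    func sbrk(n uintptr) unsafe.Pointer {
        bl := bloc
        if bl+n > blocMax {
            // Only grow: never ask brk to shrink the shared segment.
            if brk_(unsafe.Pointer(bl+n)) < 0 {
                return nil
            }
            blocMax = bl + n
        }
        bloc += n
        return unsafe.Pointer(bl)
    }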

Corrects CL 94776.

Updates #23860.
Fixes #24013.

Change-Id: I754232decab81dfd71d690f77ee6097a17d9be11
Reviewed-on: https://go-review.googlesource.com/97595
Reviewed-by: David du Colombier <0intro@gmail.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: David du Colombier <0intro@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-02-28 15:57:10 +00:00
Keith Randall 2413b54888 cmd/compile: mark the first word of an interface as a uintptr
The first word of an interface is a pointer, but for the purposes
of GC we don't need to treat it as such.
 1. If it is a non-empty interface, the pointer points to an itab
    which is always in persistentalloc space.
 2. If it is an empty interface, the pointer points to a _type.
   a. If it is a compile-time-allocated type, it points into
      the read-only data section.
   b. If it is a reflect-allocated type, it points into the Go heap.
      Reflect is responsible for keeping a reference to
      the underlying type so it won't be GCd.

If we ever have a moving GC, we need to change this for 2b (as
well as scan itabs to update their itab._type fields).

Write barriers on the first word of interfaces have already been removed.

Change-Id: I643e91d7ac4de980ac2717436eff94097c65d959
Reviewed-on: https://go-review.googlesource.com/97518
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2018-02-27 22:58:32 +00:00
Tobias Klauser 5b21bf6f81 runtime: simplify walltime/nanotime on linux/{386,amd64}
Avoid an unnecessary MOVL/MOVQ.

Follow CL 97377

Change-Id: Ic43976d6b0cece3ed455496d18aedd67e0337d3f
Reviewed-on: https://go-review.googlesource.com/97358
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-02-27 18:42:41 +00:00
Josh Bleecher Snyder c5d6c42d35 runtime: improve 386/amd64 systemstack
Minor improvements, noticed while investigating other things.

Shorten the prologue.

Make branch direction better for static branch prediction;
the most common case by far is switching stacks (g==curg).

Change-Id: Ib2211d3efecb60446355cda56194221ccb78057d
Reviewed-on: https://go-review.googlesource.com/97377
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-02-27 18:10:38 +00:00
Josh Bleecher Snyder 486caa26d7 runtime: short-circuit typedmemmove when dst==src
Change-Id: I855268a4c0d07ad602ec90f5da66422d3d87c5f2
Reviewed-on: https://go-review.googlesource.com/94595
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
2018-02-27 00:56:18 +00:00
Ian Lance Taylor 804e3e565e runtime: don't check for String/Error methods in printany
They have either already been called by preprintpanics, or they can
not be called safely because of the various conditions checked at the
start of gopanic.

Fixes #24059

Change-Id: I4a6233d12c9f7aaaee72f343257ea108bae79241
Reviewed-on: https://go-review.googlesource.com/96755
Reviewed-by: Austin Clements <austin@google.com>
2018-02-23 22:39:46 +00:00
Austin Clements 788464724c runtime: reduce arena size to 4MB on 64-bit Windows
Currently, we use 64MB heap arenas on 64-bit platforms. This works
well on UNIX-like OSes because they treat untouched pages as
essentially free. However, on Windows, committed memory is charged
against a process whether or not it has demand-faulted physical pages
in. Hence, on Windows, even a process with a tiny heap will commit
64MB for one heap arena, plus another 32MB for the arena map. Things
are much worse under the race detector, which increases the heap
commitment by a factor of 5.5X, leading to 384MB of committed memory
at runtime init.

Fix this by reducing the heap arena size to 4MB on Windows.

To counterbalance the effect of increasing the arena map size by a
factor of 16, and to further reduce the impact of the commitment for
the arena map, we switch from a single entry L1 arena map to a 64
entry L1 arena map.

Compared to the original arena design, this slows down the
x/benchmarks garbage benchmark by 0.49% (the slowdown from this commit
alone is 1.59%, but the previous commit bought us a 1% speed-up):

name                       old time/op  new time/op  delta
Garbage/benchmem-MB=64-12  2.28ms ± 1%  2.29ms ± 1%  +0.49%  (p=0.000 n=17+18)

(https://perf.golang.org/search?q=upload:20180223.1)

(This was measured on linux/amd64 by modifying its arena configuration
as above.)

Fixes #23900.

Change-Id: I6b7fa5ecebee2947bf20cfeb78c248809469c6b1
Reviewed-on: https://go-review.googlesource.com/96780
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-23 21:59:51 +00:00
Austin Clements ec25210564 runtime: support a two-level arena map
Currently, the heap arena map is a single, large array that covers
every possible arena frame in the entire address space. This is
practical up to about 48 bits of address space with 64 MB arenas.

However, there are two problems with this:

1. mips64, ppc64, and s390x support full 64-bit address spaces (though
   on Linux only s390x has kernel support for 64-bit address spaces).
   On these platforms, it would be good to support these larger
   address spaces.

2. On Windows, processes are charged for untouched memory, so for
   processes with small heaps, the mostly-untouched 32 MB arena map
   plus a 64 MB arena are significant overhead. Hence, it would be
   good to reduce both the arena map size and the arena size, but with
   a single-level arena, these are inversely proportional.

This CL adds support for a two-level arena map. Arena frame numbers
are now divided into arenaL1Bits of L1 index and arenaL2Bits of L2
index.
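
A sketch of the indexing this sets up (simplified; the fast paths for
arenaL1Bits == 0 and any base offset are omitted):

    func arenaIndex(p uintptr) arenaIdx {
        return arenaIdx(p / heapArenaBytes)
    }

    func (i arenaIdx) l1() uint { return uint(i) >> arenaL2Bits }
    func (i arenaIdx) l2() uint { return uint(i) & (1<<arenaL2Bits - 1) }

A lookup then walks both levels, e.g. mheap_.arenas[ai.l1()][ai.l2()].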

At the moment, arenaL1Bits is always 0, so we effectively have a
single level map. We do a few things so that this has no cost beyond
the current single-level map:

1. We embed the L2 array directly in mheap, so if there's a single
   entry in the L2 array, the representation is identical to the
   current representation and there's no extra level of indirection.

2. Hot code that accesses the arena map is structured so that it
   optimizes to nearly the same machine code as it does currently.

3. We make some small tweaks to hot code paths and to the inliner
   itself to keep some important functions inlined despite their
   now-larger ASTs. In particular, this is necessary for
   heapBitsForAddr and heapBits.next.

Possibly as a result of some of the tweaks, this actually slightly
improves the performance of the x/benchmarks garbage benchmark:

name                       old time/op  new time/op  delta
Garbage/benchmem-MB=64-12  2.28ms ± 1%  2.26ms ± 1%  -1.07%  (p=0.000 n=17+19)

(https://perf.golang.org/search?q=upload:20180223.2)

For #23900.

Change-Id: If5164e0961754f97eb9eca58f837f36d759505ff
Reviewed-on: https://go-review.googlesource.com/96779
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-23 21:59:50 +00:00
Austin Clements 33b76920ec runtime: rename "arena index" to "arena map"
There are too many places where I want to talk about "indexing into
the arena index". Make this less awkward and ambiguous by calling it
the "arena map" instead.

Change-Id: I726b0667bb2139dbc006175a0ec09a871cdf73f9
Reviewed-on: https://go-review.googlesource.com/96777
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-23 21:59:48 +00:00