Go exchanges siginfo and sigevent structures with the kernel. They
contain unions, but Go's use is limited to the first few fields. Pad out
the rest so the size Go sees is the same as what the Linux kernel sees.
This is a follow-up to CL 342052, which added the sigevent struct without
padding. It updates the siginfo struct as well so there are no bad
examples in the defs_linux_*.go files.
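A minimal sketch of the idea, using sigevent as the example (field names
and the pad size here are illustrative rather than copied from the
generated defs files; 44 bytes is what linux/amd64 would need to reach
the kernel's 64-byte struct sigevent):

    type sigevent struct {
        value  uintptr
        signo  int32
        notify int32
        // What follows in the kernel's definition is a union; the thread ID
        // used with SIGEV_THREAD_ID is the only union member Go touches.
        sigev_notify_thread_id int32
        // Pad out the rest of the union so the size Go allocates matches
        // the size the kernel may write to.
        _ [44]byte
    }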
Change-Id: Id991d4a57826677dd7e6cc30ad113fa3b321cddf
Reviewed-on: https://go-review.googlesource.com/c/go/+/353136
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Using setitimer on Linux to request SIGPROF signal deliveries in
proportion to the process's on-CPU time results in under-reporting when
the program uses several goroutines in parallel. Linux calculates the
process's total CPU spend on a regular basis (often every 4ms); if the
process has spent enough CPU time since the last calculation to warrant
more than one SIGPROF (usually 10ms for the default sample rate of 100
Hz), the kernel is often able to deliver only one of them. With these
common settings, that results in Go CPU profiles being attenuated for
programs that use more than 2.5 goroutines in parallel.
To avoid in effect overflowing the kernel's process-wide CPU counter,
and relying on Linux's typical behavior of having the active thread
handle the resulting process-targeted signal, use timer_create to
request a timer for each OS thread that the Go runtime manages. Have
each timer track the CPU time of a single thread, with the resulting
SIGPROF going directly to that thread.
To continue tracking CPU time spent on threads that don't interact with
the Go runtime (such as those created and used in cgo), keep using
setitimer in addition to the new mechanism. When a SIGPROF signal
arrives, check whether it's due to setitimer or timer_create and filter
as appropriate: If the thread is known to Go (has an M) and has a
timer_create timer, ignore SIGPROF signals from setitimer. If the thread
is not known to Go (does not have an M), ignore SIGPROF signals that are
not from setitimer.
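A sketch of that decision as a pure function (hypothetical names; the real
check lives in the runtime's SIGPROF handling, and my understanding is that
on Linux the two sources can be told apart via the siginfo si_code,
SI_KERNEL for setitimer versus SI_TIMER for timer_create):

    // takeSample reports whether this SIGPROF delivery should be counted.
    func takeSample(haveM, haveThreadTimer, fromSetitimer bool) bool {
        if haveM && haveThreadTimer {
            // The per-thread timer owns sampling for this thread.
            return !fromSetitimer
        }
        if !haveM {
            // Threads unknown to Go are covered only by setitimer.
            return fromSetitimer
        }
        return true
    }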
Counteract the new bias that per-thread profiling adds against
short-lived threads (or those that are only active on occasion for a
short time, such as garbage collection workers on mostly-idle systems)
by configuring the timers' initial trigger to be from a uniform random
distribution between "immediate trigger" and the full requested sample
period.
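Sketched below with illustrative names (math/rand, time): the first
expiration is drawn uniformly from [0, period), and the timer then fires
every full period. With the default 100 Hz rate the period is 10ms, so a
thread is eligible for its first sample at a random point within its first
10ms of CPU time.

    // initialTrigger picks the delay before a new thread's first SIGPROF
    // so that threads living less than one full period still get sampled
    // in proportion to their CPU time.
    func initialTrigger(period time.Duration) time.Duration {
        return time.Duration(rand.Int63n(int64(period)))
    }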
Updates #35057
Change-Id: Iab753c4e5101bdc09ef9132eec84a75478e05579
Reviewed-on: https://go-review.googlesource.com/c/go/+/324129
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: David Chase <drchase@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
At this point all funcPC references are ABIInternal functions.
Replace with the intrinsics.
Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Use FuncPCABI0 to reference ABI0 assembly symbols. Currently,
they are referenced using funcPC, which will get the ABI wrapper's
address. They don't seem to affect correctness (either the wrapper is
harmless or, on non-AMD64 architectures, wrappers are not enabled), but
they should have been converted anyway.
This CL does not yet completely eliminate funcPC. But at this
point we should be able to replace all remaining uses of funcPC
with internal/abi.FuncPCABIInternal.
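The replacement pattern, with placeholder names (asmRoutine stands for any
ABI0 assembly entry point, goFunc for any Go function):

    // Before: funcPC may resolve to the ABI wrapper generated for an
    // assembly function rather than the assembly entry point itself.
    pc := funcPC(asmRoutine)

    // After: state explicitly which entry point is wanted.
    pc0 := abi.FuncPCABI0(asmRoutine)    // the ABI0 assembly symbol
    pcI := abi.FuncPCABIInternal(goFunc) // an ABIInternal (Go) function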
Change-Id: I383a686e11d570f757f185fe46769a42c856ab77
Reviewed-on: https://go-review.googlesource.com/c/go/+/321952
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
There are a few assembly functions in the runtime that are marked
as ABIInternal solely so that funcPC gets the right address.
The functions themselves do not actually follow ABIInternal (or the
ABI is irrelevant to them). Now that we have internal/abi.FuncPCABI0,
use it and un-mark the functions.
Also un-mark assembly functions that are only called in assembly.
For them, it only matters if the caller and callee are consistent.
Change-Id: I240e126ac13cb362f61ff8482057ee9f53c24097
Reviewed-on: https://go-review.googlesource.com/c/go/+/321950
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
The previous CL introduced macros for transitions from the Windows ABI
to the Go ABI. This CL does the same for SysV and uses them in almost
all places where we transition from the C ABI to the Go ABI.
Compared to Windows, this transition is much simpler and I didn't find
any places that were getting it wrong. But this does let us unify a
lot of code nicely and introduces some degree of abstraction around
these ABI transitions.
Change-Id: Ib6bdecafce587ce18fca4c8300fcf401284a2bcd
Reviewed-on: https://go-review.googlesource.com/c/go/+/309930
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
During a cgocallback, the runtime calls needm to get an m.
The calls made during needm cannot themselves assume that
there is an m or a g (which is attached to the m).
In the old days of making direct system calls, the only thing
you had to do for such functions was mark them //go:nosplit,
to avoid the use of g in the stack split prologue.
But now, on operating systems that make system calls through
shared libraries and use code that saves state in the g or m
before doing so, it's not safe to assume g exists. In fact, it is
not even safe to call getg(), because it might fault dereferencing
the TLS storage to find the g pointer (that storage may not be
initialized yet, at least on Windows, and perhaps on other systems
in the future).
The specific routines that are problematic are usleep and osyield,
which are called during lock contention in lockextra, called
from needm.
All this is rather subtle and hidden, so in addition to fixing the
problem on Windows, this CL makes the fact of not running on
a g much clearer by introducing variants usleep_no_g and
osyield_no_g whose names should make clear that there is no g.
And then we can remove the various sketchy getg() == nil checks
in the existing routines.
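A rough user-space approximation of what the no-g variant boils down to
(linux/amd64 field widths; imports syscall and unsafe; the real runtime
version goes through its own assembly system-call path rather than the
syscall package):

    //go:nosplit
    func usleep_no_g(usec uint32) {
        ts := syscall.Timespec{
            Sec:  int64(usec / 1e6),
            Nsec: int64(usec%1e6) * 1e3,
        }
        // Direct nanosleep system call; nothing here touches getg() or TLS.
        syscall.Syscall(syscall.SYS_NANOSLEEP, uintptr(unsafe.Pointer(&ts)), 0, 0)
    }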
As part of this cleanup, this CL also deletes onosstack on Windows.
onosstack is from back when the runtime was implemented in C.
It predates systemstack but does essentially the same thing.
Instead of having two different copies of this code, we can use
systemstack consistently. This way we need not port onosstack
to each architecture.
This CL is part of a stack of CLs adding windows/arm64
support (#36439), intended to land in the Go 1.17 cycle.
This CL is, however, not windows/arm64-specific.
It is cleanup meant to make the port (and future ports) easier.
Change-Id: I3352de1fd0a3c26267c6e209063e6e86abd26187
Reviewed-on: https://go-review.googlesource.com/c/go/+/288793
Trust: Russ Cox <rsc@golang.org>
Trust: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Calls to lock may need to use global members of mOS that also need to be
cleaned up before the thread exits. Before this commit, these resources
would leak. Moving them to be cleaned up in unminit, however, would race
with gstack on unix. So this creates a new helper, mdestroy, to release
resources that must be destroyed only after locks are no longer
required. We also move highResTimer lifetime to the same semantics,
since it doesn't help to constantly acquire and release the timer object
during dropm.
Updates #43720.
Change-Id: Ib3f598f3fda1b2bbcb608099616fa4f85bc1c289
Reviewed-on: https://go-review.googlesource.com/c/go/+/284137
Run-TryBot: Jason A. Donenfeld <Jason@zx2c4.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Trust: Alex Brainman <alex.brainman@gmail.com>
Trust: Jason A. Donenfeld <Jason@zx2c4.com>
Under linux+cgo, OS threads are launched via pthread_create().
This abstraction, under Linux, requires that we avoid blocking
signals 32, 33, and 34 indefinitely because they are needed to
reliably execute POSIX-semantics threading in glibc and/or musl.
When blocking signals, the Go runtime generally re-enables them
quickly. However, when a thread exits (under cgo, this is
via a return from mstart()), we avoid a deadlock in C code by
not blocking these three signals.
Fixes #42494
Change-Id: I02dfb2480a1f97d11679e0c4b132b51bddbe4c14
Reviewed-on: https://go-review.googlesource.com/c/go/+/269799
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Trust: Tobias Klauser <tobias.klauser@gmail.com>
startupRandomData is only used in sysauxv and getRandomData on Linux,
so move it closer to where it is used. Also adjust its godoc comment.
Change-Id: Ice51d579ec33436adbfdf247caf4ba00bae865e0
Reviewed-on: https://go-review.googlesource.com/c/go/+/248761
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Go 1.14 included a (rather awful) workaround for a Linux kernel bug
that corrupted vector registers on x86 CPUs during signal delivery
(https://bugzilla.kernel.org/show_bug.cgi?id=205663). This bug was
introduced in Linux 5.2 and fixed in 5.3.15, 5.4.2 and all 5.5 and
later kernels. The fix was also back-ported by major distros. This
workaround was necessary, but had unfortunate downsides, including
causing Go programs to exceed the mlock ulimit in many configurations
(#37436).
We're reasonably confident that by the Go 1.16 release, the number of
systems running affected kernels will be vanishingly small. Hence,
this CL removes this workaround.
This effectively reverts CLs 209597 (version parser), 209899 (mlock
top of signal stack), 210299 (better failure message), 223121 (soft
mlock failure handling), and 244059 (special-case patched Ubuntu
kernels). The one thing we keep is the osArchInit function. It's empty
everywhere now, but is a reasonable hook to have.
Updates #35326, #35777 (the original register corruption bugs).
Updates #40184 (request to revert in 1.15).
Fixes #35979.
Change-Id: Ie213270837095576f1f3ef46bf3de187dc486c50
Reviewed-on: https://go-review.googlesource.com/c/go/+/246200
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Some runtime calls accept a slice, but only use ptr and len.
This change modifies most such routines to accept only ptr and len.
After this change, the only runtime calls that accept an unnecessary
cap arg are concatstrings and slicerunetostring.
Neither is particularly common, and both are complicated to modify.
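An illustrative before/after for one such routine (parameter names
abbreviated; the exact set of converted functions is in the CL itself):

    old: func slicebytetostring(buf *tmpBuf, b []byte) string
    new: func slicebytetostring(buf *tmpBuf, ptr *byte, n int) string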
Negligible compiler performance impact. Shrinks binaries a little.
There are only a few regressions; the one I investigated was
due to register allocation fluctuation.
Passes 'go test -race std cmd', modulo #38265 and #38266.
Wow, does that take a long time to run.
Updates #36890
file       before     after      Δ       %
compile    19655024   19655152   +128    +0.001%
cover      5244840    5236648    -8192   -0.156%
dist       3662376    3658280    -4096   -0.112%
link       6680056    6675960    -4096   -0.061%
pprof      14789844   14777556   -12288  -0.083%
test2json  2824744    2820648    -4096   -0.145%
trace      11647876   11639684   -8192   -0.070%
vet        8260472    8256376    -4096   -0.050%
total      115163736  115118808  -44928  -0.039%
Change-Id: Idb29fa6a81d6a82bfd3b65740b98cf3275ca0a78
Reviewed-on: https://go-review.googlesource.com/c/go/+/227163
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Instead, note that mlock has failed, start trying the mitigation of
touching the signal stack before sending a preemption signal, and,
if the program crashes, mention the possible problem and a wiki page
describing the issue (https://golang.org/wiki/LinuxKernelSignalVectorBug).
Tested on a kernel in the buggy version range, but with the patch,
by using `ulimit -l 0`.
Fixes #37436
Change-Id: I072aadb2101496dffd655e442fa5c367dad46ce8
Reviewed-on: https://go-review.googlesource.com/c/go/+/223121
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Based on the riscv-go port.
Updates #27532
Change-Id: If522807a382130be3c8d40f4b4c1131d1de7c9e3
Reviewed-on: https://go-review.googlesource.com/c/go/+/204632
Run-TryBot: Joel Sing <joel@sing.id.au>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Linux 5.2 introduced a bug that can corrupt vector registers on return
from a signal if the signal stack isn't faulted in:
https://bugzilla.kernel.org/show_bug.cgi?id=205663
This CL works around this by mlocking the top page of all Go signal
stacks on the affected kernels.
Fixes #35326, #35777
Change-Id: I77c80a2baa4780827633f92f464486caa222295d
Reviewed-on: https://go-review.googlesource.com/c/go/+/209899
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This adds pipe/pipe2 on Solaris as they exist on other Unix systems.
They were not added previously because Solaris does not need them
for netpollBreak. They are added now in preparation for using pipes
in TestSignalM.
Updates #35276
Change-Id: I53dfdf077430153155f0a79715af98b0972a841c
Reviewed-on: https://go-review.googlesource.com/c/go/+/206077
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
We'll add a test once all of the POSIX platforms are done.
For #10958, #24543.
Change-Id: If7e3f14e8391791364877629bf415d9f8e788b0a
Reviewed-on: https://go-review.googlesource.com/c/go/+/201401
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This requires defining pipe, pipe2, and setNonblock for various platforms.
The new function is currently only used on AIX. It will be used by
later CLs in this series.
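A user-space approximation of the new helper (the name and the use of
golang.org/x/sys/unix are mine for illustration; the runtime builds it from
its own pipe, pipe2, and setNonblock wrappers): prefer pipe2 for an
atomically nonblocking, close-on-exec pipe, and fall back to pipe plus
fcntl where pipe2 is unavailable.

    func nonblockingPipe() (r, w int, err error) {
        var p [2]int
        if err = unix.Pipe2(p[:], unix.O_NONBLOCK|unix.O_CLOEXEC); err == nil {
            return p[0], p[1], nil
        }
        if err != unix.ENOSYS {
            return -1, -1, err
        }
        if err = unix.Pipe(p[:]); err != nil {
            return -1, -1, err
        }
        for _, fd := range p {
            unix.SetNonblock(fd, true)
            unix.CloseOnExec(fd)
        }
        return p[0], p[1], nil
    }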
Updates #27707
Change-Id: Id2f987b66b4c66a3ef40c22484ff1d14f58e9b31
Reviewed-on: https://go-review.googlesource.com/c/go/+/171822
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change adds two new treap iteration types: one for large
unscavenged spans (containing at least one huge page) and one for small
unscavenged spans. This allows us to scavenge the huge pages first, by
iterating over the large spans before the small ones.
Also, since we now depend on physHugePageSize being a power of two,
ensure that this is the case when it's retrieved from the OS.
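One way to enforce that invariant when the value comes from the OS (a
sketch, not necessarily what the runtime does): keep only the highest set
bit, rounding any odd report down to a power of two.

    if physHugePageSize&(physHugePageSize-1) != 0 {
        // Not a power of two; round down to the nearest one (math/bits).
        physHugePageSize = 1 << (bits.Len(uint(physHugePageSize)) - 1)
    }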
For #30333.
Change-Id: I51662740205ad5e4905404a0856f5f2b2d2a5680
Reviewed-on: https://go-review.googlesource.com/c/go/+/174399
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This change adds the global physHugePageSize, which is initialized in
osinit(). physHugePageSize contains the system's transparent huge page
(or superpage) size in bytes.
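On Linux the value is likely read from sysfs during osinit; a user-space
sketch of that lookup (the path is my assumption, and the runtime uses its
own open/read rather than the os package):

    data, err := os.ReadFile("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size")
    if err == nil {
        s := strings.TrimSpace(string(data))
        if v, perr := strconv.ParseUint(s, 10, 64); perr == nil {
            physHugePageSize = uintptr(v) // the new runtime global
        }
    }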
For #30333.
Change-Id: I2f0198c40729dbbe6e6f2676cef1d57dd107562c
Reviewed-on: https://go-review.googlesource.com/c/go/+/170858
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
The general code for setting a timespec value sometimes used set_nsec
and sometimes used a combination of set_sec and set_nsec. Standardize
on a setNsec function that takes a number of nanoseconds and splits
them up to set the tv_sec and tv_nsec fields. Consistently mark
setNsec as go:nosplit, since it has to be that way on some systems
including Darwin and GNU/Linux. Consistently use timediv on 32-bit
systems to help stay within split-stack limits on processors that
don't have a 64-bit division instruction.
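A sketch of the resulting shape on a 64-bit platform (field widths, and
the timediv-based variant for 32-bit systems, differ per GOOS/GOARCH):

    type timespec struct {
        tv_sec  int64
        tv_nsec int64
    }

    //go:nosplit
    func (ts *timespec) setNsec(ns int64) {
        ts.tv_sec = ns / 1e9
        ts.tv_nsec = ns % 1e9
    }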
Change-Id: I6396bb7ddbef171a96876bdeaf7a1c585a6d725b
Reviewed-on: https://go-review.googlesource.com/c/go/+/167389
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This avoids problems when running under QEMU. It seems that at least
some QEMU versions turn the sigaction implementation into a call to
the C library sigaction function. The C library function will reject
attempts to set the signal handler for signals 32 and 33. Ignore
errors in that case.
Change-Id: Id443a9a32f6fb0ceef5c59a398e7ede30bf71646
Reviewed-on: https://go-review.googlesource.com/125955
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The Go runtime registers a handler for every signal. This prevents Go
binaries from working on QEMU in user-emulation mode, since the hacky
way QEMU implements signals on Linux assumes that no-one uses signal
64 (SIGRTMAX).
In the past, we had a workaround in the runtime to prevent crashes on
start-up when running on QEMU:
golang.org/cl/124900043
golang.org/cl/16853
but it was lost during the 1.11 dev cycle. More precisely, the test
for SIGRTMAX was dropped in CL 18150 when we stopped testing the
result of sigaction in the Linux implementation of setsig. That change
was made to avoid a stack split overflow because code started calling
setsig from nosplit functions. Then in CL 99077 we started testing the
result of sigaction again, this time using systemstack to avoid the
stack split overflow. When this test was added back, we did not bring
back the test of SIGRTMAX.
As a result, Go 1.10 binaries work on QEMU, while Go 1.11 binaries
immediately crash on startup.
This change restores the QEMU workaround.
Updates #24656
Change-Id: I46380b1e1b4bf47db7bc7b3d313f00c4e4c11ea3
Reviewed-on: https://go-review.googlesource.com/111176
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Replace thread creation with calls to the pthread
library in libc.
Update #17490
Change-Id: I1e19965c45255deb849b059231252fc6a7861d6c
Reviewed-on: https://go-review.googlesource.com/108679
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This normalizes the Linux code to act like other targets. The size
argument to the rt_sigaction system call is pushed to a single
function, sysSigaction.
This is intended as a simplification step for CL 93875 for #14327.
Change-Id: I594788e235f0da20e16e8a028e27ac8c883907c4
Reviewed-on: https://go-review.googlesource.com/99077
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Use the __vdso_clock_gettime fast path via the vDSO on linux/arm to
speed up nanotime and walltime. This results in the following
performance improvement for time.Now on a RaspberryPi 3 (running
32bit Raspbian, i.e. GOOS=linux/GOARCH=arm):
name      old time/op  new time/op  delta
TimeNow   0.99µs ± 0%  0.39µs ± 1%  -60.74%  (p=0.000 n=12+20)
Change-Id: I3598278a6c88d7f6a6ce66c56b9d25f9dd2f4c9a
Reviewed-on: https://go-review.googlesource.com/98095
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Follow-up to CL 93655, which removed the (commented-out) usage of this
function.
Also remove unused constant _RLIMIT_AS and type rlimit.
Change-Id: Ifb6e6b2104f4c2555269f8ced72bfcae24f5d5e9
Reviewed-on: https://go-review.googlesource.com/94775
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently large sysReserve calls on some OSes don't actually reserve
the memory, but just check that it can be reserved. This was important
when we called sysReserve to "reserve" many gigabytes for the heap up
front, but now that we map memory in small increments as we need it,
this complication is no longer necessary.
This has one curious side benefit: currently, on Linux, allocations
that are large enough to be rejected by mmap wind up freezing the
application for a long time before it panics. This happens because
sysReserve doesn't reserve the memory, so sysMap calls mmap_fixed,
which calls mmap, which fails because the mapping is too large.
However, mmap_fixed doesn't inspect *why* mmap fails, so it falls back
to probing every page in the desired region individually with mincore
before performing an (otherwise dangerous) MAP_FIXED mapping, which
will also fail. This takes a long time for a large region. Now this
logic is gone, so the mmap failure leads to an immediate panic.
Updates #10460.
Change-Id: I8efe88c611871cdb14f99fadd09db83e0161ca2e
Reviewed-on: https://go-review.googlesource.com/85888
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
By default futexes are permitted in shared memory regions, which
requires the kernel to translate the memory address. Since our futexes
are never in shared memory, set FUTEX_PRIVATE_FLAG, which makes futex
operations slightly more efficient.
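Sketch of the constants involved (values as in <linux/futex.h>; the
runtime-internal names are illustrative), with futexsleep and futexwakeup
presumably passing the _PRIVATE variants as the futex op:

    const (
        _FUTEX_PRIVATE_FLAG = 128
        _FUTEX_WAIT_PRIVATE = 0 | _FUTEX_PRIVATE_FLAG // FUTEX_WAIT | FUTEX_PRIVATE_FLAG
        _FUTEX_WAKE_PRIVATE = 1 | _FUTEX_PRIVATE_FLAG // FUTEX_WAKE | FUTEX_PRIVATE_FLAG
    )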
Change-Id: I2a82365ed27d5cd8d53c5382ebaca1a720a80952
Reviewed-on: https://go-review.googlesource.com/80144
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The current code can potentially return a smaller processor count on a
Linux kernel when its cpumask_size (controlled by both kernel config and
boot parameter) is not a multiple of the pointer size, because
r/sys.PtrSize will be rounded down. Since sched_getaffinity returns the
size in bytes, we can just allocate the buf as a byte array to avoid the
extra calculation with the pointer size and roundups.
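A sketch of the byte-oriented counting (buffer size chosen for
illustration and error handling elided; sched_getaffinity here is the
runtime's raw-syscall wrapper, which returns the number of mask bytes
written):

    var buf [8192]byte // room for a 64Ki-CPU mask
    n := sched_getaffinity(0, unsafe.Sizeof(buf), &buf[0])
    count := int32(0)
    for _, b := range buf[:n] {
        count += int32(bits.OnesCount8(b))
    }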
Change-Id: I0c21046012b88d8a56b5dd3dde1d158d94f8eea9
Reviewed-on: https://go-review.googlesource.com/75591
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently mmap returns an unsafe.Pointer that encodes OS errors as
values less than 4096. In practice this is okay, but it borders on
being really unsafe: for example, the value has to be checked
immediately after return and if stack copying were ever to observe
such a value, it would panic. It's also not remotely idiomatic.
Fix this by making mmap return a separate pointer value and error,
like a normal Go function.
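Illustratively, the wrapper's shape changes from overloading the pointer
with an errno to returning the two separately (err is a plain errno
integer, not an error interface):

    old: func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) unsafe.Pointer
    new: func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (p unsafe.Pointer, err int)

so callers can write the ordinary

    p, err := mmap(nil, size, _PROT_READ|_PROT_WRITE, _MAP_ANON|_MAP_PRIVATE, -1, 0)
    if err != 0 {
        // handle the OS error; p is meaningful only when err == 0
    }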
Updates #22218.
Change-Id: Iefd965095ffc82cc91118872753a5d39d785c3a6
Reviewed-on: https://go-review.googlesource.com/71270
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Found with mvdan.cc/unindent. Prioritized the ones with the biggest wins
for now.
Change-Id: I2b032e45cdd559fc9ed5b1ee4c4de42c4c92e07b
Reviewed-on: https://go-review.googlesource.com/56470
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Although mincore is declared in stubs.go, mincore isn't used on any
OS except Linux. Move it to os_linux.go and clean up unused code.
Change-Id: I6cfb0fed85c0317a4d091a2722ac55fa79fc7c9a
Reviewed-on: https://go-review.googlesource.com/54910
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
SysV semaphore undo lists should be shared by threads, just like
several other resources listed in cloneFlags. Currently we don't do
this, but it probably doesn't affect anything because 1) probably
nobody uses SysV semaphores from Go and 2) Go-created threads never
exit until the process does. Beyond being the right thing to do,
user-level QEMU requires this flag because it depends on glibc to
create new threads and glibc uses this flag.
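For reference, a sketch of the resulting flag set for new threads (names
as in the runtime's Linux defs; treat the exact list as illustrative):

    cloneFlags = _CLONE_VM | // share memory
        _CLONE_FS | // share cwd, etc.
        _CLONE_FILES | // share fd table
        _CLONE_SIGHAND | // share signal handler table
        _CLONE_SYSVSEM | // share SysV semaphore undo lists (this CL)
        _CLONE_THREAD // same thread group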
Fixes #20763.
Change-Id: I1d1dafec53ed87e0f4d4d432b945e8e68bb72dcd
Reviewed-on: https://go-review.googlesource.com/48170
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Android on ChromeOS uses a restrictive seccomp filter that blocks
sched_getaffinity, leading this code to index a slice by -errno.
Change-Id: Iec09a4f79dfbc17884e24f39bcfdad305de75b37
Reviewed-on: https://go-review.googlesource.com/34794
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Android's libc doesn't provide access to auxv, so currently the Go
runtime synthesizes a fake, minimal auxv when loaded as a library on
Android. This used to be sufficient, but now we depend on auxv to
retrieve the system physical page size and panic if we can't retrieve
it.
Fix this by falling back to reading auxv from /proc/self/auxv if the
loader-provided auxv is empty and removing the synthetic auxv vectors.
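A user-space sketch of the fallback parse (the runtime does this with raw
open/read and feeds the pairs to its existing auxv handler; little-endian
64-bit layout assumed):

    const (
        _AT_NULL   = 0 // terminates the vector
        _AT_PAGESZ = 6 // system physical page size
    )

    data, err := os.ReadFile("/proc/self/auxv")
    if err == nil {
        // auxv is a sequence of {type, value} machine-word pairs.
        for i := 0; i+16 <= len(data); i += 16 {
            tag := binary.LittleEndian.Uint64(data[i:])
            val := binary.LittleEndian.Uint64(data[i+8:])
            if tag == _AT_NULL {
                break
            }
            if tag == _AT_PAGESZ {
                physPageSize = uintptr(val)
            }
        }
    }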
Fixes #18041.
Change-Id: Ia2ec2c764a6609331494a5d359032c56cbb83482
Reviewed-on: https://go-review.googlesource.com/33652
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This ensures that the runtime's signal handlers pass through the TSAN and
MSAN libc interceptors, so that subsequent calls to the intercepted
sigaction function from C will correctly see them.
Fixes #17753.
Change-Id: I9798bb50291a4b8fa20caa39c02a4465ec40bb8d
Reviewed-on: https://go-review.googlesource.com/33142
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This implements a check that can be done at runtime for the ISA level and
hardware capability. It follows the same implementation as in s390x.
These checks will be important as we enable new instructions and write Go
asm implementations using them.
Updates #15403
Fixes #16643
Change-Id: Idfee374a3ffd7cf13a7d8cf0a6c83d247d3bee16
Reviewed-on: https://go-review.googlesource.com/32330
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is a more robust method for obtaining the availability of vx (the
s390x vector facility). Since this variable may be checked frequently,
I've also padded it so that it will be in its own cache line.
I've kept the other check (in hash/crc32) the same for now until
I can figure out the best way to update it.
Updates #15403.
Change-Id: I74eed651afc6f6a9c5fa3b88fa6a2b0c9ecf5875
Reviewed-on: https://go-review.googlesource.com/31149
Reviewed-by: Austin Clements <austin@google.com>