mirror of https://github.com/golang/go.git
[release-branch.go1.8] runtime: don't corrupt arena bounds on low mmap
Cherry-pick of CL 43870.

If mheap.sysAlloc doesn't have room in the heap arena for an allocation, it will attempt to map more address space with sysReserve. sysReserve is given a hint, but can return any unused address range. Currently, mheap.sysAlloc incorrectly assumes the returned region will never fall between arena_start and arena_used. If it does, mheap.sysAlloc will blindly accept the new region as the new arena_used and arena_end, causing these to decrease and make it so any Go heap above the new arena_used is no longer considered part of the Go heap. This assumption *used to be* safe because we had all memory between arena_start and arena_used mapped, but when we switched to an arena_start of 0 on 32-bit, it became no longer safe.

Most likely, we've only recently seen this bug occur because we usually start arena_used just above the binary, which is low in the address space. Hence, the kernel is very unlikely to give us a region before arena_used.

Since mheap.sysAlloc is a linear allocator, there's not much we can do to handle this well. Hence, we fix this problem by simply rejecting the new region if it isn't after arena_end. In this case, we'll take the fall-back path and mmap a small region at any address just for the requested memory.

Fixes #20259.

Change-Id: Ib72e8cd621545002d595c7cade1e817cfe3e5b1e
Reviewed-on: https://go-review.googlesource.com/43954
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
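To make the failure mode concrete, here is a minimal, self-contained sketch (not the runtime's code) of the old and new acceptance guards. The names arenaStart, arenaUsed, arenaEnd, and maxArena32, the sample addresses, and the 4 GB limit are all invented for the illustration; only the shape of the two predicates follows the diff below.

package main

import "fmt"

func main() {
	// Hypothetical 32-bit layout: arena_start is 0 (as described above)
	// and the arena has already grown to 512 MB. All values are made up.
	const (
		arenaStart uintptr = 0
		arenaUsed  uintptr = 0x20000000 // top of the tracked heap
		arenaEnd   uintptr = 0x20000000
		maxArena32 uintptr = 1<<32 - 1 // assumed limit: the whole 32-bit space
	)

	// Suppose sysReserve ignores the hint and returns an unused hole
	// below arena_used.
	p := uintptr(0x10000000)
	pSize := uintptr(256 << 20)

	// Old guard: accept any region above arena_start that stays under the limit.
	oldOK := arenaStart <= p && p+pSize-arenaStart-1 <= maxArena32
	// New guard: accept only regions strictly beyond the current arena_end,
	// so the linear allocator's bounds can never move backwards.
	newOK := arenaEnd < p && p+pSize-arenaStart-1 <= maxArena32

	fmt.Println(oldOK, newOK) // true false
	if oldOK {
		// With the old guard, arena_used would be pulled back to p,
		// abandoning the live heap between p and the old arena_used.
		fmt.Printf("arena_used would drop from %#x to %#x\n", arenaUsed, p)
	}
}

With the new guard the low mapping is rejected, and sysAlloc takes the fall-back path of mmapping a small region at any address just for the requested memory, as the commit message describes.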
This commit is contained in:
parent 1054085dcf
commit a43c0d2dc8
@@ -400,10 +400,12 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 			if p == 0 {
 				return nil
 			}
+			// p can be just about anywhere in the address
+			// space, including before arena_end.
 			if p == h.arena_end {
 				h.arena_end = new_end
 				h.arena_reserved = reserved
-			} else if h.arena_start <= p && p+p_size-h.arena_start-1 <= _MaxArena32 {
+			} else if h.arena_end < p && p+p_size-h.arena_start-1 <= _MaxArena32 {
 				// Keep everything page-aligned.
 				// Our pages are bigger than hardware pages.
 				h.arena_end = p + p_size
@@ -413,6 +415,16 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 				h.arena_used = used
 				h.arena_reserved = reserved
 			} else {
+				// We got a mapping, but it's not
+				// linear with our current arena, so
+				// we can't use it.
+				//
+				// TODO: Make it possible to allocate
+				// from this. We can't decrease
+				// arena_used, but we could introduce
+				// a new variable for the current
+				// allocation position.
+
 				// We haven't added this allocation to
 				// the stats, so subtract it from a
 				// fake stat (but avoid underflow).
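A hedged aside on the part of the guard that did not change, p+p_size-h.arena_start-1 <= _MaxArena32: one plausible reading is that subtracting 1 compares the address of the region's last byte rather than its exclusive end, so a region ending exactly at the 4 GB boundary is still described by a value that fits in 32 bits, even though p+p_size itself wraps to 0 in 32-bit arithmetic. The uint32 simulation below only illustrates that arithmetic; the limit value and addresses are assumptions, not taken from the runtime.

package main

import "fmt"

func main() {
	// Simulate 32-bit uintptr arithmetic with uint32.
	const maxArena32 uint32 = 1<<32 - 1 // assumed: the whole 32-bit space

	var (
		arenaStart uint32 = 0
		p          uint32 = 0xF0000000 // region starting 256 MB below 4 GB
		pSize      uint32 = 1 << 28    // 256 MB, so the region ends exactly at 4 GB
	)

	// The exclusive end p+pSize wraps to 0, so a comparison against it would
	// be against a wrapped value; the last byte's address still fits in 32 bits.
	end := p + pSize
	lastByte := p + pSize - arenaStart - 1
	fmt.Printf("naive end = %#x (wrapped), last byte = %#x, within limit: %v\n",
		end, lastByte, lastByte <= maxArena32)
}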