A tiny alloc memory block is shared by different goroutines running on the same thread.
We call racemalloc after enabling preemption in mallocgc,
so another goroutine can act on a tiny block that has not yet been race-cleared.
Call racemalloc before enabling preemption instead.
Fixes #7224.
LGTM=dave
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/57730043
Currently Windows crashes because early allocations in schedinit
try to allocate tiny memory blocks, but m->p is not yet set up.
I considered calling procresize(1) earlier in schedinit,
but this refactoring is better and should fix the issue as well.
Fixes #7218.
R=golang-codereviews, r
CC=golang-codereviews
https://golang.org/cl/54570045
On 32-bit systems n*sizeof(r[0]) can overflow.
Or it can become 1<<32-eps, and mallocgc will "successfully"
allocate 0 pages for it; there are no checks downstream,
and MHeap_Grow just does:
npage = (npage+15)&~15;
ask = npage<<PageShift;
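
For illustration, a checked multiplication in the spirit of this fix; a minimal Go sketch with made-up names, not the runtime's actual code:

    // allocSize returns n*elemSize and reports whether the product fits
    // in a uintptr; on a 32-bit system the multiplication can overflow.
    func allocSize(n, elemSize uintptr) (size uintptr, ok bool) {
        if elemSize != 0 && n > ^uintptr(0)/elemSize {
            return 0, false
        }
        return n * elemSize, true
    }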
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews
https://golang.org/cl/54760045
Currently for 2-word blocks we set the flag to clear the flag, which makes no sense.
In particular, on 32-bit systems we always call memclr.
R=golang-codereviews, dave, iant
CC=golang-codereviews, khr, rsc
https://golang.org/cl/41170044
The spans array is allocated in runtime·mallocinit. On a
32-bit system the number of entries in the spans array is
MaxArena32 / PageSize, which is (2U << 30) / (1 << 12) == (1 << 19).
So we are allocating an array sized for a 19-bit index, while the
index itself can need 20 bits. According to the comment in the
function, this is intentional: we only allocate enough spans
(and bitmaps) for a 2G arena, because allocating more would
probably be wasteful.
But since the span index is simply the upper 20 bits of the
memory address, this scheme only works if memory addresses are
limited to the low 2G of memory. That would be OK if we were
careful to enforce it, but we're not. What we are careful to
enforce, in functions like runtime·MHeap_SysAlloc, is that we
always return addresses between the heap's arena_start and
arena_start + MaxArena32.
We generally get away with it because we start allocating just
after the program end, so we only run into trouble with
programs that allocate a lot of memory, enough to get past
address 0x80000000.
This changes the code that computes a span index to subtract
arena_start on 32-bit systems just as we currently do on
64-bit systems.
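
The computation now looks roughly like this; a minimal Go sketch with illustrative names (the runtime itself is written in C):

    const pageShift = 12 // 4 KB pages

    // spanIndex maps a heap pointer to its index in the spans array,
    // measured from the start of the arena rather than from address 0.
    func spanIndex(p, arenaStart uintptr) uintptr {
        return (p - arenaStart) >> pageShift
    }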
R=golang-codereviews, rsc
CC=golang-codereviews
https://golang.org/cl/49460043
record finalizers and heap profile info. Enables
removing the special bit from the heap bitmap. Also
provides a generic mechanism for annotating occasional
heap objects.
finalizers:
         overhead     per obj
  old    680 B        80 B avg
  new    16 B/span    48 B
profile:
         overhead     per obj
  old    32KB         24 B + hash tables
  new    16 B/span    24 B
R=cshapiro, khr, dvyukov, gobot
CC=golang-codereviews
https://golang.org/cl/13314053
On the plus side, we don't need to change the bits when mallocing
pointerless objects. On the other hand, we need to mark objects in the
free lists during GC. But the free lists are small at GC time, so it
should be a net win.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 40 33 -17.65%
BenchmarkMalloc16 45 38 -15.72%
BenchmarkMallocTypeInfo8 58 59 +0.85%
BenchmarkMallocTypeInfo16 63 64 +1.10%
R=golang-dev, rsc, dvyukov
CC=cshapiro, golang-dev
https://golang.org/cl/41040043
And document it explicitly, even though it already said
it wasn't guaranteed.
Fixes #6857.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/43580043
When enabled, this new debugging mode will allocate objects on
their own page and never recycle memory addresses. This is an
essential tool for root-causing a broad class of heap corruption.
R=golang-dev, dave, daniel.morsing, dvyukov, rsc, iant, cshapiro
CC=golang-dev
https://golang.org/cl/22060046
Currently lots of sys allocations are not accounted in any of the XxxSys stats,
including the GC bitmap, spans table, GC roots blocks, GC finalizer blocks,
iface table, netpoll descriptors and more. Up to ~20% can be unaccounted.
This change introduces two new stats, GCSys and OtherSys, for GC metadata
and all other miscellaneous allocations, respectively.
It also ensures that all XxxSys stats indeed sum up to Sys. All sys memory allocation
functions now require the stat for accounting, so that it's impossible to miss one.
Also fix the updating of mcache_sys/inuse; they were not updated after deallocation.
test/bench/garbage/parser before:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14204928
MCacheSys 16384
BuckHashSys 1439992
after:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14188544
MCacheSys 16384
BuckHashSys 3194304
GCSys 39198688
OtherSys 3129656
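
These stats are exposed through runtime.MemStats; a minimal program to read the new fields, assuming a Go release that includes this change:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // GCSys and OtherSys are the two stats added by this change;
        // together with the other XxxSys fields they sum up to Sys.
        fmt.Println("GCSys:   ", m.GCSys)
        fmt.Println("OtherSys:", m.OtherSys)
        fmt.Println("Sys:     ", m.Sys)
    }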
Fixes #5799.
R=rsc, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12946043
#pragma textflag and #pragma dataflag directives.
Update dataflag directives to use symbols instead of integer constants.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/13310043
the use of the flag, especially for objects that actually do have
pointers but that we don't want the GC to scan.
R=golang-dev, cshapiro
CC=golang-dev
https://golang.org/cl/13181045
Originally the requirement was f(x) where f's argument is
exactly x's type.
CL 11858043 relaxed the requirement in a non-standard
way: f's argument must be exactly x's type or interface{}.
If we're going to relax the requirement, it should be done
in a way consistent with the rest of Go. This CL allows f's
argument to have any type to which x is assignable;
that's the same requirement the compiler would impose
if compiling f(x) directly.
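
For example, something like the following now type-checks (the types here are illustrative, not from the CL):

    package main

    import (
        "fmt"
        "runtime"
    )

    type closer interface{ Close() }

    type file struct{ fd int }

    func (f *file) Close() { fmt.Println("closing fd", f.fd) }

    func main() {
        f := &file{fd: 3}
        // *file is assignable to closer, so the finalizer's parameter may
        // be the interface type rather than exactly *file. (In this toy
        // program the finalizer may never actually run.)
        runtime.SetFinalizer(f, func(c closer) { c.Close() })
        _ = f
    }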
Fixes #5368.
R=dvyukov, bradfitz, pieter
CC=golang-dev
https://golang.org/cl/12895043
The original plan was to collect allocation stacks
for all memory blocks. But it was never implemented,
it's not in the near-term plans, and it's unclear how to do it at all.
R=golang-dev, dave, bradfitz
CC=golang-dev
https://golang.org/cl/12724044
Make it accept a type and combine the flags.
Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype
functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore; see the sketch below)
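
A rough before/after of the call-site readability; the flag names below are made up, not the runtime's actual constants:

    // Illustrative flag constants; the real names and values live in the
    // runtime and may differ.
    const (
        flagNoScan = 1 << iota // object contains no pointers
        flagNoZero             // memory does not need to be zeroed
    )

    // before (positional arguments):    mallocgc(sz, 0, 1, 0)
    // after  (type plus combined flags): mallocgc(sz, typ, flagNoZero)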
R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
It assumes that the m will not change, and the m may
change if the goroutine is preempted.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11560043
Currently the preemption signal g->stackguard0==StackPreempt
can be lost if it is received while preemption is disabled
(e.g. m->lock!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is re-enabled.
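
An illustrative sketch of the idea, with simplified names and types rather than the runtime's actual code:

    const stackPreempt = ^uintptr(0) // illustrative sentinel value

    type gSketch struct {
        stackguard0 uintptr
        preempt     bool
    }

    // requestPreempt records the request in both places; the boolean is
    // never lost, even if stackguard0 gets overwritten later.
    func requestPreempt(gp *gSketch) {
        gp.preempt = true
        gp.stackguard0 = stackPreempt
    }

    // preemptionEnabled is called when preemption becomes possible again
    // (e.g. the lock count drops to zero); it restores a signal that was
    // lost while preemption was disabled.
    func preemptionEnabled(gp *gSketch) {
        if gp.preempt {
            gp.stackguard0 = stackPreempt
        }
    }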
Update #543.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCaches and MSpans.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces the number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing;
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
Reincarnation of the committed and rolled back https://golang.org/cl/9805043
The latent bugs that it revealed are fixed in:
https://golang.org/cl/9837049
https://golang.org/cl/9778048
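
A minimal sketch of the caching idea, with illustrative names and sizes rather than the runtime's actual implementation (alignment is ignored here):

    package sketch

    import "sync"

    const chunkSize = 256 << 10 // refill granularity; illustrative

    // sysAlloc stands in for the underlying OS allocation (SysAlloc in
    // the runtime); here it is just a placeholder.
    func sysAlloc(n int) []byte { return make([]byte, n) }

    var persistent struct {
        mu  sync.Mutex
        buf []byte
    }

    // persistentAlloc hands out small, never-freed chunks carved from a
    // larger block, so individual small allocations do not hit the OS.
    func persistentAlloc(n int) []byte {
        if n > chunkSize {
            return sysAlloc(n) // large requests bypass the cache
        }
        persistent.mu.Lock()
        defer persistent.mu.Unlock()
        if n > len(persistent.buf) {
            persistent.buf = sysAlloc(chunkSize)
        }
        p := persistent.buf[:n:n]
        persistent.buf = persistent.buf[n:]
        return p
    }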
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
Then use the limit to make sure MHeap_LookupMaybe and its inlined
copies don't return a span if the pointer is beyond the limit.
Use this fact to optimize all call sites.
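
Roughly, the check looks like this; a Go sketch with illustrative names (the runtime code is C):

    const pageShift = 12 // illustrative 4 KB pages

    type span struct{} // span metadata elided

    // lookupSpan returns nil without touching the span table when the
    // pointer lies outside the portion of the arena actually in use.
    func lookupSpan(p, arenaStart, arenaUsed uintptr, spans []*span) *span {
        if p < arenaStart || p >= arenaUsed {
            return nil
        }
        return spans[(p-arenaStart)>>pageShift]
    }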
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/9869045
This depends on CL 9791044 (runtime: allocate page table lazily).
Once the page table is moved out of the heap, the heap becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
It will also allow us to eliminate an unnecessary memory dereference in GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces the number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing;
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
Currently per-sizeclass stats are lost when an MCache is destroyed; this patch fixes that.
Also, only update mstats.heap_alloc on heap operations, because that's the only
stat that needs to be promptly updated. Everything else needs to be up to date only in ReadMemStats().
R=golang-dev, remyoudompheng, dave, iant
CC=golang-dev
https://golang.org/cl/9207047
Also change the table type from int32[] to int8[] to save space in the L1 cache.
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.68%
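
For intuition, the conversion is just an index into a small byte table; a hedged sketch with a made-up size cutoff:

    const maxSmallSize = 1024 // made-up small-allocation cutoff

    // sizeToClass8 is filled in at startup; int8 entries make the hot
    // table a quarter the size of int32 entries in the L1 cache.
    var sizeToClass8 [maxSmallSize/8 + 1]int8

    func sizeClass(size uintptr) int8 {
        return sizeToClass8[(size+7)>>3]
    }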
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/9199044
For Go 1.1, stop checking the rlimit, because it broke now
that mheap is allocated using SysAlloc. See issue 5049.
R=r
CC=golang-dev
https://golang.org/cl/7741050