Brad has been asking for this for a while.
I have resisted because I wanted to find a more general way to
do this, one that would keep the performance of code introducing
variables the same as the performance of code that did not.
(See golang.org/issue/3512#c20).
I have not found the more general way, and recent changes to
remove ambiguously live temporaries have blown away the
property I was trying to preserve, so that's no longer a reason
not to make the change.
Fixes #3512.
LGTM=iant
R=iant
CC=bradfitz, golang-codereviews, khr, r
https://golang.org/cl/83740044
Map iteration previously started from a random bucket, but walked each
bucket from the beginning. Now, iteration always starts from the first
bucket and walks each bucket starting at a random offset. For
performance, the random offset is selected at the start of iteration
and reused for each bucket.
Iteration over a map with 8 or fewer elements (a single bucket) is now
non-deterministic too, though there are only 8 possible iteration orders.
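A minimal sketch of the offset scheme (illustrative names, not the runtime's
iterator): one random offset is chosen up front and every bucket's 8 slots are
walked rotated by it.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // walkBucket visits the 8 slots of one bucket starting at a random
    // offset, so even a single-bucket map can come back in any of 8
    // orders. The offset is chosen once per iteration and reused for
    // every bucket, as described above.
    func walkBucket(slots [8]string, offset int, visit func(string)) {
        for i := 0; i < 8; i++ {
            visit(slots[(i+offset)&7]) // (i + offset) mod 8
        }
    }

    func main() {
        bucket := [8]string{"a", "b", "c", "d", "e", "f", "g", "h"}
        offset := rand.Intn(8) // picked once at the start of iteration
        walkBucket(bucket, offset, func(s string) { fmt.Print(s, " ") })
        fmt.Println()
    }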
Significant benchmark changes, on my OS X laptop (rough but consistent):
benchmark                old ns/op    new ns/op    delta
BenchmarkMapIter               128          121    -5.47%
BenchmarkMapIterEmpty         4.26         4.45    +4.46%
BenchmarkNewEmptyMap           114          111    -2.63%
Fixes #6719.
R=khr, bradfitz
CC=golang-codereviews
https://golang.org/cl/47370043
Some builders broke on this test; I'm guessing that was because
it didn't try hard enough to find a different iteration order.
Update #6719
R=dave
CC=golang-codereviews
https://golang.org/cl/47300043
Technically the spec does not guarantee that the iteration order is random,
but it is a property that we have consciously pursued, so it seems
right to verify that our implementation does indeed randomize it.
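A hedged sketch of the kind of check such a test can make (not the test
actually added by this CL): iterate the same small map many times and require
more than one distinct order.

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        m := map[int]bool{1: true, 2: true, 3: true, 4: true}
        seen := map[string]bool{}
        for i := 0; i < 100; i++ {
            var order []string
            for k := range m {
                order = append(order, fmt.Sprint(k))
            }
            seen[strings.Join(order, ",")] = true
        }
        // With randomized iteration this is almost always > 1; a real
        // test would retry harder before declaring failure.
        fmt.Println("distinct orders observed:", len(seen))
    }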
Update #6719.
R=khr, bradfitz
CC=golang-codereviews
https://golang.org/cl/47010043
If an iterator is started while a map is in the middle of a grow,
and the map has NaN keys, then those keys might get returned by
the iterator more than once. This fix makes the evacuation decision
deterministic and repeatable for NaN keys so each one gets returned
only once.
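For background on why NaN keys are special, a small illustration in ordinary
Go (not runtime internals): NaN != NaN, so every insert under a NaN key
creates a distinct entry, and iteration is the only way those entries can
ever be seen.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        m := map[float64]int{}
        m[math.NaN()] = 1
        m[math.NaN()] = 2 // a second, distinct entry: NaN != NaN

        fmt.Println(len(m)) // 2
        _, ok := m[math.NaN()]
        fmt.Println(ok) // false: lookup can never find a NaN key
        for k, v := range m {
            fmt.Println(k, v) // iteration still returns both entries
        }
    }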
R=golang-dev, r, khr, iant
CC=golang-dev
https://golang.org/cl/14367043
struct Hmap is the header for a map value.
CL 8377046 made flags a uint32 so that it could be updated atomically,
but that bumped the struct to 56 bytes, which allocates as 64 bytes (on amd64).
hash0 is initialized from runtime.fastrand1, which returns a uint32,
so the top 32 bits were always zero anyway. Declare it as a uint32
to reclaim 4 bytes and bring the Hmap size back down to a 48-byte allocation.
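The saving can be reproduced with a simplified stand-in for the header
(hypothetical field set, not the real Hmap): shrinking the seed from a word
to a uint32 drops the struct from the 64-byte to the 48-byte malloc size
class on amd64.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Illustrative before/after layouts, sized to match the description above.
    type hdrBefore struct {
        count      int
        flags      uint32
        hash0      uintptr // 8 bytes, but only the low 32 bits were ever set
        B          uint8
        buckets    unsafe.Pointer
        oldbuckets unsafe.Pointer
        nevacuate  uintptr
    }

    type hdrAfter struct {
        count      int
        flags      uint32
        hash0      uint32 // fastrand1 returns a uint32, so nothing is lost
        B          uint8
        buckets    unsafe.Pointer
        oldbuckets unsafe.Pointer
        nevacuate  uintptr
    }

    func main() {
        // On amd64: 56 bytes (which allocates as 64) vs 48 bytes.
        fmt.Println(unsafe.Sizeof(hdrBefore{}), unsafe.Sizeof(hdrAfter{}))
    }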
Fixes #5237.
R=golang-dev, khr, khr
CC=bradfitz, dvyukov, golang-dev
https://golang.org/cl/12034047
Use atomic operations on the flags field to make sure we aren't
losing a flag update during parallel map operations.
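A minimal sketch of the technique in Go (the runtime's map code is C at this
point; the flag names here are hypothetical): set a flag bit with a
compare-and-swap loop so that concurrent updates cannot be lost.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    const (
        iterator    uint32 = 1 << 0 // hypothetical flag bits
        oldIterator uint32 = 1 << 1
    )

    // setFlag ORs bit into *flags atomically; a plain *flags |= bit could
    // drop a concurrent update made between the load and the store.
    func setFlag(flags *uint32, bit uint32) {
        for {
            old := atomic.LoadUint32(flags)
            if old&bit != 0 || atomic.CompareAndSwapUint32(flags, old, old|bit) {
                return
            }
        }
    }

    func main() {
        var flags uint32
        var wg sync.WaitGroup
        for _, bit := range []uint32{iterator, oldIterator} {
            wg.Add(1)
            go func(b uint32) { defer wg.Done(); setFlag(&flags, b) }(bit)
        }
        wg.Wait()
        fmt.Printf("%02b\n", flags) // 11: neither update was lost
    }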
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/8377046
Doing grow work on reads is not thread-safe.
Changed the code to do grow work only on inserts and deletes.
This is a short-term fix; eventually we'll want to do
grow work in parallel to recover the space of the old
table.
Fixes #5120.
R=bradfitz, khr
CC=golang-dev
https://golang.org/cl/8242043
Motivated by garbage profiling in HTTP benchmarks. This
change means new empty maps are just one small allocation
(the HMap) instead of the HMap plus the relatively larger
h->buckets allocation. This helps maps that remain empty
throughout their lifetime.
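A toy sketch of the idea (not the runtime code): leave the bucket storage nil
until the first insert, so a map that stays empty pays only for its header.

    package main

    import "fmt"

    // lazyTable defers its bucket allocation until the first Put, the way
    // the change above defers h->buckets for a new empty map.
    type lazyTable struct {
        buckets []int // nil until something is inserted
    }

    func (t *lazyTable) Put(v int) {
        if t.buckets == nil {
            t.buckets = make([]int, 0, 8) // the first insert pays for the buckets
        }
        t.buckets = append(t.buckets, v)
    }

    func (t *lazyTable) Len() int { return len(t.buckets) } // fine on nil

    func main() {
        t := &lazyTable{} // one small allocation; no buckets yet
        fmt.Println(t.Len(), t.buckets == nil) // 0 true
        t.Put(42)
        fmt.Println(t.Len(), t.buckets == nil) // 1 false
    }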
benchmark               old ns/op    new ns/op    delta
BenchmarkNewEmptyMap          196          107    -45.41%

benchmark               old allocs   new allocs   delta
BenchmarkNewEmptyMap            2            1    -50.00%

benchmark               old bytes    new bytes    delta
BenchmarkNewEmptyMap          195           50    -74.36%
R=khr, golang-dev, r
CC=golang-dev
https://golang.org/cl/7722046
The hash table is arranged as an array of 8-entry buckets with
chained overflow. Each bucket stores 8 extra hash bits per key to
provide quick lookup within a bucket. The table is grown incrementally.
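A simplified picture of that bucket layout (fixed-size slots purely for
illustration; the real runtime sizes key and value slots from the map's
actual types):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // bucket sketches one 8-entry bucket: 8 tophash bytes (the extra hash
    // bits used for quick in-bucket lookup), 8 key slots, 8 value slots,
    // and a pointer to a chained overflow bucket.
    type bucket struct {
        tophash  [8]uint8
        keys     [8]uint64
        values   [8]uint64
        overflow *bucket
    }

    func main() {
        fmt.Println(unsafe.Sizeof(bucket{})) // one contiguous chunk per bucket
    }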
Update #3885
Go time drops from 0.51s to 0.34s.
R=r, rsc, m3b, dave, bradfitz, khr, ugorji, remyoudompheng
CC=golang-dev
https://golang.org/cl/7504044