mirror of https://github.com/golang/go.git

commit 80bc512449
[dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch

Semi-regular merge from tip to dev.ssa.

Change-Id: If7d2269f267bcbc0ecd3a483d349951044470e3f
@@ -621,6 +621,15 @@ These modes accept only 1, 2, 4, and 8 as scale factors.
 </ul>
+
+<p>
+When using the compiler and assembler's
+<code>-dynlink</code> or <code>-shared</code> modes,
+any load or store of a fixed memory location such as a global variable
+must be assumed to overwrite <code>CX</code>.
+Therefore, to be safe for use with these modes,
+assembly sources should typically avoid CX except between memory references.
+</p>
 
 <h3 id="amd64">64-bit Intel 386 (a.k.a. amd64)</h3>
 
 <p>
@@ -198,9 +198,13 @@ prints help text, not an error.
 </p>
 
 <p>
-Note to Git aficionados: The <code>git-codereview</code> command is not required to
+<b>Note to Git aficionados:</b>
+The <code>git-codereview</code> command is not required to
 upload and manage Gerrit code reviews. For those who prefer plain Git, the text
-below gives the Git equivalent of each git-codereview command. If you do use plain
+below gives the Git equivalent of each git-codereview command.
+</p>
+
+<p>If you do use plain
 Git, note that you still need the commit hooks that the git-codereview command
 configures; those hooks add a Gerrit <code>Change-Id</code> line to the commit
 message and check that all Go source files have been formatted with gofmt. Even
@@ -208,6 +212,12 @@ if you intend to use plain Git for daily work, install the hooks in a new Git
 checkout by running <code>git-codereview</code> <code>hooks</code>.
 </p>
 
+<p>
+The workflow described below assumes a single change per branch.
+It is also possible to prepare a sequence of (usually related) changes in a single branch.
+See the <a href="https://golang.org/x/review/git-codereview">git-codereview documentation</a> for details.
+</p>
+
 <h3 id="git-config">Set up git aliases</h3>
 
 <p>
@@ -30,6 +30,13 @@ to fix critical security problems in both Go 1.4 and Go 1.5 as they arise.
 See the <a href="/security">security policy</a> for more details.
 </p>
 
+<h2 id="go1.6">go1.6 (released 2016/02/17)</h2>
+
+<p>
+Go 1.6 is a major release of Go.
+Read the <a href="/doc/go1.6">Go 1.6 Release Notes</a> for more information.
+</p>
+
 <h2 id="go1.5">go1.5 (released 2015/08/19)</h2>
 
 <p>
@@ -1,5 +1,5 @@
 <!--{
-	"Title": "Go 1.6 Release Notes DRAFT",
+	"Title": "Go 1.6 Release Notes",
 	"Path": "/doc/go1.6",
 	"Template": true
 }-->
@@ -13,13 +13,6 @@ Edit .,s;^([a-z][A-Za-z0-9_/]+)\.([A-Z][A-Za-z0-9_]+\.)?([A-Z][A-Za-z0-9_]+)([ .
 ul li { margin: 0.5em 0; }
 </style>
 
-<p>
-<i>NOTE: This is a DRAFT of the Go 1.6 release notes, prepared for the Go 1.6 beta.
-Go 1.6 has NOT yet been released.
-By our regular schedule, it is expected some time in February 2016.
-</i>
-</p>
-
 <h2 id="introduction">Introduction to Go 1.6</h2>
 
 <p>
@@ -70,9 +63,12 @@ On NaCl, Go 1.5 required SDK version pepper-41.
 Go 1.6 adds support for later SDK versions.
 </p>
 
-<pre>
-TODO: CX no longer available on 386 assembly? (https://golang.org/cl/16386)
-</pre>
+<p>
+On 32-bit x86 systems using the <code>-dynlink</code> or <code>-shared</code> compilation modes,
+the register CX is now overwritten by certain memory references and should
+be avoided in hand-written assembly.
+See the <a href="/doc/asm#x86">assembly documentation</a> for details.
+</p>
 
 <h2 id="tools">Tools</h2>
 
@@ -248,7 +244,7 @@ Some programs may run faster, some slower.
 On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.6
 than they did in Go 1.5.
 The garbage collector's pauses are even lower than in Go 1.5,
-although the effect is likely only noticeable for programs using
+especially for programs using
 a large amount of memory.
 </p>
 
@@ -569,7 +565,7 @@ The <a href="/pkg/debug/elf/"><code>debug/elf</code></a> package
 adds support for general compressed ELF sections.
 User code needs no updating: the sections are decompressed automatically when read.
 However, compressed
-<a href="/pkg/debug/elf/#Section"><code>Section</code></a>'s do not support random access:
+<a href="/pkg/debug/elf/#Section"><code>Sections</code></a> do not support random access:
 they have a nil <code>ReaderAt</code> field.
 </li>
 
@@ -632,7 +628,6 @@ In previous releases, the argument to <code>*</code> was required to have type <
 Also in the <a href="/pkg/fmt/"><code>fmt</code></a> package,
 <a href="/pkg/fmt/#Scanf"><code>Scanf</code></a> can now scan hexadecimal strings using %X, as an alias for %x.
 Both formats accept any mix of upper- and lower-case hexadecimal.
-<a href="https://golang.org/issues/13585">TODO: Keep?</a>
 </li>
 
 <li>
@@ -717,9 +712,6 @@ Second, DNS lookup functions such as
 <a href="/pkg/net/#LookupAddr"><code>LookupAddr</code></a>
 now return rooted domain names (with a trailing dot)
 on Plan 9 and Windows, to match the behavior of Go on Unix systems.
-TODO: Third, lookups satisfied from /etc/hosts now add a trailing dot as well,
-so that looking up 127.0.0.1 typically now returns “localhost.” not “localhost”.
-This is arguably a mistake but is not yet fixed. See https://golang.org/issue/13564.
 </li>
 
 <li>
|
@ -0,0 +1,14 @@
|
|||
Tools:
|
||||
|
||||
cmd/go: GO15VENDOREXPERIMENT gone, assumed on (CL 19615)
|
||||
cmd/link: "-X name value" form gone (CL 19614)
|
||||
|
||||
Ports:
|
||||
|
||||
SOMETHING WILL HAPPEN
|
||||
|
||||
API additions and behavior changes:
|
||||
|
||||
SOMETHING WILL HAPPEN
|
||||
|
||||
|
||||
|
|
@ -1,6 +1,6 @@
|
|||
<!--{
|
||||
"Title": "The Go Programming Language Specification",
|
||||
"Subtitle": "Version of January 5, 2016",
|
||||
"Subtitle": "Version of February 23, 2016",
|
||||
"Path": "/ref/spec"
|
||||
}-->
|
||||
|
||||
|
|
@ -2443,9 +2443,8 @@ PrimaryExpr =
|
|||
|
||||
Selector = "." identifier .
|
||||
Index = "[" Expression "]" .
|
||||
Slice = "[" ( [ Expression ] ":" [ Expression ] ) |
|
||||
( [ Expression ] ":" Expression ":" Expression )
|
||||
"]" .
|
||||
Slice = "[" [ Expression ] ":" [ Expression ] "]" |
|
||||
"[" [ Expression ] ":" Expression ":" Expression "]" .
|
||||
TypeAssertion = "." "(" Type ")" .
|
||||
Arguments = "(" [ ( ExpressionList | Type [ "," ExpressionList ] ) [ "..." ] [ "," ] ] ")" .
|
||||
</pre>
|
||||
|
|
|
|||
|
|
@ -0,0 +1,13 @@
|
|||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Issue 13930. Test that cgo's multiple-value special form for
|
||||
// C function calls works in variable declaration statements.
|
||||
|
||||
package cgotest
|
||||
|
||||
// #include <stdlib.h>
|
||||
import "C"
|
||||
|
||||
var _, _ = C.abs(0)
|
||||
|
|
@@ -27,6 +27,18 @@ go src=..
 	internal
 		objfile
 			objfile.go
+	unvendor
+		golang.org
+			x
+				arch
+					arm
+						armasm
+							testdata
+								+
+					x86
+						x86asm
+							testdata
+								+
 	gofmt
 		gofmt.go
 		gofmt_test.go
@@ -35,18 +47,6 @@ go src=..
 	newlink
 		testdata
 			+
-	vendor
-		golang.org
-			x
-				arch
-					arm
-						armasm
-							testdata
-								+
-					x86
-						x86asm
-							testdata
-								+
 	archive
 		tar
 			testdata
|||
|
|
@ -52,7 +52,7 @@ func (w *Writer) Flush() error {
|
|||
}
|
||||
|
||||
// Close finishes writing the zip file by writing the central directory.
|
||||
// It does not (and can not) close the underlying writer.
|
||||
// It does not (and cannot) close the underlying writer.
|
||||
func (w *Writer) Close() error {
|
||||
if w.last != nil && !w.last.closed {
|
||||
if err := w.last.close(); err != nil {
|
||||
|
|
|
|||
|
|
@ -17,7 +17,7 @@ import (
|
|||
type Buffer struct {
|
||||
buf []byte // contents are the bytes buf[off : len(buf)]
|
||||
off int // read at &buf[off], write at &buf[len(buf)]
|
||||
runeBytes [utf8.UTFMax]byte // avoid allocation of slice on each WriteByte or Rune
|
||||
runeBytes [utf8.UTFMax]byte // avoid allocation of slice on each call to WriteRune
|
||||
bootstrap [64]byte // memory to hold first slice; helps small buffers (Printf) avoid allocation.
|
||||
lastRead readOp // last read operation, so that Unread* can work correctly.
|
||||
}
|
||||
|
|
|
|||
|
|
@ -178,7 +178,7 @@ func BenchmarkAll(b *testing.B) {
|
|||
for _, context := range contexts {
|
||||
w := NewWalker(context, filepath.Join(build.Default.GOROOT, "src"))
|
||||
for _, name := range pkgNames {
|
||||
if name != "unsafe" && !strings.HasPrefix(name, "cmd/") {
|
||||
if name != "unsafe" && !strings.HasPrefix(name, "cmd/") && !internalPkg.MatchString(name) {
|
||||
pkg, _ := w.Import(name)
|
||||
w.export(pkg)
|
||||
}
|
||||
|
|
|
|||
|
|
@ -162,8 +162,6 @@ func archX86(linkArch *obj.LinkArch) *Arch {
|
|||
instructions["MOVDQ2Q"] = x86.AMOVQ
|
||||
instructions["MOVNTDQ"] = x86.AMOVNTO
|
||||
instructions["MOVOA"] = x86.AMOVO
|
||||
instructions["PF2ID"] = x86.APF2IL
|
||||
instructions["PI2FD"] = x86.API2FL
|
||||
instructions["PSLLDQ"] = x86.APSLLO
|
||||
instructions["PSRLDQ"] = x86.APSRLO
|
||||
instructions["PADDD"] = x86.APADDL
|
||||
|
|
|
|||
|
|
@ -447,7 +447,11 @@ func (f *File) walk(x interface{}, context string, visit func(*File, interface{}
|
|||
case *ast.ImportSpec:
|
||||
case *ast.ValueSpec:
|
||||
f.walk(&n.Type, "type", visit)
|
||||
f.walk(n.Values, "expr", visit)
|
||||
if len(n.Names) == 2 && len(n.Values) == 1 {
|
||||
f.walk(&n.Values[0], "as2", visit)
|
||||
} else {
|
||||
f.walk(n.Values, "expr", visit)
|
||||
}
|
||||
case *ast.TypeSpec:
|
||||
f.walk(&n.Type, "type", visit)
|
||||
|
||||
|
|
|
|||
|
|
@ -133,7 +133,7 @@ C's union types are represented as a Go byte array with the same length.
|
|||
|
||||
Go structs cannot embed fields with C types.
|
||||
|
||||
Go code can not refer to zero-sized fields that occur at the end of
|
||||
Go code cannot refer to zero-sized fields that occur at the end of
|
||||
non-empty C structs. To get the address of such a field (which is the
|
||||
only operation you can do with a zero-sized field) you must take the
|
||||
address of the struct and add the size of the struct.
|
||||
|
|
@ -148,8 +148,9 @@ assignment context to retrieve both the return value (if any) and the
|
|||
C errno variable as an error (use _ to skip the result value if the
|
||||
function returns void). For example:
|
||||
|
||||
n, err := C.sqrt(-1)
|
||||
n, err = C.sqrt(-1)
|
||||
_, err := C.voidFunc()
|
||||
var n, err = C.sqrt(1)
|
||||
|
||||
Calling C function pointers is currently not supported, however you can
|
||||
declare Go variables which hold C function pointers and pass them
|
||||
|
|
|
|||
|
|
@ -432,7 +432,7 @@ func (p *Package) loadDWARF(f *File, names []*Name) {
|
|||
fmt.Fprintf(&b, "\t0,\n")
|
||||
}
|
||||
}
|
||||
// for the last entry, we can not use 0, otherwise
|
||||
// for the last entry, we cannot use 0, otherwise
|
||||
// in case all __cgodebug_data is zero initialized,
|
||||
// LLVM-based gcc will place the it in the __DATA.__common
|
||||
// zero-filled section (our debug/macho doesn't support
|
||||
|
|
@ -2025,7 +2025,7 @@ func (c *typeConv) Struct(dt *dwarf.StructType, pos token.Pos) (expr *ast.Struct
|
|||
// We can't permit that, because then the size of the Go
|
||||
// struct will not be the same as the size of the C struct.
|
||||
// Our only option in such a case is to remove the field,
|
||||
// which means that it can not be referenced from Go.
|
||||
// which means that it cannot be referenced from Go.
|
||||
for off > 0 && sizes[len(sizes)-1] == 0 {
|
||||
n := len(sizes)
|
||||
fld = fld[0 : n-1]
|
||||
|
|
|
|||
|
|
@ -8,6 +8,7 @@ import (
|
|||
"bytes"
|
||||
"fmt"
|
||||
"go/token"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
)
|
||||
|
|
@@ -16,6 +17,43 @@ import (
 // It returns the output to standard output and standard error.
 // ok indicates whether the command exited successfully.
 func run(stdin []byte, argv []string) (stdout, stderr []byte, ok bool) {
+	if i := find(argv, "-xc"); i >= 0 && argv[len(argv)-1] == "-" {
+		// Some compilers have trouble with standard input.
+		// Others have trouble with -xc.
+		// Avoid both problems by writing a file with a .c extension.
+		f, err := ioutil.TempFile("", "cgo-gcc-input-")
+		if err != nil {
+			fatalf("%s", err)
+		}
+		name := f.Name()
+		f.Close()
+		if err := ioutil.WriteFile(name+".c", stdin, 0666); err != nil {
+			os.Remove(name)
+			fatalf("%s", err)
+		}
+		defer os.Remove(name)
+		defer os.Remove(name + ".c")
+
+		// Build new argument list without -xc and trailing -.
+		new := append(argv[:i:i], argv[i+1:len(argv)-1]...)
+
+		// Since we are going to write the file to a temporary directory,
+		// we will need to add -I . explicitly to the command line:
+		// any #include "foo" before would have looked in the current
+		// directory as the directory "holding" standard input, but now
+		// the temporary directory holds the input.
+		// We've also run into compilers that reject "-I." but allow "-I", ".",
+		// so be sure to use two arguments.
+		// This matters mainly for people invoking cgo -godefs by hand.
+		new = append(new, "-I", ".")
+
+		// Finish argument list with path to C file.
+		new = append(new, name+".c")
+
+		argv = new
+		stdin = nil
+	}
+
 	p := exec.Command(argv[0], argv[1:]...)
 	p.Stdin = bytes.NewReader(stdin)
 	var bout, berr bytes.Buffer
@@ -30,6 +68,15 @@ func run(stdin []byte, argv []string) (stdout, stderr []byte, ok bool) {
 	return
 }
 
+func find(argv []string, target string) int {
+	for i, arg := range argv {
+		if arg == target {
+			return i
+		}
+	}
+	return -1
+}
+
 func lineno(pos token.Pos) string {
 	return fset.Position(pos).String()
 }
|
|||
|
|
@ -98,11 +98,11 @@ var progtable = [arm.ALAST]obj.ProgInfo{
|
|||
arm.AMOVH: {Flags: gc.SizeW | gc.LeftRead | gc.RightWrite | gc.Move},
|
||||
arm.AMOVW: {Flags: gc.SizeL | gc.LeftRead | gc.RightWrite | gc.Move},
|
||||
|
||||
// In addtion, duffzero reads R0,R1 and writes R1. This fact is
|
||||
// In addition, duffzero reads R0,R1 and writes R1. This fact is
|
||||
// encoded in peep.c
|
||||
obj.ADUFFZERO: {Flags: gc.Call},
|
||||
|
||||
// In addtion, duffcopy reads R1,R2 and writes R0,R1,R2. This fact is
|
||||
// In addition, duffcopy reads R1,R2 and writes R0,R1,R2. This fact is
|
||||
// encoded in peep.c
|
||||
obj.ADUFFCOPY: {Flags: gc.Call},
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,65 @@
|
|||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package big_test
|
||||
|
||||
import (
|
||||
"cmd/compile/internal/big"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// Use the classic continued fraction for e
|
||||
// e = [1; 0, 1, 1, 2, 1, 1, ... 2n, 1, 1, ...]
|
||||
// i.e., for the nth term, use
|
||||
// 1 if n mod 3 != 1
|
||||
// (n-1)/3 * 2 if n mod 3 == 1
|
||||
func recur(n, lim int64) *big.Rat {
|
||||
term := new(big.Rat)
|
||||
if n%3 != 1 {
|
||||
term.SetInt64(1)
|
||||
} else {
|
||||
term.SetInt64((n - 1) / 3 * 2)
|
||||
}
|
||||
|
||||
if n > lim {
|
||||
return term
|
||||
}
|
||||
|
||||
// Directly initialize frac as the fractional
|
||||
// inverse of the result of recur.
|
||||
frac := new(big.Rat).Inv(recur(n+1, lim))
|
||||
|
||||
return term.Add(term, frac)
|
||||
}
|
||||
|
||||
// This example demonstrates how to use big.Rat to compute the
|
||||
// first 15 terms in the sequence of rational convergents for
|
||||
// the constant e (base of natural logarithm).
|
||||
func Example_eConvergents() {
|
||||
for i := 1; i <= 15; i++ {
|
||||
r := recur(0, int64(i))
|
||||
|
||||
// Print r both as a fraction and as a floating-point number.
|
||||
// Since big.Rat implements fmt.Formatter, we can use %-13s to
|
||||
// get a left-aligned string representation of the fraction.
|
||||
fmt.Printf("%-13s = %s\n", r, r.FloatString(8))
|
||||
}
|
||||
|
||||
// Output:
|
||||
// 2/1 = 2.00000000
|
||||
// 3/1 = 3.00000000
|
||||
// 8/3 = 2.66666667
|
||||
// 11/4 = 2.75000000
|
||||
// 19/7 = 2.71428571
|
||||
// 87/32 = 2.71875000
|
||||
// 106/39 = 2.71794872
|
||||
// 193/71 = 2.71830986
|
||||
// 1264/465 = 2.71827957
|
||||
// 1457/536 = 2.71828358
|
||||
// 2721/1001 = 2.71828172
|
||||
// 23225/8544 = 2.71828184
|
||||
// 25946/9545 = 2.71828182
|
||||
// 49171/18089 = 2.71828183
|
||||
// 517656/190435 = 2.71828183
|
||||
}
|
||||
|
|
@@ -0,0 +1,33 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements encoding/decoding of Floats.
+
+package big
+
+import "fmt"
+
+// MarshalText implements the encoding.TextMarshaler interface.
+// Only the Float value is marshaled (in full precision), other
+// attributes such as precision or accuracy are ignored.
+func (x *Float) MarshalText() (text []byte, err error) {
+	if x == nil {
+		return []byte("<nil>"), nil
+	}
+	var buf []byte
+	return x.Append(buf, 'g', -1), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+// The result is rounded per the precision and rounding mode of z.
+// If z's precision is 0, it is changed to 64 before rounding takes
+// effect.
+func (z *Float) UnmarshalText(text []byte) error {
+	// TODO(gri): get rid of the []byte/string conversion
+	_, _, err := z.Parse(string(text), 0)
+	if err != nil {
+		err = fmt.Errorf("math/big: cannot unmarshal %q into a *big.Float (%v)", text, err)
+	}
+	return err
+}
@@ -0,0 +1,54 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big
+
+import (
+	"encoding/json"
+	"testing"
+)
+
+var floatVals = []string{
+	"0",
+	"1",
+	"0.1",
+	"2.71828",
+	"1234567890",
+	"3.14e1234",
+	"3.14e-1234",
+	"0.738957395793475734757349579759957975985497e100",
+	"0.73895739579347546656564656573475734957975995797598589749859834759476745986795497e100",
+	"inf",
+	"Inf",
+}
+
+func TestFloatJSONEncoding(t *testing.T) {
+	for _, test := range floatVals {
+		for _, sign := range []string{"", "+", "-"} {
+			for _, prec := range []uint{0, 1, 2, 10, 53, 64, 100, 1000} {
+				x := sign + test
+				var tx Float
+				_, _, err := tx.SetPrec(prec).Parse(x, 0)
+				if err != nil {
+					t.Errorf("parsing of %s (prec = %d) failed (invalid test case): %v", x, prec, err)
+					continue
+				}
+				b, err := json.Marshal(&tx)
+				if err != nil {
+					t.Errorf("marshaling of %v (prec = %d) failed: %v", &tx, prec, err)
+					continue
+				}
+				var rx Float
+				rx.SetPrec(prec)
+				if err := json.Unmarshal(b, &rx); err != nil {
+					t.Errorf("unmarshaling of %v (prec = %d) failed: %v", &tx, prec, err)
+					continue
+				}
+				if rx.Cmp(&tx) != 0 {
+					t.Errorf("JSON encoding of %v (prec = %d) failed: got %v want %v", &tx, prec, &rx, &tx)
+				}
+			}
+		}
+	}
+}
|
@ -0,0 +1,74 @@
|
|||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This file implements encoding/decoding of Ints.
|
||||
|
||||
package big
|
||||
|
||||
import "fmt"
|
||||
|
||||
// Gob codec version. Permits backward-compatible changes to the encoding.
|
||||
const intGobVersion byte = 1
|
||||
|
||||
// GobEncode implements the gob.GobEncoder interface.
|
||||
func (x *Int) GobEncode() ([]byte, error) {
|
||||
if x == nil {
|
||||
return nil, nil
|
||||
}
|
||||
buf := make([]byte, 1+len(x.abs)*_S) // extra byte for version and sign bit
|
||||
i := x.abs.bytes(buf) - 1 // i >= 0
|
||||
b := intGobVersion << 1 // make space for sign bit
|
||||
if x.neg {
|
||||
b |= 1
|
||||
}
|
||||
buf[i] = b
|
||||
return buf[i:], nil
|
||||
}
|
||||
|
||||
// GobDecode implements the gob.GobDecoder interface.
|
||||
func (z *Int) GobDecode(buf []byte) error {
|
||||
if len(buf) == 0 {
|
||||
// Other side sent a nil or default value.
|
||||
*z = Int{}
|
||||
return nil
|
||||
}
|
||||
b := buf[0]
|
||||
if b>>1 != intGobVersion {
|
||||
return fmt.Errorf("Int.GobDecode: encoding version %d not supported", b>>1)
|
||||
}
|
||||
z.neg = b&1 != 0
|
||||
z.abs = z.abs.setBytes(buf[1:])
|
||||
return nil
|
||||
}
|
||||
|
||||
// MarshalText implements the encoding.TextMarshaler interface.
|
||||
func (x *Int) MarshalText() (text []byte, err error) {
|
||||
if x == nil {
|
||||
return []byte("<nil>"), nil
|
||||
}
|
||||
return x.abs.itoa(x.neg, 10), nil
|
||||
}
|
||||
|
||||
// UnmarshalText implements the encoding.TextUnmarshaler interface.
|
||||
func (z *Int) UnmarshalText(text []byte) error {
|
||||
// TODO(gri): get rid of the []byte/string conversion
|
||||
if _, ok := z.SetString(string(text), 0); !ok {
|
||||
return fmt.Errorf("math/big: cannot unmarshal %q into a *big.Int", text)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// The JSON marshallers are only here for API backward compatibility
|
||||
// (programs that explicitly look for these two methods). JSON works
|
||||
// fine with the TextMarshaler only.
|
||||
|
||||
// MarshalJSON implements the json.Marshaler interface.
|
||||
func (x *Int) MarshalJSON() ([]byte, error) {
|
||||
return x.MarshalText()
|
||||
}
|
||||
|
||||
// UnmarshalJSON implements the json.Unmarshaler interface.
|
||||
func (z *Int) UnmarshalJSON(text []byte) error {
|
||||
return z.UnmarshalText(text)
|
||||
}
|
||||
|
|
@ -0,0 +1,121 @@
|
|||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package big
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/gob"
|
||||
"encoding/json"
|
||||
"encoding/xml"
|
||||
"testing"
|
||||
)
|
||||
|
||||
var encodingTests = []string{
|
||||
"0",
|
||||
"1",
|
||||
"2",
|
||||
"10",
|
||||
"1000",
|
||||
"1234567890",
|
||||
"298472983472983471903246121093472394872319615612417471234712061",
|
||||
}
|
||||
|
||||
func TestIntGobEncoding(t *testing.T) {
|
||||
var medium bytes.Buffer
|
||||
enc := gob.NewEncoder(&medium)
|
||||
dec := gob.NewDecoder(&medium)
|
||||
for _, test := range encodingTests {
|
||||
for _, sign := range []string{"", "+", "-"} {
|
||||
x := sign + test
|
||||
medium.Reset() // empty buffer for each test case (in case of failures)
|
||||
var tx Int
|
||||
tx.SetString(x, 10)
|
||||
if err := enc.Encode(&tx); err != nil {
|
||||
t.Errorf("encoding of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
var rx Int
|
||||
if err := dec.Decode(&rx); err != nil {
|
||||
t.Errorf("decoding of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
if rx.Cmp(&tx) != 0 {
|
||||
t.Errorf("transmission of %s failed: got %s want %s", &tx, &rx, &tx)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Sending a nil Int pointer (inside a slice) on a round trip through gob should yield a zero.
|
||||
// TODO: top-level nils.
|
||||
func TestGobEncodingNilIntInSlice(t *testing.T) {
|
||||
buf := new(bytes.Buffer)
|
||||
enc := gob.NewEncoder(buf)
|
||||
dec := gob.NewDecoder(buf)
|
||||
|
||||
var in = make([]*Int, 1)
|
||||
err := enc.Encode(&in)
|
||||
if err != nil {
|
||||
t.Errorf("gob encode failed: %q", err)
|
||||
}
|
||||
var out []*Int
|
||||
err = dec.Decode(&out)
|
||||
if err != nil {
|
||||
t.Fatalf("gob decode failed: %q", err)
|
||||
}
|
||||
if len(out) != 1 {
|
||||
t.Fatalf("wrong len; want 1 got %d", len(out))
|
||||
}
|
||||
var zero Int
|
||||
if out[0].Cmp(&zero) != 0 {
|
||||
t.Fatalf("transmission of (*Int)(nil) failed: got %s want 0", out)
|
||||
}
|
||||
}
|
||||
|
||||
func TestIntJSONEncoding(t *testing.T) {
|
||||
for _, test := range encodingTests {
|
||||
for _, sign := range []string{"", "+", "-"} {
|
||||
x := sign + test
|
||||
var tx Int
|
||||
tx.SetString(x, 10)
|
||||
b, err := json.Marshal(&tx)
|
||||
if err != nil {
|
||||
t.Errorf("marshaling of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
var rx Int
|
||||
if err := json.Unmarshal(b, &rx); err != nil {
|
||||
t.Errorf("unmarshaling of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
if rx.Cmp(&tx) != 0 {
|
||||
t.Errorf("JSON encoding of %s failed: got %s want %s", &tx, &rx, &tx)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIntXMLEncoding(t *testing.T) {
|
||||
for _, test := range encodingTests {
|
||||
for _, sign := range []string{"", "+", "-"} {
|
||||
x := sign + test
|
||||
var tx Int
|
||||
tx.SetString(x, 0)
|
||||
b, err := xml.Marshal(&tx)
|
||||
if err != nil {
|
||||
t.Errorf("marshaling of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
var rx Int
|
||||
if err := xml.Unmarshal(b, &rx); err != nil {
|
||||
t.Errorf("unmarshaling of %s failed: %s", &tx, err)
|
||||
continue
|
||||
}
|
||||
if rx.Cmp(&tx) != 0 {
|
||||
t.Errorf("XML encoding of %s failed: got %s want %s", &tx, &rx, &tx)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -15,7 +15,7 @@ import (
|
|||
)
|
||||
|
||||
func ratTok(ch rune) bool {
|
||||
return strings.IndexRune("+-/0123456789.eE", ch) >= 0
|
||||
return strings.ContainsRune("+-/0123456789.eE", ch)
|
||||
}
|
||||
|
||||
// Scan is a support routine for fmt.Scanner. It accepts the formats
|
||||
|
|
@@ -25,7 +25,7 @@ func (z *Rat) Scan(s fmt.ScanState, ch rune) error {
 	if err != nil {
 		return err
 	}
-	if strings.IndexRune("efgEFGv", ch) < 0 {
+	if !strings.ContainsRune("efgEFGv", ch) {
 		return errors.New("Rat.Scan: invalid verb")
 	}
 	if _, ok := z.SetString(string(tok)); !ok {
@@ -0,0 +1,73 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements encoding/decoding of Rats.
+
+package big
+
+import (
+	"encoding/binary"
+	"errors"
+	"fmt"
+)
+
+// Gob codec version. Permits backward-compatible changes to the encoding.
+const ratGobVersion byte = 1
+
+// GobEncode implements the gob.GobEncoder interface.
+func (x *Rat) GobEncode() ([]byte, error) {
+	if x == nil {
+		return nil, nil
+	}
+	buf := make([]byte, 1+4+(len(x.a.abs)+len(x.b.abs))*_S) // extra bytes for version and sign bit (1), and numerator length (4)
+	i := x.b.abs.bytes(buf)
+	j := x.a.abs.bytes(buf[:i])
+	n := i - j
+	if int(uint32(n)) != n {
+		// this should never happen
+		return nil, errors.New("Rat.GobEncode: numerator too large")
+	}
+	binary.BigEndian.PutUint32(buf[j-4:j], uint32(n))
+	j -= 1 + 4
+	b := ratGobVersion << 1 // make space for sign bit
+	if x.a.neg {
+		b |= 1
+	}
+	buf[j] = b
+	return buf[j:], nil
+}
+
+// GobDecode implements the gob.GobDecoder interface.
+func (z *Rat) GobDecode(buf []byte) error {
+	if len(buf) == 0 {
+		// Other side sent a nil or default value.
+		*z = Rat{}
+		return nil
+	}
+	b := buf[0]
+	if b>>1 != ratGobVersion {
+		return fmt.Errorf("Rat.GobDecode: encoding version %d not supported", b>>1)
+	}
+	const j = 1 + 4
+	i := j + binary.BigEndian.Uint32(buf[j-4:j])
+	z.a.neg = b&1 != 0
+	z.a.abs = z.a.abs.setBytes(buf[j:i])
+	z.b.abs = z.b.abs.setBytes(buf[i:])
+	return nil
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+func (x *Rat) MarshalText() (text []byte, err error) {
+	// TODO(gri): get rid of the []byte/string conversion
+	return []byte(x.RatString()), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+func (z *Rat) UnmarshalText(text []byte) error {
+	// TODO(gri): get rid of the []byte/string conversion
+	if _, ok := z.SetString(string(text)); !ok {
+		return fmt.Errorf("math/big: cannot unmarshal %q into a *big.Rat", text)
+	}
+	return nil
+}
@@ -0,0 +1,125 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package big
+
+import (
+	"bytes"
+	"encoding/gob"
+	"encoding/json"
+	"encoding/xml"
+	"testing"
+)
+
+func TestRatGobEncoding(t *testing.T) {
+	var medium bytes.Buffer
+	enc := gob.NewEncoder(&medium)
+	dec := gob.NewDecoder(&medium)
+	for _, test := range encodingTests {
+		medium.Reset() // empty buffer for each test case (in case of failures)
+		var tx Rat
+		tx.SetString(test + ".14159265")
+		if err := enc.Encode(&tx); err != nil {
+			t.Errorf("encoding of %s failed: %s", &tx, err)
+			continue
+		}
+		var rx Rat
+		if err := dec.Decode(&rx); err != nil {
+			t.Errorf("decoding of %s failed: %s", &tx, err)
+			continue
+		}
+		if rx.Cmp(&tx) != 0 {
+			t.Errorf("transmission of %s failed: got %s want %s", &tx, &rx, &tx)
+		}
+	}
+}
+
+// Sending a nil Rat pointer (inside a slice) on a round trip through gob should yield a zero.
+// TODO: top-level nils.
+func TestGobEncodingNilRatInSlice(t *testing.T) {
+	buf := new(bytes.Buffer)
+	enc := gob.NewEncoder(buf)
+	dec := gob.NewDecoder(buf)
+
+	var in = make([]*Rat, 1)
+	err := enc.Encode(&in)
+	if err != nil {
+		t.Errorf("gob encode failed: %q", err)
+	}
+	var out []*Rat
+	err = dec.Decode(&out)
+	if err != nil {
+		t.Fatalf("gob decode failed: %q", err)
+	}
+	if len(out) != 1 {
+		t.Fatalf("wrong len; want 1 got %d", len(out))
+	}
+	var zero Rat
+	if out[0].Cmp(&zero) != 0 {
+		t.Fatalf("transmission of (*Int)(nil) failed: got %s want 0", out)
+	}
+}
+
+var ratNums = []string{
+	"-141592653589793238462643383279502884197169399375105820974944592307816406286",
+	"-1415926535897932384626433832795028841971",
+	"-141592653589793",
+	"-1",
+	"0",
+	"1",
+	"141592653589793",
+	"1415926535897932384626433832795028841971",
+	"141592653589793238462643383279502884197169399375105820974944592307816406286",
+}
+
+var ratDenoms = []string{
+	"1",
+	"718281828459045",
+	"7182818284590452353602874713526624977572",
+	"718281828459045235360287471352662497757247093699959574966967627724076630353",
+}
+
+func TestRatJSONEncoding(t *testing.T) {
+	for _, num := range ratNums {
+		for _, denom := range ratDenoms {
+			var tx Rat
+			tx.SetString(num + "/" + denom)
+			b, err := json.Marshal(&tx)
+			if err != nil {
+				t.Errorf("marshaling of %s failed: %s", &tx, err)
+				continue
+			}
+			var rx Rat
+			if err := json.Unmarshal(b, &rx); err != nil {
+				t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+				continue
+			}
+			if rx.Cmp(&tx) != 0 {
+				t.Errorf("JSON encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+			}
+		}
+	}
+}
+
+func TestRatXMLEncoding(t *testing.T) {
+	for _, num := range ratNums {
+		for _, denom := range ratDenoms {
+			var tx Rat
+			tx.SetString(num + "/" + denom)
+			b, err := xml.Marshal(&tx)
+			if err != nil {
+				t.Errorf("marshaling of %s failed: %s", &tx, err)
+				continue
+			}
+			var rx Rat
+			if err := xml.Unmarshal(b, &rx); err != nil {
+				t.Errorf("unmarshaling of %s failed: %s", &tx, err)
+				continue
+			}
+			if rx.Cmp(&tx) != 0 {
+				t.Errorf("XML encoding of %s failed: got %s want %s", &tx, &rx, &tx)
+			}
+		}
+	}
+}
@@ -877,7 +877,7 @@ func (p *exporter) byte(b byte) {
// tracef is like fmt.Printf but it rewrites the format string
// to take care of indentation.
func (p *exporter) tracef(format string, args ...interface{}) {
	if strings.IndexAny(format, "<>\n") >= 0 {
	if strings.ContainsAny(format, "<>\n") {
		var buf bytes.Buffer
		for i := 0; i < len(format); i++ {
			// no need to deal with runes
@@ -1035,6 +1035,9 @@ func predeclared() []*Type {

			// package unsafe
			Types[TUNSAFEPTR],

			// any type, for builtin export data
			Types[TANY],
		}
	}
	return predecl
@@ -3,7 +3,7 @@
package gc

const runtimeimport = "" +
	"package runtime\n" +
	"package runtime safe\n" +
	"func @\"\".newobject (@\"\".typ·2 *byte) (? *any)\n" +
	"func @\"\".panicindex ()\n" +
	"func @\"\".panicslice ()\n" +

@@ -44,7 +44,7 @@ const runtimeimport = "" +
	"func @\"\".stringtoslicerune (? *[32]rune, ? string) (? []rune)\n" +
	"func @\"\".stringiter (? string, ? int) (? int)\n" +
	"func @\"\".stringiter2 (? string, ? int) (@\"\".retk·1 int, @\"\".retv·2 rune)\n" +
	"func @\"\".slicecopy (@\"\".to·2 any, @\"\".fr·3 any, @\"\".wid·4 uintptr) (? int)\n" +
	"func @\"\".slicecopy (@\"\".to·2 any, @\"\".fr·3 any, @\"\".wid·4 uintptr \"unsafe-uintptr\") (? int)\n" +
	"func @\"\".slicestringcopy (@\"\".to·2 any, @\"\".fr·3 any) (? int)\n" +
	"func @\"\".typ2Itab (@\"\".typ·2 *byte, @\"\".typ2·3 *byte, @\"\".cache·4 **byte) (@\"\".ret·1 *byte)\n" +
	"func @\"\".convI2E (@\"\".elem·2 any) (@\"\".ret·1 any)\n" +

@@ -66,8 +66,6 @@ const runtimeimport = "" +
	"func @\"\".panicdottype (@\"\".have·1 *byte, @\"\".want·2 *byte, @\"\".iface·3 *byte)\n" +
	"func @\"\".ifaceeq (@\"\".i1·2 any, @\"\".i2·3 any) (@\"\".ret·1 bool)\n" +
	"func @\"\".efaceeq (@\"\".i1·2 any, @\"\".i2·3 any) (@\"\".ret·1 bool)\n" +
	"func @\"\".ifacethash (@\"\".i1·2 any) (@\"\".ret·1 uint32)\n" +
	"func @\"\".efacethash (@\"\".i1·2 any) (@\"\".ret·1 uint32)\n" +
	"func @\"\".makemap (@\"\".mapType·2 *byte, @\"\".hint·3 int64, @\"\".mapbuf·4 *any, @\"\".bucketbuf·5 *any) (@\"\".hmap·1 map[any]any)\n" +
	"func @\"\".mapaccess1 (@\"\".mapType·2 *byte, @\"\".hmap·3 map[any]any, @\"\".key·4 *any) (@\"\".val·1 *any)\n" +
	"func @\"\".mapaccess1_fast32 (@\"\".mapType·2 *byte, @\"\".hmap·3 map[any]any, @\"\".key·4 any) (@\"\".val·1 *any)\n" +

@@ -91,31 +89,31 @@ const runtimeimport = "" +
	"func @\"\".writebarrierstring (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
	"func @\"\".writebarrierslice (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
	"func @\"\".writebarrieriface (@\"\".dst·1 *any, @\"\".src·2 any)\n" +
	"func @\"\".writebarrierfat01 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat10 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat11 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1000 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1001 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1010 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1011 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1100 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1101 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1110 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1111 (@\"\".dst·1 *any, _ uintptr, @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat01 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat10 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat11 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat0111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1000 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1001 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1010 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1011 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1100 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1101 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1110 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".writebarrierfat1111 (@\"\".dst·1 *any, _ uintptr \"unsafe-uintptr\", @\"\".src·3 any)\n" +
	"func @\"\".typedmemmove (@\"\".typ·1 *byte, @\"\".dst·2 *any, @\"\".src·3 *any)\n" +
	"func @\"\".typedslicecopy (@\"\".typ·2 *byte, @\"\".dst·3 any, @\"\".src·4 any) (? int)\n" +
	"func @\"\".selectnbsend (@\"\".chanType·2 *byte, @\"\".hchan·3 chan<- any, @\"\".elem·4 *any) (? bool)\n" +

@@ -131,9 +129,9 @@ const runtimeimport = "" +
	"func @\"\".makeslice (@\"\".typ·2 *byte, @\"\".nel·3 int64, @\"\".cap·4 int64) (@\"\".ary·1 []any)\n" +
	"func @\"\".growslice (@\"\".typ·2 *byte, @\"\".old·3 []any, @\"\".cap·4 int) (@\"\".ary·1 []any)\n" +
	"func @\"\".growslice_n (@\"\".typ·2 *byte, @\"\".old·3 []any, @\"\".n·4 int) (@\"\".ary·1 []any)\n" +
	"func @\"\".memmove (@\"\".to·1 *any, @\"\".frm·2 *any, @\"\".length·3 uintptr)\n" +
	"func @\"\".memclr (@\"\".ptr·1 *byte, @\"\".length·2 uintptr)\n" +
	"func @\"\".memequal (@\"\".x·2 *any, @\"\".y·3 *any, @\"\".size·4 uintptr) (? bool)\n" +
	"func @\"\".memmove (@\"\".to·1 *any, @\"\".frm·2 *any, @\"\".length·3 uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".memclr (@\"\".ptr·1 *byte, @\"\".length·2 uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".memequal (@\"\".x·2 *any, @\"\".y·3 *any, @\"\".size·4 uintptr \"unsafe-uintptr\") (? bool)\n" +
	"func @\"\".memequal8 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +
	"func @\"\".memequal16 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +
	"func @\"\".memequal32 (@\"\".x·2 *any, @\"\".y·3 *any) (? bool)\n" +

@@ -148,15 +146,14 @@ const runtimeimport = "" +
	"func @\"\".int64tofloat64 (? int64) (? float64)\n" +
	"func @\"\".uint64tofloat64 (? uint64) (? float64)\n" +
	"func @\"\".complex128div (@\"\".num·2 complex128, @\"\".den·3 complex128) (@\"\".quo·1 complex128)\n" +
	"func @\"\".racefuncenter (? uintptr)\n" +
	"func @\"\".racefuncenterfp (? *int32)\n" +
	"func @\"\".racefuncenter (? uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".racefuncexit ()\n" +
	"func @\"\".raceread (? uintptr)\n" +
	"func @\"\".racewrite (? uintptr)\n" +
	"func @\"\".racereadrange (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
	"func @\"\".racewriterange (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
	"func @\"\".msanread (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
	"func @\"\".msanwrite (@\"\".addr·1 uintptr, @\"\".size·2 uintptr)\n" +
	"func @\"\".raceread (? uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".racewrite (? uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".racereadrange (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".racewriterange (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".msanread (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
	"func @\"\".msanwrite (@\"\".addr·1 uintptr \"unsafe-uintptr\", @\"\".size·2 uintptr \"unsafe-uintptr\")\n" +
	"\n" +
	"$$\n"
@@ -8,7 +8,7 @@

// +build ignore

package PACKAGE
package runtime

// emitted by compiler, not referred to by go programs

@@ -83,8 +83,6 @@ func panicdottype(have, want, iface *byte)

func ifaceeq(i1 any, i2 any) (ret bool)
func efaceeq(i1 any, i2 any) (ret bool)
func ifacethash(i1 any) (ret uint32)
func efacethash(i1 any) (ret uint32)

// *byte is really *runtime.Type
func makemap(mapType *byte, hint int64, mapbuf *any, bucketbuf *any) (hmap map[any]any)

@@ -192,7 +190,6 @@ func complex128div(num complex128, den complex128) (quo complex128)

// race detection
func racefuncenter(uintptr)
func racefuncenterfp(*int32)
func racefuncexit()
func raceread(uintptr)
func racewrite(uintptr)
@@ -8,7 +8,7 @@

// +build ignore

package PACKAGE
package unsafe

type Pointer uintptr // not really; filled in by compiler
@@ -0,0 +1,31 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package gc_test

import (
	"bytes"
	"internal/testenv"
	"io/ioutil"
	"os/exec"
	"testing"
)

func TestBuiltin(t *testing.T) {
	testenv.MustHaveGoRun(t)

	old, err := ioutil.ReadFile("builtin.go")
	if err != nil {
		t.Fatal(err)
	}

	new, err := exec.Command("go", "run", "mkbuiltin.go", "-stdout").Output()
	if err != nil {
		t.Fatal(err)
	}

	if !bytes.Equal(old, new) {
		t.Fatal("builtin.go out of date; run mkbuiltin.go")
	}
}
@@ -576,6 +576,12 @@ func esc(e *EscState, n *Node, up *Node) {
	if n == nil {
		return
	}
	if n.Type != nil && n.Type.Etype == TFIELD {
		// This is the left side of x:y in a struct literal.
		// x is syntax, not an expression.
		// See #14405.
		return
	}

	lno := int(setlineno(n))

@@ -602,9 +608,10 @@ func esc(e *EscState, n *Node, up *Node) {

	// Big stuff escapes unconditionally
	// "Big" conditions that were scattered around in walk have been gathered here
	if n.Esc != EscHeap && n.Type != nil && (n.Type.Width > MaxStackVarSize ||
		n.Op == ONEW && n.Type.Type.Width >= 1<<16 ||
		n.Op == OMAKESLICE && !isSmallMakeSlice(n)) {
	if n.Esc != EscHeap && n.Type != nil &&
		(n.Type.Width > MaxStackVarSize ||
			n.Op == ONEW && n.Type.Type.Width >= 1<<16 ||
			n.Op == OMAKESLICE && !isSmallMakeSlice(n)) {
		if Debug['m'] > 1 {
			Warnl(int(n.Lineno), "%v is too large for stack", n)
		}

@@ -962,7 +969,7 @@ func escassign(e *EscState, dst *Node, src *Node) {
		dst = &e.theSink
	}

	case ODOT: // treat "dst.x = src" as "dst = src"
	case ODOT: // treat "dst.x = src" as "dst = src"
		escassign(e, dst.Left, src)

		return

@@ -1042,7 +1049,6 @@ func escassign(e *EscState, dst *Node, src *Node) {
		ODOTMETH,
		// treat recv.meth as a value with recv in it, only happens in ODEFER and OPROC
		// iface.method already leaks iface in esccall, no need to put in extra ODOTINTER edge here
		ODOTTYPE,
		ODOTTYPE2,
		OSLICE,
		OSLICE3,

@@ -1052,6 +1058,12 @@ func escassign(e *EscState, dst *Node, src *Node) {
		// Conversions, field access, slice all preserve the input value.
		escassign(e, dst, src.Left)

	case ODOTTYPE:
		if src.Type != nil && !haspointers(src.Type) {
			break
		}
		escassign(e, dst, src.Left)

	case OAPPEND:
		// Append returns first argument.
		// Subsequent arguments are already leaked because they are operands to append.

@@ -1549,9 +1561,9 @@ func escflows(e *EscState, dst *Node, src *Node) {
// finding an OADDR just means we're following the upstream of a dereference,
// so this address doesn't leak (yet).
// If level == 0, it means the /value/ of this node can reach the root of this flood.
// so if this node is an OADDR, it's argument should be marked as escaping iff
// it's currfn/e->loopdepth are different from the flood's root.
// Once an object has been moved to the heap, all of it's upstream should be considered
// so if this node is an OADDR, its argument should be marked as escaping iff
// its currfn/e->loopdepth are different from the flood's root.
// Once an object has been moved to the heap, all of its upstream should be considered
// escaping to the global scope.
func escflood(e *EscState, dst *Node) {
	switch dst.Op {
@@ -442,7 +442,7 @@ func importsym(s *Sym, op Op) *Sym {

	// mark the symbol so it is not reexported
	if s.Def == nil {
		if exportname(s.Name) || initname(s.Name) {
		if Debug['A'] != 0 || exportname(s.Name) || initname(s.Name) {
			s.Flags |= SymExport
		} else {
			s.Flags |= SymPackage // package scope
@@ -749,7 +749,13 @@ func typefmt(t *Type, flag int) string {
	if name != "" {
		str = name + " " + typ
	}
	if flag&obj.FmtShort == 0 && !fmtbody && t.Note != nil {

	// The fmtbody flag is intended to suppress escape analysis annotations
	// when printing a function type used in a function body.
	// (The escape analysis tags do not apply to func vars.)
	// But it must not suppress struct field tags.
	// See golang.org/issue/13777 and golang.org/issue/14331.
	if flag&obj.FmtShort == 0 && (!fmtbody || !t.Funarg) && t.Note != nil {
		str += " " + strconv.Quote(*t.Note)
	}
	return str

@@ -1537,7 +1543,7 @@ func nodedump(n *Node, flag int) string {
	} else {
		fmt.Fprintf(&buf, "%v%v", Oconv(int(n.Op), 0), Jconv(n, 0))
	}
	if recur && n.Type == nil && n.Name.Param.Ntype != nil {
	if recur && n.Type == nil && n.Name != nil && n.Name.Param != nil && n.Name.Param.Ntype != nil {
		indent(&buf)
		fmt.Fprintf(&buf, "%v-ntype%v", Oconv(int(n.Op), 0), n.Name.Param.Ntype)
	}
@@ -838,7 +838,7 @@ func gen(n *Node) {
		Cgen_as_wb(n.Left, n.Right, true)

	case OAS2DOTTYPE:
		cgen_dottype(n.Rlist.N, n.List.N, n.List.Next.N, false)
		cgen_dottype(n.Rlist.N, n.List.N, n.List.Next.N, needwritebarrier(n.List.N, n.Rlist.N))

	case OCALLMETH:
		cgen_callmeth(n, 0)
@@ -28,30 +28,21 @@ const (

const (
	// These values are known by runtime.
	// The MEMx and NOEQx values must run in parallel. See algtype.
	AMEM = iota
	ANOEQ = iota
	AMEM0
	AMEM8
	AMEM16
	AMEM32
	AMEM64
	AMEM128
	ANOEQ
	ANOEQ0
	ANOEQ8
	ANOEQ16
	ANOEQ32
	ANOEQ64
	ANOEQ128
	ASTRING
	AINTER
	ANILINTER
	ASLICE
	AFLOAT32
	AFLOAT64
	ACPLX64
	ACPLX128
	AUNK = 100
	AMEM = 100
)

const (

@@ -329,8 +320,7 @@ const (

const (
	// types of channel
	// must match ../../pkg/nreflect/type.go:/Chandir
	Cxxx = 0
	// must match ../../../../reflect/type.go:/ChanDir
	Crecv = 1 << 0
	Csend = 1 << 1
	Cboth = Crecv | Csend

@@ -385,27 +375,10 @@ type Sig struct {
	offset int32
}

type Io struct {
	infile     string
	bin        *obj.Biobuf
	cp         string // used for content when bin==nil
	last       int
	peekc      int
	peekc1     int // second peekc for ...
	nlsemi     bool
	eofnl      bool
	importsafe bool
}

type Dlist struct {
	field *Type
}

type Idir struct {
	link *Idir
	dir  string
}

// argument passing to/from
// smagic and umagic
type Magic struct {

@@ -452,10 +425,6 @@ var sizeof_String int // runtime sizeof(String)

var dotlist [10]Dlist // size is max depth of embeddeds

var curio Io

var pushedio Io

var lexlineno int32

var lineno int32

@@ -493,8 +462,6 @@ var debugstr string
var Debug_checknil int
var Debug_typeassert int

var importmyname *Sym // my name for package

var localpkg *Pkg // package being compiled

var importpkg *Pkg // package being imported

@@ -527,8 +494,6 @@ var Tptr EType // either TPTR32 or TPTR64

var myimportpath string

var idirs *Idir

var localimport string

var asmhdr string
@@ -7,7 +7,7 @@
// saves a copy of the body. Then inlcalls walks each function body to
// expand calls to inlinable functions.
//
// The debug['l'] flag controls the agressiveness. Note that main() swaps level 0 and 1,
// The debug['l'] flag controls the aggressiveness. Note that main() swaps level 0 and 1,
// making 1 the default and -l disable. -ll and more is useful to flush out bugs.
// These additional levels (beyond -l) may be buggy and are not supported.
// 0: disabled
(File diff suppressed because it is too large.)
@@ -4,95 +4,90 @@

// +build ignore

// Generate builtin.go from builtin/runtime.go and builtin/unsafe.go
// (passed as arguments on the command line by a go:generate comment).
// Generate builtin.go from builtin/runtime.go and builtin/unsafe.go.
// Run this after changing builtin/runtime.go and builtin/unsafe.go
// or after changing the export metadata format in the compiler.
// Either way, you need to have a working compiler binary first.
package main

import (
	"bufio"
	"bytes"
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"strings"
)

var stdout = flag.Bool("stdout", false, "write to stdout instead of builtin.go")

func main() {
	f, err := os.Create("builtin.go")
	flag.Parse()

	var b bytes.Buffer
	fmt.Fprintln(&b, "// AUTO-GENERATED by mkbuiltin.go; DO NOT EDIT")
	fmt.Fprintln(&b, "")
	fmt.Fprintln(&b, "package gc")

	mkbuiltin(&b, "runtime")
	mkbuiltin(&b, "unsafe")

	var err error
	if *stdout {
		_, err = os.Stdout.Write(b.Bytes())
	} else {
		err = ioutil.WriteFile("builtin.go", b.Bytes(), 0666)
	}
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	w := bufio.NewWriter(f)

	fmt.Fprintln(w, "// AUTO-GENERATED by mkbuiltin.go; DO NOT EDIT")
	fmt.Fprintln(w, "")
	fmt.Fprintln(w, "package gc")

	for _, name := range os.Args[1:] {
		mkbuiltin(w, name)
	}

	if err := w.Flush(); err != nil {
		log.Fatal(err)
	}
}

// Compile .go file, import data from .6 file, and write Go string version.
// Compile .go file, import data from .o file, and write Go string version.
func mkbuiltin(w io.Writer, name string) {
	if err := exec.Command("go", "tool", "compile", "-A", "builtin/"+name+".go").Run(); err != nil {
	args := []string{"tool", "compile", "-A"}
	if name == "runtime" {
		args = append(args, "-u")
	}
	args = append(args, "builtin/"+name+".go")

	if err := exec.Command("go", args...).Run(); err != nil {
		log.Fatal(err)
	}
	obj := name + ".o"
	defer os.Remove(obj)

	r, err := os.Open(obj)
	b, err := ioutil.ReadFile(obj)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	scanner := bufio.NewScanner(r)

	// Look for $$ that introduces imports.
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), "$$") {
			goto Begin
		}
	i := bytes.Index(b, []byte("\n$$\n"))
	if i < 0 {
		log.Fatal("did not find beginning of imports")
	}
	log.Fatal("did not find beginning of imports")
	i += 4

Begin:
	initfunc := fmt.Sprintf("init_%s_function", name)

	fmt.Fprintf(w, "\nconst %simport = \"\" +\n", name)

	// sys.go claims to be in package PACKAGE to avoid
	// conflicts during "go tool compile sys.go". Rename PACKAGE to $2.
	replacer := strings.NewReplacer("PACKAGE", name)

	// Process imports, stopping at $$ that closes them.
	for scanner.Scan() {
		p := scanner.Text()
		if strings.Contains(p, "$$") {
			goto End
		}
	// Look for $$ that closes imports.
	j := bytes.Index(b[i:], []byte("\n$$\n"))
	if j < 0 {
		log.Fatal("did not find end of imports")
	}
	j += i + 4

	// Process and reformat imports.
	fmt.Fprintf(w, "\nconst %simport = \"\"", name)
	for _, p := range bytes.SplitAfter(b[i:j], []byte("\n")) {
		// Chop leading white space.
		p = strings.TrimLeft(p, " \t")

		// Cut out decl of init_$1_function - it doesn't exist.
		if strings.Contains(p, initfunc) {
		p = bytes.TrimLeft(p, " \t")
		if len(p) == 0 {
			continue
		}

		fmt.Fprintf(w, "\t%q +\n", replacer.Replace(p)+"\n")
		fmt.Fprintf(w, " +\n\t%q", p)
	}
	log.Fatal("did not find end of imports")

End:
	fmt.Fprintf(w, "\t\"$$\\n\"\n")
	fmt.Fprintf(w, "\n")
}
@@ -233,8 +233,7 @@ func stringsym(s string) (hdr, data *Sym) {
		off = dsname(symdata, off, s[n:n+m])
	}

	off = duint8(symdata, off, 0) // terminating NUL for runtime
	off = (off + Widthptr - 1) &^ (Widthptr - 1) // round to pointer alignment
	off = duint8(symdata, off, 0) // terminating NUL for runtime
	ggloblsym(symdata, int32(off), obj.DUPOK|obj.RODATA|obj.LOCAL)

	return symhdr, symdata
@@ -42,8 +42,7 @@ import (
// Order holds state during the ordering process.
type Order struct {
	out  *NodeList // list of generated statements
	temp *NodeList // head of stack of temporary variables
	free *NodeList // free list of NodeList* structs (for use in temp)
	temp []*Node   // stack of temporary variables
}

// Order rewrites fn->nbody to apply the ordering constraints

@@ -68,14 +67,7 @@ func ordertemp(t *Type, order *Order, clear bool) *Node {
		order.out = list(order.out, a)
	}

	l := order.free
	if l == nil {
		l = new(NodeList)
	}
	order.free = l.Next
	l.Next = order.temp
	l.N = var_
	order.temp = l
	order.temp = append(order.temp, var_)
	return var_
}

@@ -215,42 +207,35 @@ func orderaddrtemp(np **Node, order *Order) {
	*np = ordercopyexpr(n, n.Type, order, 0)
}

type ordermarker int

// Marktemp returns the top of the temporary variable stack.
func marktemp(order *Order) *NodeList {
	return order.temp
func marktemp(order *Order) ordermarker {
	return ordermarker(len(order.temp))
}

// Poptemp pops temporaries off the stack until reaching the mark,
// which must have been returned by marktemp.
func poptemp(mark *NodeList, order *Order) {
	var l *NodeList

	for {
		l = order.temp
		if l == mark {
			break
		}
		order.temp = l.Next
		l.Next = order.free
		order.free = l
	}
func poptemp(mark ordermarker, order *Order) {
	order.temp = order.temp[:mark]
}

// Cleantempnopop emits to *out VARKILL instructions for each temporary
// above the mark on the temporary stack, but it does not pop them
// from the stack.
func cleantempnopop(mark *NodeList, order *Order, out **NodeList) {
func cleantempnopop(mark ordermarker, order *Order, out **NodeList) {
	var kill *Node

	for l := order.temp; l != mark; l = l.Next {
		if l.N.Name.Keepalive {
			l.N.Name.Keepalive = false
			l.N.Addrtaken = true // ensure SSA keeps the l.N variable
			kill = Nod(OVARLIVE, l.N, nil)
	for i := len(order.temp) - 1; i >= int(mark); i-- {
		n := order.temp[i]
		if n.Name.Keepalive {
			n.Name.Keepalive = false
			n.Addrtaken = true // ensure SSA keeps the n variable
			kill = Nod(OVARLIVE, n, nil)
			typecheck(&kill, Etop)
			*out = list(*out, kill)
		}
		kill = Nod(OVARKILL, l.N, nil)
		kill = Nod(OVARKILL, n, nil)
		typecheck(&kill, Etop)
		*out = list(*out, kill)
	}

@@ -258,7 +243,7 @@ func cleantempnopop(mark *NodeList, order *Order, out **NodeList) {

// Cleantemp emits VARKILL instructions for each temporary above the
// mark on the temporary stack and removes them from the stack.
func cleantemp(top *NodeList, order *Order) {
func cleantemp(top ordermarker, order *Order) {
	cleantempnopop(top, order, &order.out)
	poptemp(top, order)
}

@@ -290,13 +275,7 @@ func orderexprinplace(np **Node, outer *Order) {

	// insert new temporaries from order
	// at head of outer list.
	lp := &order.temp

	for *lp != nil {
		lp = &(*lp).Next
	}
	*lp = outer.temp
	outer.temp = order.temp
	outer.temp = append(outer.temp, order.temp...)

	*np = n
}
@@ -5,7 +5,7 @@
package gc

// The recursive-descent parser is built around a slighty modified grammar
// of Go to accomodate for the constraints imposed by strict one token look-
// of Go to accommodate for the constraints imposed by strict one token look-
// ahead, and for better error handling. Subsequent checks of the constructed
// syntax tree restrict the language accepted by the compiler to proper Go.
//

@@ -13,6 +13,7 @@ package gc
// to handle optional commas and semicolons before a closing ) or } .

import (
    "cmd/internal/obj"
    "fmt"
    "strconv"
    "strings"

@@ -20,81 +21,31 @@ import (
const trace = false // if set, parse tracing can be enabled with -x

// TODO(gri) Once we handle imports w/o redirecting the underlying
// source of the lexer we can get rid of these. They are here for
// compatibility with the existing yacc-based parser setup (issue 13242).
var thenewparser parser // the parser in use
var savedstate []parser // saved parser state, used during import

func push_parser() {
    // Indentation (for tracing) must be preserved across parsers
    // since we are changing the lexer source (and parser state)
    // under foot, in the middle of productions. This won't be
    // needed anymore once we fix issue 13242, but neither will
    // be the push/pop_parser functionality.
    // (Instead we could just use a global variable indent, but
    // but eventually indent should be parser-specific anyway.)
    indent := thenewparser.indent
    savedstate = append(savedstate, thenewparser)
    thenewparser = parser{indent: indent} // preserve indentation
    thenewparser.next()
// parse_import parses the export data of a package that is imported.
func parse_import(bin *obj.Biobuf, indent []byte) {
    newparser(bin, indent).import_package()
}

func pop_parser() {
    indent := thenewparser.indent
    n := len(savedstate) - 1
    thenewparser = savedstate[n]
    thenewparser.indent = indent // preserve indentation
    savedstate = savedstate[:n]
}

// parse_file sets up a new parser and parses a single Go source file.
func parse_file() {
    thenewparser = parser{}
    thenewparser.loadsys()
    thenewparser.next()
    thenewparser.file()
}

// loadsys loads the definitions for the low-level runtime functions,
// so that the compiler can generate calls to them,
// but does not make the name "runtime" visible as a package.
func (p *parser) loadsys() {
    if trace && Debug['x'] != 0 {
        defer p.trace("loadsys")()
    }

    importpkg = Runtimepkg

    if Debug['A'] != 0 {
        cannedimports("runtime.Builtin", "package runtime\n\n$$\n\n")
    } else {
        cannedimports("runtime.Builtin", runtimeimport)
    }
    curio.importsafe = true

    p.import_package()
    p.import_there()

    importpkg = nil
// parse_file parses a single Go source file.
func parse_file(bin *obj.Biobuf) {
    newparser(bin, nil).file()
}

type parser struct {
    tok int32 // next token (one-token look-ahead)
    op Op // valid if tok == LASOP
    val Val // valid if tok == LLITERAL
    sym_ *Sym // valid if tok == LNAME
    fnest int // function nesting level (for error handling)
    xnest int // expression nesting level (for complit ambiguity resolution)
    yy yySymType // for temporary use by next
    indent []byte // tracing support
    lexer
    fnest int // function nesting level (for error handling)
    xnest int // expression nesting level (for complit ambiguity resolution)
    indent []byte // tracing support
}

func (p *parser) next() {
    p.tok = yylex(&p.yy)
    p.op = p.yy.op
    p.val = p.yy.val
    p.sym_ = p.yy.sym
// newparser returns a new parser ready to parse from src.
// indent is the initial indentation for tracing output.
func newparser(src *obj.Biobuf, indent []byte) *parser {
    var p parser
    p.bin = src
    p.indent = indent
    p.next()
    return &p
}

func (p *parser) got(tok int32) bool {
@@ -347,108 +298,87 @@ func (p *parser) import_() {
    p.want(LIMPORT)
    if p.got('(') {
        for p.tok != EOF && p.tok != ')' {
            p.import_stmt()
            p.importdcl()
            if !p.osemi(')') {
                break
            }
        }
        p.want(')')
    } else {
        p.import_stmt()
    }
}

func (p *parser) import_stmt() {
    if trace && Debug['x'] != 0 {
        defer p.trace("import_stmt")()
    }

    line := int32(p.import_here())
    if p.tok == LPACKAGE {
        p.import_package()
        p.import_there()

        ipkg := importpkg
        my := importmyname
        importpkg = nil
        importmyname = nil

        if my == nil {
            my = Lookup(ipkg.Name)
        }

        pack := Nod(OPACK, nil, nil)
        pack.Sym = my
        pack.Name.Pkg = ipkg
        pack.Lineno = line

        if strings.HasPrefix(my.Name, ".") {
            importdot(ipkg, pack)
            return
        }
        if my.Name == "init" {
            lineno = line
            Yyerror("cannot import package as init - init must be a func")
            return
        }
        if my.Name == "_" {
            return
        }
        if my.Def != nil {
            lineno = line
            redeclare(my, "as imported package name")
        }
        my.Def = pack
        my.Lastlineno = line
        my.Block = 1 // at top level

        return
    }

    p.import_there()
    // When an invalid import path is passed to importfile,
    // it calls Yyerror and then sets up a fake import with
    // no package statement. This allows us to test more
    // than one invalid import statement in a single file.
    if nerrors == 0 {
        Fatalf("phase error in import")
        p.importdcl()
    }
}

// ImportSpec = [ "." | PackageName ] ImportPath .
// ImportPath = string_lit .
//
// import_here switches the underlying lexed source to the export data
// of the imported package.
func (p *parser) import_here() int {
func (p *parser) importdcl() {
    if trace && Debug['x'] != 0 {
        defer p.trace("import_here")()
        defer p.trace("importdcl")()
    }

    importmyname = nil
    var my *Sym
    switch p.tok {
    case LNAME, '@', '?':
        // import with given name
        importmyname = p.sym()
        my = p.sym()

    case '.':
        // import into my name space
        importmyname = Lookup(".")
        my = Lookup(".")
        p.next()
    }

    var path Val
    if p.tok == LLITERAL {
        path = p.val
        p.next()
    } else {
    if p.tok != LLITERAL {
        p.syntax_error("missing import path; require quoted string")
        p.advance(';', ')')
        return
    }

    line := parserline()
    importfile(&path, line)
    return line
    line := int32(parserline())
    path := p.val
    p.next()

    importfile(&path, p.indent)
    if importpkg == nil {
        if nerrors == 0 {
            Fatalf("phase error in import")
        }
        return
    }

    ipkg := importpkg
    importpkg = nil

    ipkg.Direct = true

    if my == nil {
        my = Lookup(ipkg.Name)
    }

    pack := Nod(OPACK, nil, nil)
    pack.Sym = my
    pack.Name.Pkg = ipkg
    pack.Lineno = line

    if strings.HasPrefix(my.Name, ".") {
        importdot(ipkg, pack)
        return
    }
    if my.Name == "init" {
        lineno = line
        Yyerror("cannot import package as init - init must be a func")
        return
    }
    if my.Name == "_" {
        return
    }
    if my.Def != nil {
        lineno = line
        redeclare(my, "as imported package name")
    }
    my.Def = pack
    my.Lastlineno = line
    my.Block = 1 // at top level
}

// import_package parses the header of an imported package as exported
@@ -467,9 +397,10 @@ func (p *parser) import_package() {
        p.import_error()
    }

    importsafe := false
    if p.tok == LNAME {
        if p.sym_.Name == "safe" {
            curio.importsafe = true
            importsafe = true
        }
        p.next()
    }

@@ -481,23 +412,9 @@
    } else if importpkg.Name != name {
        Yyerror("conflicting names %s and %s for package %q", importpkg.Name, name, importpkg.Path)
    }
    if incannedimport == 0 {
        importpkg.Direct = true
    }
    importpkg.Safe = curio.importsafe

    if safemode != 0 && !curio.importsafe {
        Yyerror("cannot import unsafe package %q", importpkg.Path)
    }
}

// import_there parses the imported package definitions and then switches
// the underlying lexed source back to the importing package.
func (p *parser) import_there() {
    if trace && Debug['x'] != 0 {
        defer p.trace("import_there")()
    }
    importpkg.Safe = importsafe

    typecheckok = true
    defercheckwidth()

    p.hidden_import_list()

@@ -508,7 +425,7 @@ func (p *parser) import_there() {
    }

    resumecheckwidth()
    unimportfile()
    typecheckok = false
}

// Declaration = ConstDecl | TypeDecl | VarDecl .
@@ -1136,65 +1053,16 @@ func (p *parser) if_stmt() *Node {
    stmt.Nbody = p.loop_body("if clause")

    l := p.elseif_list_else() // does markdcl

    n := stmt
    popdcl()
    for nn := l; nn != nil; nn = nn.Next {
        if nn.N.Op == OIF {
            popdcl()
        }
        n.Rlist = list1(nn.N)
        n = nn.N
    }

    return stmt
}

func (p *parser) elseif() *NodeList {
    if trace && Debug['x'] != 0 {
        defer p.trace("elseif")()
    }

    // LELSE LIF already consumed
    markdcl() // matching popdcl in if_stmt

    stmt := p.if_header()
    if stmt.Left == nil {
        Yyerror("missing condition in if statement")
    }

    stmt.Nbody = p.loop_body("if clause")

    return list1(stmt)
}

func (p *parser) elseif_list_else() (l *NodeList) {
    if trace && Debug['x'] != 0 {
        defer p.trace("elseif_list_else")()
    }

    for p.got(LELSE) {
        if p.got(LIF) {
            l = concat(l, p.elseif())
    if p.got(LELSE) {
        if p.tok == LIF {
            stmt.Rlist = list1(p.if_stmt())
        } else {
            l = concat(l, p.else_())
            break
            stmt.Rlist = list1(p.compound_stmt(true))
        }
    }

    return l
}

func (p *parser) else_() *NodeList {
    if trace && Debug['x'] != 0 {
        defer p.trace("else")()
    }

    l := &NodeList{N: p.compound_stmt(true)}
    l.End = l
    return l

    popdcl()
    return stmt
}

// switch_stmt parses both expression and type switch statements.
@@ -187,21 +187,12 @@ func emitptrargsmap() {
// the top of the stack and increasing in size.
// Non-autos sort on offset.
func cmpstackvarlt(a, b *Node) bool {
    if a.Class != b.Class {
        if a.Class == PAUTO {
            return false
        }
        return true
    if (a.Class == PAUTO) != (b.Class == PAUTO) {
        return b.Class == PAUTO
    }

    if a.Class != PAUTO {
        if a.Xoffset < b.Xoffset {
            return true
        }
        if a.Xoffset > b.Xoffset {
            return false
        }
        return false
        return a.Xoffset < b.Xoffset
    }

    if a.Used != b.Used {

@@ -220,11 +211,8 @@ func cmpstackvarlt(a, b *Node) bool {
        return ap
    }

    if a.Type.Width < b.Type.Width {
        return false
    }
    if a.Type.Width > b.Type.Width {
        return true
    if a.Type.Width != b.Type.Width {
        return a.Type.Width > b.Type.Width
    }

    return a.Sym.Name < b.Sym.Name
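The simplification above collapses nested if/else branches into the `(a.Class == PAUTO) != (b.Class == PAUTO)` idiom, which orders all non-PAUTO nodes before PAUTO ones in a single comparison. A trimmed-down, self-contained analogue (field names here are illustrative, not the compiler's Node):

```go
package main

import (
    "fmt"
    "sort"
)

// varInfo stands in for the stack-variable fields the comparator uses:
// auto mimics Class == PAUTO, offset mimics Xoffset.
type varInfo struct {
    auto   bool
    offset int64
}

// lt mirrors the simplified comparator shape: when exactly one side is
// an auto, the non-auto sorts first; otherwise non-autos sort by offset.
func lt(a, b varInfo) bool {
    if a.auto != b.auto {
        return b.auto // non-autos before autos
    }
    if !a.auto {
        return a.offset < b.offset
    }
    return false
}

func main() {
    vs := []varInfo{{true, 0}, {false, 16}, {false, 8}}
    sort.Slice(vs, func(i, j int) bool { return lt(vs[i], vs[j]) })
    fmt.Println(vs)
}
```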
@@ -40,6 +40,16 @@ func TestCmpstackvar(t *testing.T) {
        Node{Class: PFUNC, Xoffset: 10},
        false,
    },
    {
        Node{Class: PPARAM, Xoffset: 10},
        Node{Class: PPARAMOUT, Xoffset: 20},
        true,
    },
    {
        Node{Class: PPARAMOUT, Xoffset: 10},
        Node{Class: PPARAM, Xoffset: 20},
        true,
    },
    {
        Node{Class: PAUTO, Used: true},
        Node{Class: PAUTO, Used: false},

@@ -101,6 +111,10 @@ func TestCmpstackvar(t *testing.T) {
        if got != d.lt {
            t.Errorf("want %#v < %#v", d.a, d.b)
        }
        // If we expect a < b to be true, check that b < a is false.
        if d.lt && cmpstackvarlt(&d.b, &d.a) {
            t.Errorf("unexpected %#v < %#v", d.b, d.a)
        }
    }
}
@@ -241,6 +241,19 @@ var flowmark int
// will not have flow graphs and consequently will not be optimized.
const MaxFlowProg = 50000

var ffcache []Flow // reusable []Flow, to reduce allocation

func growffcache(n int) {
    if n > cap(ffcache) {
        n = (n * 5) / 4
        if n > MaxFlowProg {
            n = MaxFlowProg
        }
        ffcache = make([]Flow, n)
    }
    ffcache = ffcache[:n]
}

func Flowstart(firstp *obj.Prog, newData func() interface{}) *Graph {
    // Count and mark instructions to annotate.
    nf := 0

@@ -268,7 +281,9 @@ func Flowstart(firstp *obj.Prog, newData func() interface{}) *Graph {
    // Allocate annotations and assign to instructions.
    graph := new(Graph)
    ff := make([]Flow, nf)

    growffcache(nf)
    ff := ffcache
    start := &ff[0]
    id := 0
    var last *Flow

@@ -331,6 +346,10 @@ func Flowend(graph *Graph) {
        f.Prog.Info.Flags = 0 // drop cached proginfo
        f.Prog.Opt = nil
    }
    clear := ffcache[:graph.Num]
    for i := range clear {
        clear[i] = Flow{}
    }
}
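The ffcache change above replaces a per-call `make([]Flow, nf)` with one reusable backing slice that grows geometrically up to a cap and is resliced (and zeroed on release) per use. A generic sketch of the same pattern, with illustrative names and an `int` element type instead of Flow:

```go
package main

import "fmt"

// maxCache caps how large the reusable backing array may grow,
// mirroring MaxFlowProg above.
const maxCache = 50000

var cache []int

// grow returns a length-n view of the shared cache, reallocating only
// when the request exceeds the current capacity. Growing 25% past the
// request amortizes future calls. (Like the original, it assumes the
// caller has already rejected requests larger than the cap.)
func grow(n int) []int {
    if n > cap(cache) {
        m := n * 5 / 4
        if m > maxCache {
            m = maxCache
        }
        cache = make([]int, m)
    }
    cache = cache[:n]
    return cache
}

func main() {
    a := grow(100)
    b := grow(50) // reuses the same backing array, no allocation
    fmt.Println(len(a), cap(a) >= 100, len(b))
}
```

As in Flowend above, a real release path should zero the used prefix before reuse so stale values (or pointers) do not leak into the next caller.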

// find looping structure
@@ -45,7 +45,7 @@ func siglt(a, b *Sig) bool {
// the given map type. This type is not visible to users -
// we include only enough information to generate a correct GC
// program for it.
// Make sure this stays in sync with ../../runtime/hashmap.go!
// Make sure this stays in sync with ../../../../runtime/hashmap.go!
const (
    BUCKETSIZE = 8
    MAXKEYSIZE = 128

@@ -149,7 +149,7 @@ func mapbucket(t *Type) *Type {
}

// Builds a type representing a Hmap structure for the given map type.
// Make sure this stays in sync with ../../runtime/hashmap.go!
// Make sure this stays in sync with ../../../../runtime/hashmap.go!
func hmap(t *Type) *Type {
    if t.Hmap != nil {
        return t.Hmap

@@ -186,7 +186,7 @@ func hiter(t *Type) *Type {
    }

    // build a struct:
    // hash_iter {
    // hiter {
    //    key *Key
    //    val *Value
    //    t *MapType

@@ -200,7 +200,7 @@ func hiter(t *Type) *Type {
    //    bucket uintptr
    //    checkBucket uintptr
    // }
    // must match ../../runtime/hashmap.go:hash_iter.
    // must match ../../../../runtime/hashmap.go:hiter.
    var field [12]*Type
    field[0] = makefield("key", Ptrto(t.Down))

@@ -473,7 +473,7 @@ func dgopkgpath(s *Sym, ot int, pkg *Pkg) int {
}

// uncommonType
// ../../runtime/type.go:/uncommonType
// ../../../../runtime/type.go:/uncommonType
func dextratype(sym *Sym, off int, t *Type, ptroff int) int {
    m := methods(t)
    if t.Sym == nil && len(m) == 0 {

@@ -513,7 +513,7 @@ func dextratype(sym *Sym, off int, t *Type, ptroff int) int {
    // methods
    for _, a := range m {
        // method
        // ../../runtime/type.go:/method
        // ../../../../runtime/type.go:/method
        ot = dgostringptr(s, ot, a.name)

        ot = dgopkgpath(s, ot, a.pkg)

@@ -710,21 +710,21 @@ func dcommontype(s *Sym, ot int, t *Type) int {
    gcsym, useGCProg, ptrdata := dgcsym(t)

    // ../../pkg/reflect/type.go:/^type.commonType
    // ../../../../reflect/type.go:/^type.rtype
    // actual type structure
    // type commonType struct {
    // type rtype struct {
    //    size uintptr
    //    ptrsize uintptr
    //    ptrdata uintptr
    //    hash uint32
    //    _ uint8
    //    align uint8
    //    fieldAlign uint8
    //    kind uint8
    //    alg unsafe.Pointer
    //    gcdata unsafe.Pointer
    //    alg *typeAlg
    //    gcdata *byte
    //    string *string
    //    *extraType
    //    ptrToThis *Type
    //    *uncommonType
    //    ptrToThis *rtype
    // }
    ot = duintptr(s, ot, uint64(t.Width))
    ot = duintptr(s, ot, uint64(ptrdata))

@@ -1010,7 +1010,7 @@ ok:
    case TARRAY:
        if t.Bound >= 0 {
            // ../../runtime/type.go:/ArrayType
            // ../../../../runtime/type.go:/arrayType
            s1 := dtypesym(t.Type)

            t2 := typ(TARRAY)

@@ -1023,7 +1023,7 @@ ok:
            ot = dsymptr(s, ot, s2, 0)
            ot = duintptr(s, ot, uint64(t.Bound))
        } else {
            // ../../runtime/type.go:/SliceType
            // ../../../../runtime/type.go:/sliceType
            s1 := dtypesym(t.Type)

            ot = dcommontype(s, ot, t)

@@ -1031,7 +1031,7 @@ ok:
            ot = dsymptr(s, ot, s1, 0)
        }

    // ../../runtime/type.go:/ChanType
    // ../../../../runtime/type.go:/chanType
    case TCHAN:
        s1 := dtypesym(t.Type)

@@ -1090,7 +1090,7 @@ ok:
            dtypesym(a.type_)
        }

        // ../../../runtime/type.go:/InterfaceType
        // ../../../../runtime/type.go:/interfaceType
        ot = dcommontype(s, ot, t)

        xt = ot - 2*Widthptr

@@ -1098,14 +1098,14 @@ ok:
        ot = duintxx(s, ot, uint64(n), Widthint)
        ot = duintxx(s, ot, uint64(n), Widthint)
        for _, a := range m {
            // ../../../runtime/type.go:/imethod
            // ../../../../runtime/type.go:/imethod
            ot = dgostringptr(s, ot, a.name)

            ot = dgopkgpath(s, ot, a.pkg)
            ot = dsymptr(s, ot, dtypesym(a.type_), 0)
        }

    // ../../../runtime/type.go:/MapType
    // ../../../../runtime/type.go:/mapType
    case TMAP:
        s1 := dtypesym(t.Down)

@@ -1140,20 +1140,20 @@ ok:
    case TPTR32, TPTR64:
        if t.Type.Etype == TANY {
            // ../../runtime/type.go:/UnsafePointerType
            // ../../../../runtime/type.go:/UnsafePointerType
            ot = dcommontype(s, ot, t)

            break
        }

        // ../../runtime/type.go:/PtrType
        // ../../../../runtime/type.go:/ptrType
        s1 := dtypesym(t.Type)

        ot = dcommontype(s, ot, t)
        xt = ot - 2*Widthptr
        ot = dsymptr(s, ot, s1, 0)

    // ../../runtime/type.go:/StructType
    // ../../../../runtime/type.go:/structType
    // for security, only the exported fields.
    case TSTRUCT:
        n := 0

@@ -1169,7 +1169,7 @@ ok:
        ot = duintxx(s, ot, uint64(n), Widthint)
        ot = duintxx(s, ot, uint64(n), Widthint)
        for t1 := t.Type; t1 != nil; t1 = t1.Down {
            // ../../runtime/type.go:/structField
            // ../../../../runtime/type.go:/structField
            if t1.Sym != nil && t1.Embedded == 0 {
                ot = dgostringptr(s, ot, t1.Sym.Name)
                if exportname(t1.Sym.Name) {

@@ -1349,7 +1349,7 @@ func dalgsym(t *Type) *Sym {
        ggloblsym(eqfunc, int32(Widthptr), obj.DUPOK|obj.RODATA)
    }

    // ../../runtime/alg.go:/typeAlg
    // ../../../../runtime/alg.go:/typeAlg
    ot := 0

    ot = dsymptr(s, ot, hashfunc, 0)
@@ -561,7 +561,7 @@ func (s *state) stmt(n *Node) {
    case OAS2DOTTYPE:
        res, resok := s.dottype(n.Rlist.N, true)
        s.assign(n.List.N, res, false, false, n.Lineno)
        s.assign(n.List.N, res, needwritebarrier(n.List.N, n.Rlist.N), false, n.Lineno)
        s.assign(n.List.Next.N, resok, false, false, n.Lineno)
        return
@@ -116,12 +116,6 @@ func Yyerror(format string, args ...interface{}) {
    if strings.HasPrefix(msg, "syntax error") {
        nsyntaxerrors++

        // An unexpected EOF caused a syntax error. Use the previous
        // line number since getc generated a fake newline character.
        if curio.eofnl {
            lexlineno = prevlineno
        }

        // only one syntax error per line
        if int32(yyerror_lastsyntax) == lexlineno {
            return

@@ -465,6 +459,15 @@ func algtype1(t *Type, bad **Type) int {
            return a
        }

        switch t.Bound {
        case 0:
            // We checked above that the element type is comparable.
            return AMEM
        case 1:
            // Single-element array is same as its lone element.
            return a
        }

        return -1 // needs special compare

    case TSTRUCT:
@@ -500,28 +503,20 @@ func algtype1(t *Type, bad **Type) int {
func algtype(t *Type) int {
    a := algtype1(t, nil)
    if a == AMEM || a == ANOEQ {
        if Isslice(t) {
            return ASLICE
        }
    if a == AMEM {
        switch t.Width {
        case 0:
            return a + AMEM0 - AMEM
            return AMEM0
        case 1:
            return a + AMEM8 - AMEM
            return AMEM8
        case 2:
            return a + AMEM16 - AMEM
            return AMEM16
        case 4:
            return a + AMEM32 - AMEM
            return AMEM32
        case 8:
            return a + AMEM64 - AMEM
            return AMEM64
        case 16:
            return a + AMEM128 - AMEM
            return AMEM128
        }
    }
@@ -2640,17 +2635,13 @@ func genhash(sym *Sym, t *Type) {
    safemode = old_safemode
}

// Return node for
// if p.field != q.field { return false }
// eqfield returns the node
// p.field == q.field
func eqfield(p *Node, q *Node, field *Node) *Node {
    nx := Nod(OXDOT, p, field)
    ny := Nod(OXDOT, q, field)
    nif := Nod(OIF, nil, nil)
    nif.Left = Nod(ONE, nx, ny)
    r := Nod(ORETURN, nil, nil)
    r.List = list(r.List, Nodbool(false))
    nif.Nbody = list(nif.Nbody, r)
    return nif
    ne := Nod(OEQ, nx, ny)
    return ne
}

func eqmemfunc(size int64, type_ *Type, needsize *int) *Node {

@@ -2671,8 +2662,8 @@ func eqmemfunc(size int64, type_ *Type, needsize *int) *Node {
    return fn
}

// Return node for
// if !memequal(&p.field, &q.field [, size]) { return false }
// eqmem returns the node
// memequal(&p.field, &q.field [, size])
func eqmem(p *Node, q *Node, field *Node, size int64) *Node {
    var needsize int

@@ -2690,15 +2681,11 @@ func eqmem(p *Node, q *Node, field *Node, size int64) *Node {
        call.List = list(call.List, Nodintconst(size))
    }

    nif := Nod(OIF, nil, nil)
    nif.Left = Nod(ONOT, call, nil)
    r := Nod(ORETURN, nil, nil)
    r.List = list(r.List, Nodbool(false))
    nif.Nbody = list(nif.Nbody, r)
    return nif
    return call
}

// Generate a helper function to check equality of two values of type t.
// geneq generates a helper function to
// check equality of two values of type t.
func geneq(sym *Sym, t *Type) {
    if Debug['r'] != 0 {
        fmt.Printf("geneq %v %v\n", sym, t)

@@ -2768,12 +2755,18 @@ func geneq(sym *Sym, t *Type) {
        nrange.Nbody = list(nrange.Nbody, nif)
        fn.Nbody = list(fn.Nbody, nrange)

    // Walk the struct using memequal for runs of AMEM
        // return true
        ret := Nod(ORETURN, nil, nil)
        ret.List = list(ret.List, Nodbool(true))
        fn.Nbody = list(fn.Nbody, ret)

    // Walk the struct using memequal for runs of AMEM
    // and calling specific equality tests for the others.
    // Skip blank-named fields.
    case TSTRUCT:
        var first *Type

        var conjuncts []*Node
        offend := int64(0)
        var size int64
        for t1 := t.Type; ; t1 = t1.Down {

@@ -2796,17 +2789,17 @@ func geneq(sym *Sym, t *Type) {
            // cross-package unexported fields.
            if first != nil {
                if first.Down == t1 {
                    fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
                    conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
                } else if first.Down.Down == t1 {
                    fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
                    conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
                    first = first.Down
                    if !isblanksym(first.Sym) {
                        fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(first.Sym)))
                        conjuncts = append(conjuncts, eqfield(np, nq, newname(first.Sym)))
                    }
                } else {
                    // More than two fields: use memequal.
                    size = offend - first.Width // first->width is offset
                    fn.Nbody = list(fn.Nbody, eqmem(np, nq, newname(first.Sym), size))
                    conjuncts = append(conjuncts, eqmem(np, nq, newname(first.Sym), size))
                }

                first = nil

@@ -2820,16 +2813,27 @@ func geneq(sym *Sym, t *Type) {
            }

            // Check this field, which is not just memory.
            fn.Nbody = list(fn.Nbody, eqfield(np, nq, newname(t1.Sym)))
            conjuncts = append(conjuncts, eqfield(np, nq, newname(t1.Sym)))
        }

        var and *Node
        switch len(conjuncts) {
        case 0:
            and = Nodbool(true)
        case 1:
            and = conjuncts[0]
        default:
            and = Nod(OANDAND, conjuncts[0], conjuncts[1])
            for _, conjunct := range conjuncts[2:] {
                and = Nod(OANDAND, and, conjunct)
            }
        }

        ret := Nod(ORETURN, nil, nil)
        ret.List = list(ret.List, and)
        fn.Nbody = list(fn.Nbody, ret)
    }

    // return true
    r := Nod(ORETURN, nil, nil)

    r.List = list(r.List, Nodbool(true))
    fn.Nbody = list(fn.Nbody, r)

    if Debug['r'] != 0 {
        dumplist("geneq body", fn.Nbody)
    }

@@ -2847,10 +2851,18 @@ func geneq(sym *Sym, t *Type) {
    // for a struct containing a reflect.Value, which itself has
    // an unexported field of type unsafe.Pointer.
    old_safemode := safemode

    safemode = 0

    // Disable checknils while compiling this code.
    // We are comparing a struct or an array,
    // neither of which can be nil, and our comparisons
    // are shallow.
    Disable_checknil++

    funccompile(fn)

    safemode = old_safemode
    Disable_checknil--
}
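The geneq change above stops emitting an early `if p.field != q.field { return false }` per field and instead collects one comparison expression per field, then folds them into a single `return a && b && ...`. The same fold can be sketched over plain strings (a simplification of the Node-based version, with hypothetical comparison texts):

```go
package main

import "fmt"

// fold combines per-field comparison expressions into one conjunction,
// mirroring the switch over len(conjuncts) in geneq: zero conjuncts
// means trivially equal, one is returned as-is, more are &&-chained
// left to right.
func fold(conjuncts []string) string {
    switch len(conjuncts) {
    case 0:
        return "true"
    case 1:
        return conjuncts[0]
    }
    and := conjuncts[0]
    for _, c := range conjuncts[1:] {
        and = and + " && " + c
    }
    return and
}

func main() {
    fmt.Println(fold(nil))
    fmt.Println(fold([]string{"p.x == q.x"}))
    fmt.Println(fold([]string{"p.x == q.x", "p.y == q.y", "memequal(&p.z, &q.z)"}))
}
```

Folding into one boolean expression lets later passes see the whole comparison at once, rather than a chain of early returns.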

func ifacelookdot(s *Sym, t *Type, followptr *bool, ignorecase int) *Type {
@@ -549,20 +549,6 @@ func (s *typeSwitch) walk(sw *Node) {
    // set up labels and jumps
    casebody(sw, s.facename)

    // calculate type hash
    t := cond.Right.Type
    if isnilinter(t) {
        a = syslook("efacethash", 1)
    } else {
        a = syslook("ifacethash", 1)
    }
    substArgTypes(a, t)
    a = Nod(OCALL, a, nil)
    a.List = list1(s.facename)
    a = Nod(OAS, s.hashname, a)
    typecheck(&a, Etop)
    cas = list(cas, a)

    cc := caseClauses(sw, switchKindType)
    sw.List = nil
    var def *Node

@@ -572,22 +558,66 @@ func (s *typeSwitch) walk(sw *Node) {
    } else {
        def = Nod(OBREAK, nil, nil)
    }
    var typenil *Node
    if len(cc) > 0 && cc[0].typ == caseKindTypeNil {
        typenil = cc[0].node.Right
        cc = cc[1:]
    }

    // For empty interfaces, do:
    // if e._type == nil {
    //    do nil case if it exists, otherwise default
    // }
    // h := e._type.hash
    // Use a similar strategy for non-empty interfaces.

    // Get interface descriptor word.
    typ := Nod(OITAB, s.facename, nil)

    // Check for nil first.
    i := Nod(OIF, nil, nil)
    i.Left = Nod(OEQ, typ, nodnil())
    if typenil != nil {
        // Do explicit nil case right here.
        i.Nbody = list1(typenil)
    } else {
        // Jump to default case.
        lbl := newCaseLabel()
        i.Nbody = list1(Nod(OGOTO, lbl, nil))
        // Wrap default case with label.
        blk := Nod(OBLOCK, nil, nil)
        blk.List = list(list1(Nod(OLABEL, lbl, nil)), def)
        def = blk
    }
    typecheck(&i.Left, Erv)
    cas = list(cas, i)

    if !isnilinter(cond.Right.Type) {
        // Load type from itab.
        typ = Nod(ODOTPTR, typ, nil)
        typ.Type = Ptrto(Types[TUINT8])
        typ.Typecheck = 1
        typ.Xoffset = int64(Widthptr) // offset of _type in runtime.itab
        typ.Bounded = true // guaranteed not to fault
    }
    // Load hash from type.
    h := Nod(ODOTPTR, typ, nil)
    h.Type = Types[TUINT32]
    h.Typecheck = 1
    h.Xoffset = int64(2 * Widthptr) // offset of hash in runtime._type
    h.Bounded = true // guaranteed not to fault
    a = Nod(OAS, s.hashname, h)
    typecheck(&a, Etop)
    cas = list(cas, a)

    // insert type equality check into each case block
    for _, c := range cc {
        n := c.node
        switch c.typ {
        case caseKindTypeNil:
            var v Val
            v.U = new(NilVal)
            a = Nod(OIF, nil, nil)
            a.Left = Nod(OEQ, s.facename, nodlit(v))
            typecheck(&a.Left, Erv)
            a.Nbody = list1(n.Right) // if i==nil { goto l }
            n.Right = a

        case caseKindTypeVar, caseKindTypeConst:
            n.Right = s.typeone(n)
        default:
            Fatalf("typeSwitch with bad kind: %d", c.typ)
        }
    }
@@ -936,7 +936,6 @@ OpSwitch:
        n.Type = n.Right.Type
        n.Right = nil
        if n.Type == nil {
            n.Type = nil
            return
        }
    }
@@ -1,3 +1,7 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package gc

import (
@@ -3193,6 +3193,21 @@ func walkcompare(np **Node, init **NodeList) {
        return
    }

    if t.Etype == TARRAY {
        // Zero- or single-element array, of any type.
        switch t.Bound {
        case 0:
            finishcompare(np, n, Nodbool(n.Op == OEQ), init)
            return
        case 1:
            l0 := Nod(OINDEX, l, Nodintconst(0))
            r0 := Nod(OINDEX, r, Nodintconst(0))
            a := Nod(n.Op, l0, r0)
            finishcompare(np, n, a, init)
            return
        }
    }

    if t.Etype == TSTRUCT && countfield(t) <= 4 {
        // Struct of four or fewer fields.
        // Inline comparisons.
@@ -181,6 +181,10 @@ func (f *File) Visit(node ast.Node) ast.Visitor {
        }
        n.List = f.addCounters(n.Lbrace, n.Rbrace+1, n.List, true) // +1 to step past closing brace.
    case *ast.IfStmt:
        if n.Init != nil {
            ast.Walk(f, n.Init)
        }
        ast.Walk(f, n.Cond)
        ast.Walk(f, n.Body)
        if n.Else == nil {
            return nil

@@ -219,11 +223,21 @@ func (f *File) Visit(node ast.Node) ast.Visitor {
    case *ast.SwitchStmt:
        // Don't annotate an empty switch - creates a syntax error.
        if n.Body == nil || len(n.Body.List) == 0 {
            if n.Init != nil {
                ast.Walk(f, n.Init)
            }
            if n.Tag != nil {
                ast.Walk(f, n.Tag)
            }
            return nil
        }
    case *ast.TypeSwitchStmt:
        // Don't annotate an empty type switch - creates a syntax error.
        if n.Body == nil || len(n.Body.List) == 0 {
            if n.Init != nil {
                ast.Walk(f, n.Init)
            }
            ast.Walk(f, n.Assign)
            return nil
        }
    }
@@ -24,6 +24,7 @@ func testAll() {
    testSelect2()
    testPanic()
    testEmptySwitches()
    testFunctionLiteral()
}

// The indexes of the counters in testPanic are known to main.go

@@ -216,3 +217,32 @@ func testEmptySwitches() {
    <-c
    check(LINE, 1)
}

func testFunctionLiteral() {
    a := func(f func()) error {
        f()
        f()
        return nil
    }

    b := func(f func()) bool {
        f()
        f()
        return true
    }

    check(LINE, 1)
    a(func() {
        check(LINE, 2)
    })

    if err := a(func() {
        check(LINE, 2)
    }); err != nil {
    }

    switch b(func() {
        check(LINE, 2)
    }) {
    }
}
@@ -754,7 +754,7 @@ func matchtag(tag string) bool {
}
return !matchtag(tag[1:])
}
return tag == goos || tag == goarch || tag == "cmd_go_bootstrap" || tag == "go1.1" || (goos == "android" && tag == "linux")
return tag == "gc" || tag == goos || tag == goarch || tag == "cmd_go_bootstrap" || tag == "go1.1" || (goos == "android" && tag == "linux")
}

// shouldbuild reports whether we should build this file.
@@ -798,10 +798,15 @@ func shouldbuild(file, dir string) bool {
if p == "" {
continue
}
if strings.Contains(p, "package documentation") {
code := p
i := strings.Index(code, "//")
if i > 0 {
code = strings.TrimSpace(code[:i])
}
if code == "package documentation" {
return false
}
if strings.Contains(p, "package main") && dir != "cmd/go" && dir != "cmd/cgo" {
if code == "package main" && dir != "cmd/go" && dir != "cmd/cgo" {
return false
}
if !strings.HasPrefix(p, "//") {
@@ -810,11 +815,11 @@ func shouldbuild(file, dir string) bool {
if !strings.Contains(p, "+build") {
continue
}
fields := splitfields(p)
if len(fields) < 2 || fields[1] != "+build" {
fields := splitfields(p[2:])
if len(fields) < 1 || fields[0] != "+build" {
continue
}
for _, p := range fields[2:] {
for _, p := range fields[1:] {
if matchfield(p) {
goto fieldmatch
}

@@ -947,6 +947,11 @@ func (t *tester) raceTest(dt *distTest) error {
t.addCmd(dt, "src", "go", "test", "-race", "-i", "runtime/race", "flag", "os/exec")
t.addCmd(dt, "src", "go", "test", "-race", "-run=Output", "runtime/race")
t.addCmd(dt, "src", "go", "test", "-race", "-short", "-run=TestParse|TestEcho", "flag", "os/exec")
// We don't want the following line, because it
// slows down all.bash (by 10 seconds on my laptop).
// The race builder should catch any error here, but doesn't.
// TODO(iant): Figure out how to catch this.
// t.addCmd(dt, "src", "go", "test", "-race", "-run=TestParallelTest", "cmd/go")
if t.cgoEnabled {
env := mergeEnvLists([]string{"GOTRACEBACK=2"}, os.Environ())
cmd := t.addCmd(dt, "misc/cgo/test", "go", "test", "-race", "-short")

@@ -221,6 +221,7 @@ var tests = []test{
`type ExportedType struct`, // Type definition.
`Comment before exported field.*\n.*ExportedField +int` +
`.*Comment on line with exported field.`,
`ExportedEmbeddedType.*Comment on line with exported embedded field.`,
`Has unexported fields`,
`func \(ExportedType\) ExportedMethod\(a int\) bool`,
`const ExportedTypedConstant ExportedType = iota`, // Must include associated constant.
@@ -228,6 +229,7 @@ var tests = []test{
},
[]string{
`unexportedField`, // No unexported field.
`int.*embedded`, // No unexported embedded field.
`Comment about exported method.`, // No comment about exported method.
`unexportedMethod`, // No unexported method.
`unexportedTypedConstant`, // No unexported constant.
@@ -241,7 +243,11 @@ var tests = []test{
`Comment about exported type`, // Include comment.
`type ExportedType struct`, // Type definition.
`Comment before exported field.*\n.*ExportedField +int`,
`unexportedField int.*Comment on line with unexported field.`,
`unexportedField.*int.*Comment on line with unexported field.`,
`ExportedEmbeddedType.*Comment on line with exported embedded field.`,
`\*ExportedEmbeddedType.*Comment on line with exported embedded \*field.`,
`unexportedType.*Comment on line with unexported embedded field.`,
`\*unexportedType.*Comment on line with unexported embedded \*field.`,
`func \(ExportedType\) unexportedMethod\(a int\) bool`,
`unexportedTypedConstant`,
},
@@ -448,7 +454,6 @@ var trimTests = []trimTest{
{"", "", "", true},
{"/usr/gopher", "/usr/gopher", "/usr/gopher", true},
{"/usr/gopher/bar", "/usr/gopher", "bar", true},
{"/usr/gopher", "/usr/gopher", "/usr/gopher", true},
{"/usr/gopherflakes", "/usr/gopher", "/usr/gopherflakes", false},
{"/usr/gopher/bar", "/usr/zot", "/usr/gopher/bar", false},
}

@@ -487,9 +487,27 @@ func trimUnexportedFields(fields *ast.FieldList, what string) *ast.FieldList {
trimmed := false
list := make([]*ast.Field, 0, len(fields.List))
for _, field := range fields.List {
names := field.Names
if len(names) == 0 {
// Embedded type. Use the name of the type. It must be of type ident or *ident.
// Nothing else is allowed.
switch ident := field.Type.(type) {
case *ast.Ident:
names = []*ast.Ident{ident}
case *ast.StarExpr:
// Must have the form *identifier.
if ident, ok := ident.X.(*ast.Ident); ok {
names = []*ast.Ident{ident}
}
}
if names == nil {
// Can only happen if AST is incorrect. Safe to continue with a nil list.
log.Print("invalid program: unexpected type for embedded field")
}
}
// Trims if any is unexported. Good enough in practice.
ok := true
for _, name := range field.Names {
for _, name := range names {
if !isExported(name.Name) {
trimmed = true
ok = false

@@ -60,8 +60,12 @@ func internalFunc(a int) bool
// Comment about exported type.
type ExportedType struct {
// Comment before exported field.
ExportedField int // Comment on line with exported field.
unexportedField int // Comment on line with unexported field.
ExportedField int // Comment on line with exported field.
unexportedField int // Comment on line with unexported field.
ExportedEmbeddedType // Comment on line with exported embedded field.
*ExportedEmbeddedType // Comment on line with exported embedded *field.
unexportedType // Comment on line with unexported embedded field.
*unexportedType // Comment on line with unexported embedded *field.
}

// Comment about exported method.

@@ -1022,12 +1022,6 @@ Vendor directories do not affect the placement of new repositories
being checked out for the first time by 'go get': those are always
placed in the main GOPATH, never in a vendor subtree.

In Go 1.5, as an experiment, setting the environment variable
GO15VENDOREXPERIMENT=1 enabled these features.
As of Go 1.6 they are on by default. To turn them off, set
GO15VENDOREXPERIMENT=0. In Go 1.7, the environment
variable will stop having any effect.

See https://golang.org/s/go15vendor for details.

@@ -1094,8 +1088,6 @@ Special-purpose environment variables:
installed in a location other than where it is built.
File names in stack traces are rewritten from GOROOT to
GOROOT_FINAL.
GO15VENDOREXPERIMENT
Set to 0 to disable vendoring semantics.
GO_EXTLINK_ENABLED
Whether the linker should use external linking mode
when using -linkmode=auto with code that uses cgo.

@@ -667,6 +667,7 @@ var (
goarch string
goos string
exeSuffix string
gopath []string
)

func init() {
@@ -675,6 +676,7 @@ func init() {
if goos == "windows" {
exeSuffix = ".exe"
}
gopath = filepath.SplitList(buildContext.GOPATH)
}

// A builder holds global state about a build.
@@ -684,6 +686,7 @@ type builder struct {
work string // the temporary work directory (ends in filepath.Separator)
actionCache map[cacheKey]*action // a cache of already-constructed actions
mkdirCache map[string]bool // a cache of created directories
flagCache map[string]bool // a cache of supported compiler flags
print func(args ...interface{}) (int, error)

output sync.Mutex
@@ -1684,6 +1687,22 @@ func (b *builder) includeArgs(flag string, all []*action) []string {
inc = append(inc, flag, b.work)

// Finally, look in the installed package directories for each action.
// First add the package dirs corresponding to GOPATH entries
// in the original GOPATH order.
need := map[string]*build.Package{}
for _, a1 := range all {
if a1.p != nil && a1.pkgdir == a1.p.build.PkgRoot {
need[a1.p.build.Root] = a1.p.build
}
}
for _, root := range gopath {
if p := need[root]; p != nil && !incMap[p.PkgRoot] {
incMap[p.PkgRoot] = true
inc = append(inc, flag, p.PkgTargetRoot)
}
}

// Then add anything that's left.
for _, a1 := range all {
if a1.p == nil {
continue
@@ -2909,6 +2928,17 @@ func (b *builder) ccompilerCmd(envvar, defcmd, objdir string) []string {
// disable word wrapping in error messages
a = append(a, "-fmessage-length=0")

// Tell gcc not to include the work directory in object files.
if b.gccSupportsFlag("-fdebug-prefix-map=a=b") {
a = append(a, "-fdebug-prefix-map="+b.work+"=/tmp/go-build")
}

// Tell gcc not to include flags in object files, which defeats the
// point of -fdebug-prefix-map above.
if b.gccSupportsFlag("-gno-record-gcc-switches") {
a = append(a, "-gno-record-gcc-switches")
}

// On OS X, some of the compilers behave as if -fno-common
// is always set, and the Mach-O linker in 6l/8l assumes this.
// See https://golang.org/issue/3253.
@@ -2923,19 +2953,24 @@ func (b *builder) ccompilerCmd(envvar, defcmd, objdir string) []string {
// -no-pie must be passed when doing a partial link with -Wl,-r. But -no-pie is
// not supported by all compilers.
func (b *builder) gccSupportsNoPie() bool {
if goos != "linux" {
// On some BSD platforms, error messages from the
// compiler make it to the console despite cmd.Std*
// all being nil. As -no-pie is only required on linux
// systems so far, we only test there.
return false
return b.gccSupportsFlag("-no-pie")
}

// gccSupportsFlag checks to see if the compiler supports a flag.
func (b *builder) gccSupportsFlag(flag string) bool {
b.exec.Lock()
defer b.exec.Unlock()
if b, ok := b.flagCache[flag]; ok {
return b
}
src := filepath.Join(b.work, "trivial.c")
if err := ioutil.WriteFile(src, []byte{}, 0666); err != nil {
return false
if b.flagCache == nil {
src := filepath.Join(b.work, "trivial.c")
if err := ioutil.WriteFile(src, []byte{}, 0666); err != nil {
return false
}
b.flagCache = make(map[string]bool)
}
cmdArgs := b.gccCmd(b.work)
cmdArgs = append(cmdArgs, "-no-pie", "-c", "trivial.c")
cmdArgs := append(envList("CC", defaultCC), flag, "-c", "trivial.c")
if buildN || buildX {
b.showcmd(b.work, "%s", joinUnambiguously(cmdArgs))
if buildN {
@@ -2946,7 +2981,9 @@ func (b *builder) gccSupportsNoPie() bool {
cmd.Dir = b.work
cmd.Env = envForDir(cmd.Dir, os.Environ())
out, err := cmd.CombinedOutput()
return err == nil && !bytes.Contains(out, []byte("unrecognized"))
supported := err == nil && !bytes.Contains(out, []byte("unrecognized"))
b.flagCache[flag] = supported
return supported
}

// gccArchArgs returns arguments to pass to gcc based on the architecture.

@@ -33,11 +33,6 @@ func mkEnv() []envVar {
var b builder
b.init()

vendorExpValue := "0"
if go15VendorExperiment {
vendorExpValue = "1"
}

env := []envVar{
{"GOARCH", goarch},
{"GOBIN", gobin},
@@ -49,7 +44,6 @@ func mkEnv() []envVar {
{"GORACE", os.Getenv("GORACE")},
{"GOROOT", goroot},
{"GOTOOLDIR", toolDir},
{"GO15VENDOREXPERIMENT", vendorExpValue},

// disable escape codes in clang errors
{"TERM", "dumb"},

@@ -10,6 +10,7 @@ import (
"fmt"
"go/build"
"go/format"
"internal/race"
"internal/testenv"
"io"
"io/ioutil"
@@ -69,7 +70,11 @@ func TestMain(m *testing.M) {
flag.Parse()

if canRun {
out, err := exec.Command("go", "build", "-tags", "testgo", "-o", "testgo"+exeSuffix).CombinedOutput()
args := []string{"build", "-tags", "testgo", "-o", "testgo" + exeSuffix}
if race.Enabled {
args = append(args, "-race")
}
out, err := exec.Command("go", args...).CombinedOutput()
if err != nil {
fmt.Fprintf(os.Stderr, "building testgo failed: %v\n%s", err, out)
os.Exit(2)
@@ -1652,8 +1657,8 @@ func TestLdflagsArgumentsWithSpacesIssue3941(t *testing.T) {
func main() {
println(extern)
}`)
tg.run("run", "-ldflags", `-X main.extern "hello world"`, tg.path("main.go"))
tg.grepStderr("^hello world", `ldflags -X main.extern 'hello world' failed`)
tg.run("run", "-ldflags", `-X "main.extern=hello world"`, tg.path("main.go"))
tg.grepStderr("^hello world", `ldflags -X "main.extern=hello world"' failed`)
}

func TestGoTestCpuprofileLeavesBinaryBehind(t *testing.T) {
@@ -1721,7 +1726,6 @@ func TestSymlinksVendor(t *testing.T) {

tg := testgo(t)
defer tg.cleanup()
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.tempDir("gopath/src/dir1/vendor/v")
tg.tempFile("gopath/src/dir1/p.go", "package main\nimport _ `v`\nfunc main(){}")
tg.tempFile("gopath/src/dir1/vendor/v/v.go", "package v")
@@ -2333,7 +2337,7 @@ func TestGoGetHTTPS404(t *testing.T) {
tg.run("get", "bazil.org/fuse/fs/fstestutil")
}

// Test that you can not import a main package.
// Test that you cannot import a main package.
func TestIssue4210(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
@@ -2565,6 +2569,59 @@ func TestGoInstallShadowedGOPATH(t *testing.T) {
tg.grepStderr("no install location for.*gopath2.src.test: hidden by .*gopath1.src.test", "missing error")
}

func TestGoBuildGOPATHOrder(t *testing.T) {
// golang.org/issue/14176#issuecomment-179895769
// golang.org/issue/14192
// -I arguments to compiler could end up not in GOPATH order,
// leading to unexpected import resolution in the compiler.
// This is still not a complete fix (see golang.org/issue/14271 and next test)
// but it is clearly OK and enough to fix both of the two reported
// instances of the underlying problem. It will have to do for now.

tg := testgo(t)
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("p1")+string(filepath.ListSeparator)+tg.path("p2"))

tg.tempFile("p1/src/foo/foo.go", "package foo\n")
tg.tempFile("p2/src/baz/baz.go", "package baz\n")
tg.tempFile("p2/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/foo.a", "bad\n")
tg.tempFile("p1/src/bar/bar.go", `
package bar
import _ "baz"
import _ "foo"
`)

tg.run("install", "-x", "bar")
}

func TestGoBuildGOPATHOrderBroken(t *testing.T) {
// This test is known not to work.
// See golang.org/issue/14271.
t.Skip("golang.org/issue/14271")

tg := testgo(t)
defer tg.cleanup()
tg.makeTempdir()

tg.tempFile("p1/src/foo/foo.go", "package foo\n")
tg.tempFile("p2/src/baz/baz.go", "package baz\n")
tg.tempFile("p1/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/baz.a", "bad\n")
tg.tempFile("p2/pkg/"+runtime.GOOS+"_"+runtime.GOARCH+"/foo.a", "bad\n")
tg.tempFile("p1/src/bar/bar.go", `
package bar
import _ "baz"
import _ "foo"
`)

colon := string(filepath.ListSeparator)
tg.setenv("GOPATH", tg.path("p1")+colon+tg.path("p2"))
tg.run("install", "-x", "bar")

tg.setenv("GOPATH", tg.path("p2")+colon+tg.path("p1"))
tg.run("install", "-x", "bar")
}

func TestIssue11709(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
@@ -2682,3 +2739,49 @@ func TestIssue13655(t *testing.T) {
tg.grepStdout("runtime/internal/sys", "did not find required dependency of "+pkg+" on runtime/internal/sys")
}
}

// For issue 14337.
func TestParallelTest(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.makeTempdir()
const testSrc = `package package_test
import (
"testing"
)
func TestTest(t *testing.T) {
}`
tg.tempFile("src/p1/p1_test.go", strings.Replace(testSrc, "package_test", "p1_test", 1))
tg.tempFile("src/p2/p2_test.go", strings.Replace(testSrc, "package_test", "p2_test", 1))
tg.tempFile("src/p3/p3_test.go", strings.Replace(testSrc, "package_test", "p3_test", 1))
tg.tempFile("src/p4/p4_test.go", strings.Replace(testSrc, "package_test", "p4_test", 1))
tg.setenv("GOPATH", tg.path("."))
tg.run("test", "-p=4", "p1", "p2", "p3", "p4")
}

func TestCgoConsistentResults(t *testing.T) {
if !canCgo {
t.Skip("skipping because cgo not enabled")
}

tg := testgo(t)
defer tg.cleanup()
tg.parallel()
tg.makeTempdir()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
exe1 := tg.path("cgotest1" + exeSuffix)
exe2 := tg.path("cgotest2" + exeSuffix)
tg.run("build", "-o", exe1, "cgotest")
tg.run("build", "-x", "-o", exe2, "cgotest")
b1, err := ioutil.ReadFile(exe1)
tg.must(err)
b2, err := ioutil.ReadFile(exe2)
tg.must(err)

if !tg.doGrepMatch(`-fdebug-prefix-map=\$WORK`, &tg.stderr) {
t.Skip("skipping because C compiler does not support -fdebug-prefix-map")
}
if !bytes.Equal(b1, b2) {
t.Error("building cgotest twice did not produce the same output")
}
}

@@ -421,12 +421,6 @@ Vendor directories do not affect the placement of new repositories
being checked out for the first time by 'go get': those are always
placed in the main GOPATH, never in a vendor subtree.

In Go 1.5, as an experiment, setting the environment variable
GO15VENDOREXPERIMENT=1 enabled these features.
As of Go 1.6 they are on by default. To turn them off, set
GO15VENDOREXPERIMENT=0. In Go 1.7, the environment
variable will stop having any effect.

See https://golang.org/s/go15vendor for details.
`,
}
@@ -497,8 +491,6 @@ Special-purpose environment variables:
installed in a location other than where it is built.
File names in stack traces are rewritten from GOROOT to
GOROOT_FINAL.
GO15VENDOREXPERIMENT
Set to 0 to disable vendoring semantics.
GO_EXTLINK_ENABLED
Whether the linker should use external linking mode
when using -linkmode=auto with code that uses cgo.

@@ -454,7 +454,9 @@ func envForDir(dir string, base []string) []string {

// mergeEnvLists merges the two environment lists such that
// variables with the same name in "in" replace those in "out".
// This always returns a newly allocated slice.
func mergeEnvLists(in, out []string) []string {
out = append([]string(nil), out...)
NextVar:
for _, inkv := range in {
k := strings.SplitAfterN(inkv, "=", 2)[0]

@@ -263,15 +263,6 @@ func reloadPackage(arg string, stk *importStack) *Package {
return loadPackage(arg, stk)
}

// The Go 1.5 vendoring experiment was enabled by setting GO15VENDOREXPERIMENT=1.
// In Go 1.6 this is on by default and is disabled by setting GO15VENDOREXPERIMENT=0.
// In Go 1.7 the variable will stop having any effect.
// The variable is obnoxiously long so that years from now when people find it in
// their profiles and wonder what it does, there is some chance that a web search
// might answer the question.
// There is a copy of this variable in src/go/build/build.go. Delete that one when this one goes away.
var go15VendorExperiment = os.Getenv("GO15VENDOREXPERIMENT") != "0"

// dirToImportPath returns the pseudo-import path we use for a package
// outside the Go path. It begins with _/ and then contains the full path
// to the directory. If the package lives in c:\home\gopher\my\pkg then
@@ -361,7 +352,7 @@ func loadImport(path, srcDir string, parent *Package, stk *importStack, importPo
// TODO: After Go 1, decide when to pass build.AllowBinary here.
// See issue 3268 for mistakes to avoid.
buildMode := build.ImportComment
if !go15VendorExperiment || mode&useVendor == 0 || path != origPath {
if mode&useVendor == 0 || path != origPath {
// Not vendoring, or we already found the vendored path.
buildMode |= build.IgnoreVendor
}
@@ -371,7 +362,7 @@ func loadImport(path, srcDir string, parent *Package, stk *importStack, importPo
bp.BinDir = gobin
}
if err == nil && !isLocal && bp.ImportComment != "" && bp.ImportComment != path &&
(!go15VendorExperiment || (!strings.Contains(path, "/vendor/") && !strings.HasPrefix(path, "vendor/"))) {
!strings.Contains(path, "/vendor/") && !strings.HasPrefix(path, "vendor/") {
err = fmt.Errorf("code in directory %s expects import %q", bp.Dir, bp.ImportComment)
}
p.load(stk, bp, err)
@@ -412,7 +403,7 @@ func isDir(path string) bool {
// x/vendor/path, vendor/path, or else stay path if none of those exist.
// vendoredImportPath returns the expanded path or, if no expansion is found, the original.
func vendoredImportPath(parent *Package, path string) (found string) {
if parent == nil || parent.Root == "" || !go15VendorExperiment {
if parent == nil || parent.Root == "" {
return path
}

@@ -580,10 +571,6 @@ func findInternal(path string) (index int, ok bool) {
// If the import is allowed, disallowVendor returns the original package p.
// If not, it returns a new package containing just an appropriate error.
func disallowVendor(srcDir, path string, p *Package, stk *importStack) *Package {
if !go15VendorExperiment {
return p
}

// The stack includes p.ImportPath.
// If that's the only thing on the stack, we started
// with a name given on the command line, not an
@@ -967,7 +954,7 @@ func (p *Package) load(stk *importStack, bp *build.Package, err error) *Package
}
}
}
if p.Standard && !p1.Standard && p.Error == nil {
if p.Standard && p.Error == nil && !p1.Standard && p1.Error == nil {
p.Error = &PackageError{
ImportStack: stk.copy(),
Err: fmt.Sprintf("non-standard import %q in standard package %q", path, p.ImportPath),

@@ -383,7 +383,7 @@ func (v *vcsCmd) ping(scheme, repo string) error {
// The parent of dir must exist; dir must not.
func (v *vcsCmd) create(dir, repo string) error {
for _, cmd := range v.createCmd {
if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(".", cmd, "dir", dir, "repo", repo); err != nil {
@@ -396,7 +396,7 @@ func (v *vcsCmd) create(dir, repo string) error {
// download downloads any new changes for the repo in dir.
func (v *vcsCmd) download(dir string) error {
for _, cmd := range v.downloadCmd {
if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd); err != nil {
@@ -445,7 +445,7 @@ func (v *vcsCmd) tagSync(dir, tag string) error {

if tag == "" && v.tagSyncDefault != nil {
for _, cmd := range v.tagSyncDefault {
if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd); err != nil {
@@ -456,7 +456,7 @@ func (v *vcsCmd) tagSync(dir, tag string) error {
}

for _, cmd := range v.tagSyncCmd {
if !go15VendorExperiment && strings.Contains(cmd, "submodule") {
if strings.Contains(cmd, "submodule") {
continue
}
if err := v.run(dir, cmd, "tag", tag); err != nil {

@@ -20,7 +20,6 @@ func TestVendorImports(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("list", "-f", "{{.ImportPath}} {{.Imports}}", "vend/...")
want := `
vend [vend/vendor/p r]
@@ -51,7 +50,6 @@ func TestVendorBuild(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("build", "vend/x")
}

@@ -59,7 +57,6 @@ func TestVendorRun(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/src/vend/hello"))
tg.run("run", "hello.go")
tg.grepStdout("hello, world", "missing hello world output")
@@ -74,7 +71,6 @@ func TestVendorGOPATH(t *testing.T) {
}
gopath := changeVolume(filepath.Join(tg.pwd(), "testdata"), strings.ToLower)
tg.setenv("GOPATH", gopath)
tg.setenv("GO15VENDOREXPERIMENT", "1")
cd := changeVolume(filepath.Join(tg.pwd(), "testdata/src/vend/hello"), strings.ToUpper)
tg.cd(cd)
tg.run("run", "hello.go")
@@ -85,7 +81,6 @@ func TestVendorTest(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/src/vend/hello"))
tg.run("test", "-v")
tg.grepStdout("TestMsgInternal", "missing use in internal test")
@@ -96,7 +91,6 @@ func TestVendorInvalid(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")

tg.runFail("build", "vend/x/invalid")
tg.grepStderr("must be imported as foo", "missing vendor import error")
@@ -106,7 +100,6 @@ func TestVendorImportError(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata"))
tg.setenv("GO15VENDOREXPERIMENT", "1")

tg.runFail("build", "vend/x/vendor/p/p")

@@ -173,7 +166,6 @@ func TestVendorGet(t *testing.T) {
package p
const C = 1`)
tg.setenv("GOPATH", tg.path("."))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(tg.path("src/v"))
tg.run("run", "m.go")
tg.run("test")
@@ -192,7 +184,6 @@ func TestVendorGetUpdate(t *testing.T) {
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")
tg.run("get", "-u", "github.com/rsc/go-get-issue-11864")
}
@@ -204,7 +195,6 @@ func TestGetSubmodules(t *testing.T) {
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "-d", "github.com/rsc/go-get-issue-12612")
tg.run("get", "-u", "-d", "github.com/rsc/go-get-issue-12612")
}
@@ -213,7 +203,6 @@ func TestVendorCache(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata/testvendor"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.runFail("build", "p")
tg.grepStderr("must be imported as x", "did not fail to build p")
}
@@ -225,7 +214,6 @@ func TestVendorTest2(t *testing.T) {
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")

// build -i should work
@@ -251,7 +239,6 @@ func TestVendorList(t *testing.T) {
defer tg.cleanup()
tg.makeTempdir()
tg.setenv("GOPATH", tg.path("."))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.run("get", "github.com/rsc/go-get-issue-11864")

tg.run("list", "-f", `{{join .TestImports "\n"}}`, "github.com/rsc/go-get-issue-11864/t")
@@ -272,7 +259,6 @@ func TestVendor12156(t *testing.T) {
tg := testgo(t)
defer tg.cleanup()
tg.setenv("GOPATH", filepath.Join(tg.pwd(), "testdata/testvendor2"))
tg.setenv("GO15VENDOREXPERIMENT", "1")
tg.cd(filepath.Join(tg.pwd(), "testdata/testvendor2/src/p"))
tg.runFail("build", "p.go")
tg.grepStderrNot("panic", "panicked")

@@ -143,7 +143,9 @@ func visitFile(path string, f os.FileInfo, err error) error {
if err == nil && isGoFile(f) {
err = processFile(path, nil, os.Stdout, false)
}
if err != nil {
// Don't complain if a file was deleted in the meantime (i.e.
// the directory changed concurrently while running gofmt).
if err != nil && !os.IsNotExist(err) {
report(err)
}
return nil

@@ -60,7 +60,7 @@ func progedit(ctxt *obj.Link, p *obj.Prog) {
// Treat MRC 15, 0, <reg>, C13, C0, 3 specially.
case AMRC:
if p.To.Offset&0xffff0fff == 0xee1d0f70 {
// Because the instruction might be rewriten to a BL which returns in R0
// Because the instruction might be rewritten to a BL which returns in R0
// the register must be zero.
if p.To.Offset&0xf000 != 0 {
ctxt.Diag("%v: TLS MRC instruction must write to R0 as it might get translated into a BL instruction", p.Line())

@@ -290,3 +290,21 @@ func linkgetline(ctxt *Link, lineno int32, f **LSym, l *int32) {
func Linkprfile(ctxt *Link, line int) {
fmt.Printf("%s ", ctxt.LineHist.LineString(line))
}

func fieldtrack(ctxt *Link, cursym *LSym) {
p := cursym.Text
if p == nil || p.Link == nil { // handle external functions and ELF section symbols
return
}
ctxt.Cursym = cursym

for ; p != nil; p = p.Link {
if p.As == AUSEFIELD {
r := Addrel(ctxt.Cursym)
r.Off = 0
r.Siz = 0
r.Sym = p.From.Sym
r.Type = R_USEFIELD
}
}
}

@@ -298,6 +298,7 @@ func Flushplist(ctxt *Link) {
 		ctxt.Arch.Follow(ctxt, s)
 		ctxt.Arch.Preprocess(ctxt, s)
 		ctxt.Arch.Assemble(ctxt, s)
+		fieldtrack(ctxt, s)
 		linkpcln(ctxt, s)
 	}
@@ -295,8 +295,6 @@ const (
	AFMOVX
	AFMOVXP

	AFCOMB
	AFCOMBP
	AFCOMD
	AFCOMDP
	AFCOMDPP

@@ -626,14 +624,7 @@ const (
	APADDUSW
	APADDW
	APAND
	APANDB
	APANDL
	APANDN
	APANDSB
	APANDSW
	APANDUSB
	APANDUSW
	APANDW
	APAVGB
	APAVGW
	APCMPEQB

@@ -650,23 +641,6 @@ const (
	APEXTRD
	APEXTRQ
	APEXTRW
	APFACC
	APFADD
	APFCMPEQ
	APFCMPGE
	APFCMPGT
	APFMAX
	APFMIN
	APFMUL
	APFNACC
	APFPNACC
	APFRCP
	APFRCPI2T
	APFRCPIT1
	APFRSQIT1
	APFRSQRT
	APFSUB
	APFSUBR
	APHADDD
	APHADDSW
	APHADDW

@@ -697,7 +671,6 @@ const (
	APMOVZXWD
	APMOVZXWQ
	APMULDQ
	APMULHRW
	APMULHUW
	APMULHW
	APMULLD

@@ -728,7 +701,6 @@ const (
	APSUBUSB
	APSUBUSW
	APSUBW
	APSWAPL
	APUNPCKHBW
	APUNPCKHLQ
	APUNPCKHQDQ

@@ -767,11 +739,6 @@ const (
	AUNPCKLPS
	AXORPD
	AXORPS

	APF2IW
	APF2IL
	API2FW
	API2FL
	ARETFW
	ARETFL
	ARETFQ
@@ -255,8 +255,6 @@ var Anames = []string{
	"FMOVWP",
	"FMOVX",
	"FMOVXP",
	"FCOMB",
	"FCOMBP",
	"FCOMD",
	"FCOMDP",
	"FCOMDPP",

@@ -569,14 +567,7 @@ var Anames = []string{
	"PADDUSW",
	"PADDW",
	"PAND",
	"PANDB",
	"PANDL",
	"PANDN",
	"PANDSB",
	"PANDSW",
	"PANDUSB",
	"PANDUSW",
	"PANDW",
	"PAVGB",
	"PAVGW",
	"PCMPEQB",

@@ -593,23 +584,6 @@ var Anames = []string{
	"PEXTRD",
	"PEXTRQ",
	"PEXTRW",
	"PFACC",
	"PFADD",
	"PFCMPEQ",
	"PFCMPGE",
	"PFCMPGT",
	"PFMAX",
	"PFMIN",
	"PFMUL",
	"PFNACC",
	"PFPNACC",
	"PFRCP",
	"PFRCPI2T",
	"PFRCPIT1",
	"PFRSQIT1",
	"PFRSQRT",
	"PFSUB",
	"PFSUBR",
	"PHADDD",
	"PHADDSW",
	"PHADDW",

@@ -640,7 +614,6 @@ var Anames = []string{
	"PMOVZXWD",
	"PMOVZXWQ",
	"PMULDQ",
	"PMULHRW",
	"PMULHUW",
	"PMULHW",
	"PMULLD",

@@ -671,7 +644,6 @@ var Anames = []string{
	"PSUBUSB",
	"PSUBUSW",
	"PSUBW",
	"PSWAPL",
	"PUNPCKHBW",
	"PUNPCKHLQ",
	"PUNPCKHQDQ",

@@ -710,10 +682,6 @@ var Anames = []string{
	"UNPCKLPS",
	"XORPD",
	"XORPS",
	"PF2IW",
	"PF2IL",
	"PI2FW",
	"PI2FL",
	"RETFW",
	"RETFL",
	"RETFQ",
@@ -184,7 +184,6 @@ const (
	Zm2_r
	Zm_r_xm
	Zm_r_i_xm
	Zm_r_3d
	Zm_r_xm_nr
	Zr_m_xm_nr
	Zibm_r /* mmx1,mmx2/mem64,imm8 */

@@ -753,10 +752,6 @@ var yxrrl = []ytab{
	{Yxr, Ynone, Yrl, Zm_r, 1},
}

var ymfp = []ytab{
	{Ymm, Ynone, Ymr, Zm_r_3d, 1},
}

var ymrxr = []ytab{
	{Ymr, Ynone, Yxr, Zm_r, 1},
	{Yxm, Ynone, Yxr, Zm_r_xm, 1},

@@ -1085,7 +1080,6 @@ var optab =
	{ACVTPD2PS, yxm, Pe, [23]uint8{0x5a}},
	{ACVTPS2PL, yxcvm1, Px, [23]uint8{Pe, 0x5b, Pm, 0x2d}},
	{ACVTPS2PD, yxm, Pm, [23]uint8{0x5a}},
	{API2FW, ymfp, Px, [23]uint8{0x0c}},
	{ACVTSD2SL, yxcvfl, Pf2, [23]uint8{0x2d}},
	{ACVTSD2SQ, yxcvfq, Pw, [23]uint8{Pf2, 0x2d}},
	{ACVTSD2SS, yxm, Pf2, [23]uint8{0x5a}},

@@ -1303,26 +1297,6 @@ var optab =
	{APEXTRB, yextr, Pq, [23]uint8{0x3a, 0x14, 00}},
	{APEXTRD, yextr, Pq, [23]uint8{0x3a, 0x16, 00}},
	{APEXTRQ, yextr, Pq3, [23]uint8{0x3a, 0x16, 00}},
	{APF2IL, ymfp, Px, [23]uint8{0x1d}},
	{APF2IW, ymfp, Px, [23]uint8{0x1c}},
	{API2FL, ymfp, Px, [23]uint8{0x0d}},
	{APFACC, ymfp, Px, [23]uint8{0xae}},
	{APFADD, ymfp, Px, [23]uint8{0x9e}},
	{APFCMPEQ, ymfp, Px, [23]uint8{0xb0}},
	{APFCMPGE, ymfp, Px, [23]uint8{0x90}},
	{APFCMPGT, ymfp, Px, [23]uint8{0xa0}},
	{APFMAX, ymfp, Px, [23]uint8{0xa4}},
	{APFMIN, ymfp, Px, [23]uint8{0x94}},
	{APFMUL, ymfp, Px, [23]uint8{0xb4}},
	{APFNACC, ymfp, Px, [23]uint8{0x8a}},
	{APFPNACC, ymfp, Px, [23]uint8{0x8e}},
	{APFRCP, ymfp, Px, [23]uint8{0x96}},
	{APFRCPIT1, ymfp, Px, [23]uint8{0xa6}},
	{APFRCPI2T, ymfp, Px, [23]uint8{0xb6}},
	{APFRSQIT1, ymfp, Px, [23]uint8{0xa7}},
	{APFRSQRT, ymfp, Px, [23]uint8{0x97}},
	{APFSUB, ymfp, Px, [23]uint8{0x9a}},
	{APFSUBR, ymfp, Px, [23]uint8{0xaa}},
	{APHADDD, ymmxmm0f38, Px, [23]uint8{0x0F, 0x38, 0x02, 0, 0x66, 0x0F, 0x38, 0x02, 0}},
	{APHADDSW, yxm_q4, Pq4, [23]uint8{0x03}},
	{APHADDW, yxm_q4, Pq4, [23]uint8{0x01}},

@@ -1353,7 +1327,6 @@ var optab =
	{APMOVZXWD, yxm_q4, Pq4, [23]uint8{0x33}},
	{APMOVZXWQ, yxm_q4, Pq4, [23]uint8{0x34}},
	{APMULDQ, yxm_q4, Pq4, [23]uint8{0x28}},
	{APMULHRW, ymfp, Px, [23]uint8{0xb7}},
	{APMULHUW, ymm, Py1, [23]uint8{0xe4, Pe, 0xe4}},
	{APMULHW, ymm, Py1, [23]uint8{0xe5, Pe, 0xe5}},
	{APMULLD, yxm_q4, Pq4, [23]uint8{0x40}},

@@ -1395,7 +1368,6 @@ var optab =
	{APSUBUSB, yxm, Pe, [23]uint8{0xd8}},
	{APSUBUSW, yxm, Pe, [23]uint8{0xd9}},
	{APSUBW, yxm, Pe, [23]uint8{0xf9}},
	{APSWAPL, ymfp, Px, [23]uint8{0xbb}},
	{APUNPCKHBW, ymm, Py1, [23]uint8{0x68, Pe, 0x68}},
	{APUNPCKHLQ, ymm, Py1, [23]uint8{0x6a, Pe, 0x6a}},
	{APUNPCKHQDQ, yxm, Pe, [23]uint8{0x6d}},

@@ -1553,8 +1525,6 @@ var optab =
	{AFCMOVNE, yfcmv, Px, [23]uint8{0xdb, 01}},
	{AFCMOVNU, yfcmv, Px, [23]uint8{0xdb, 03}},
	{AFCMOVUN, yfcmv, Px, [23]uint8{0xda, 03}},
	{AFCOMB, nil, 0, [23]uint8{}},
	{AFCOMBP, nil, 0, [23]uint8{}},
	{AFCOMD, yfadd, Px, [23]uint8{0xdc, 02, 0xd8, 02, 0xdc, 02}}, /* botch */
	{AFCOMDP, yfadd, Px, [23]uint8{0xdc, 03, 0xd8, 03, 0xdc, 03}}, /* botch */
	{AFCOMDPP, ycompp, Px, [23]uint8{0xde, 03}},

@@ -3556,15 +3526,6 @@ func doasm(ctxt *obj.Link, p *obj.Prog) {
		ctxt.Andptr[0] = byte(p.To.Offset)
		ctxt.Andptr = ctxt.Andptr[1:]

	case Zm_r_3d:
		ctxt.Andptr[0] = 0x0f
		ctxt.Andptr = ctxt.Andptr[1:]
		ctxt.Andptr[0] = 0x0f
		ctxt.Andptr = ctxt.Andptr[1:]
		asmand(ctxt, p, &p.From, &p.To)
		ctxt.Andptr[0] = byte(op)
		ctxt.Andptr = ctxt.Andptr[1:]

	case Zibm_r, Zibr_m:
		for {
			tmp1 := z

@@ -4618,15 +4579,6 @@ func asmins(ctxt *obj.Link, p *obj.Prog) {
	ctxt.Andptr = ctxt.And[:]
	ctxt.Asmode = int(p.Mode)

	if p.As == obj.AUSEFIELD {
		r := obj.Addrel(ctxt.Cursym)
		r.Off = 0
		r.Siz = 0
		r.Sym = p.From.Sym
		r.Type = obj.R_USEFIELD
		return
	}

	if ctxt.Headtype == obj.Hnacl && p.Mode == 32 {
		switch p.As {
		case obj.ARET:
@@ -15,8 +15,8 @@ import (
 	"strings"
 	"text/tabwriter"
 
-	"golang.org/x/arch/arm/armasm"
-	"golang.org/x/arch/x86/x86asm"
+	"cmd/internal/unvendor/golang.org/x/arch/arm/armasm"
+	"cmd/internal/unvendor/golang.org/x/arch/x86/x86asm"
 )
 
 // Disasm is a disassembler for a given File.
Some files were not shown because too many files have changed in this diff.