mirror of https://github.com/golang/go.git

commit d2286ea284

[dev.ssa] Merge remote-tracking branch 'origin/master' into mergebranch

Semi-regular merge from tip into dev.ssa.

Change-Id: Iadb60e594ef65a99c0e1404b14205fa67c32a9e9

doc/asm.html | 58
@@ -780,6 +780,64 @@ mode as on the x86, but the only scale allowed is <code>1</code>.
 </ul>
 
+<h3 id="s390x">IBM z/Architecture, a.k.a. s390x</h3>
+
+<p>
+The registers <code>R10</code> and <code>R11</code> are reserved.
+The assembler uses them to hold temporary values when assembling some instructions.
+</p>
+
+<p>
+<code>R13</code> points to the <code>g</code> (goroutine) structure.
+This register must be referred to as <code>g</code>; the name <code>R13</code> is not recognized.
+</p>
+
+<p>
+<code>R15</code> points to the stack frame and should typically only be accessed using the
+virtual registers <code>SP</code> and <code>FP</code>.
+</p>
+
+<p>
+Load- and store-multiple instructions operate on a range of registers.
+The range of registers is specified by a start register and an end register.
+For example, <code>LMG</code> <code>(R9),</code> <code>R5,</code> <code>R7</code> would load
+<code>R5</code>, <code>R6</code> and <code>R7</code> with the 64-bit values at
+<code>0(R9)</code>, <code>8(R9)</code> and <code>16(R9)</code> respectively.
+</p>
+
+<p>
+Storage-and-storage instructions such as <code>MVC</code> and <code>XC</code> are written
+with the length as the first argument.
+For example, <code>XC</code> <code>$8,</code> <code>(R9),</code> <code>(R9)</code> would clear
+eight bytes at the address specified in <code>R9</code>.
+</p>
+
+<p>
+If a vector instruction takes a length or an index as an argument, then it will be the
+first argument.
+For example, <code>VLEIF</code> <code>$1,</code> <code>$16,</code> <code>V2</code> will load
+the value sixteen into index one of <code>V2</code>.
+Care should be taken when using vector instructions to ensure that they are available at
+runtime.
+To use vector instructions a machine must have both the vector facility (bit 129 in the
+facility list) and kernel support.
+Without kernel support a vector instruction will have no effect (it will be equivalent
+to a <code>NOP</code> instruction).
+</p>
+
+<p>
+Addressing modes:
+</p>
+
+<ul>
+
+<li>
+<code>(R5)(R6*1)</code>: The location at <code>R5</code> plus <code>R6</code>.
+It is a scaled mode as on the x86, but the only scale allowed is <code>1</code>.
+</li>
+
+</ul>
+
 <h3 id="unsupported_opcodes">Unsupported opcodes</h3>
 
 <p>
@@ -53,6 +53,14 @@ See the <a href="https://github.com/golang/go/issues?q=milestone%3AGo1.6.2">Go
 1.6.2 milestone</a> on our issue tracker for details.
 </p>
 
+<p>
+go1.6.3 (released 2016/07/17) includes security fixes to the
+<code>net/http/cgi</code> package and <code>net/http</code> package when used in
+a CGI environment. This release also adds support for macOS Sierra.
+See the <a href="https://github.com/golang/go/issues?q=milestone%3AGo1.6.3">Go
+1.6.3 milestone</a> on our issue tracker for details.
+</p>
+
 <h2 id="go1.5">go1.5 (released 2015/08/19)</h2>
 
 <p>
@@ -74,6 +74,13 @@ This change has no effect on the correctness of existing programs.
 
 <h2 id="ports">Ports</h2>
 
 <p>
+Go 1.7 adds support for macOS 10.12 Sierra.
+This support was backported to Go 1.6.3.
+Binaries built with versions of Go before 1.6.3 will not work
+correctly on Sierra.
+</p>
+
+<p>
 Go 1.7 adds an experimental port to <a href="https://en.wikipedia.org/wiki/Linux_on_z_Systems">Linux on z Systems</a> (<code>linux/s390x</code>)
 and the beginning of a port to Plan 9 on ARM (<code>plan9/arm</code>).
@@ -85,14 +92,27 @@ added in Go 1.6 now have full support for cgo and external linking.
 </p>
 
 <p>
-The experimental port to Linux on big-endian 64-bit PowerPC (<code>linux/ppc64</code>)
+The experimental port to Linux on little-endian 64-bit PowerPC (<code>linux/ppc64le</code>)
 now requires the POWER8 architecture or later.
+Big-endian 64-bit PowerPC (<code>linux/ppc64</code>) only requires the
+POWER5 architecture.
 </p>
 
+<p>
+The OpenBSD port now requires OpenBSD 5.6 or later, for access to the <a href="http://man.openbsd.org/getentropy.2"><i>getentropy</i>(2)</a> system call.
+</p>
+
+<h3 id="known_issues">Known Issues</h3>
+
+<p>
+There are some instabilities on FreeBSD that are known but not understood.
+These can lead to program crashes in rare cases.
+See <a href="https://golang.org/issue/16136">issue 16136</a>,
+<a href="https://golang.org/issue/15658">issue 15658</a>,
+and <a href="https://golang.org/issue/16396">issue 16396</a>.
+Any help in solving these FreeBSD-specific issues would be appreciated.
+</p>
+
 <h2 id="tools">Tools</h2>
 
 <h3 id="cmd_asm">Assembler</h3>
@@ -367,6 +387,12 @@ and
 packages.
 </p>
 
+<p>
+Garbage collection pauses should be significantly shorter than they
+were in Go 1.6 for programs with large numbers of idle goroutines,
+substantial stack size fluctuation, or large package-level variables.
+</p>
+
 <h2 id="library">Core library</h2>
 
 <h3 id="context">Context</h3>
@@ -462,6 +488,13 @@ eliminating the
 common in some environments.
 </p>
 
+<p>
+The runtime can now return unused memory to the operating system on
+all architectures.
+In Go 1.6 and earlier, the runtime could not
+release memory on ARM64, 64-bit PowerPC, or MIPS.
+</p>
+
 <p>
 On Windows, Go programs in Go 1.5 and earlier forced
 the global Windows timer resolution to 1ms at startup
@@ -883,6 +916,12 @@ For example, the address on which a request received is
 <code>req.Context().Value(http.LocalAddrContextKey).(net.Addr)</code>.
 </p>
 
+<p>
+The server's <a href="/pkg/net/http/#Server.Serve"><code>Serve</code></a> method
+now only enables HTTP/2 support if the <code>Server.TLSConfig</code> field is <code>nil</code>
+or includes <code>"h2"</code> in its <code>TLSConfig.NextProtos</code>.
+</p>
+
 <p>
 The server implementation now
 pads response codes less than 100 to three digits
@@ -5281,7 +5281,7 @@ if(traces.length&&!this.hasEventDataDecoder_(importers)){throw new Error('Could
importers.sort(function(x,y){return x.importPriority-y.importPriority;});},this);lastTask=lastTask.timedAfter('TraceImport',function importClockSyncMarkers(task){importers.forEach(function(importer,index){task.subTask(Timing.wrapNamedFunction('TraceImport',importer.importerName,function runImportClockSyncMarkersOnOneImporter(){progressMeter.update('Importing clock sync markers '+(index+1)+' of '+
importers.length);importer.importClockSyncMarkers();}),this);},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runImport(task){importers.forEach(function(importer,index){task.subTask(Timing.wrapNamedFunction('TraceImport',importer.importerName,function runImportEventsOnOneImporter(){progressMeter.update('Importing '+(index+1)+' of '+importers.length);importer.importEvents();}),this);},this);},this);if(this.importOptions_.customizeModelCallback){lastTask=lastTask.timedAfter('TraceImport',function runCustomizeCallbacks(task){this.importOptions_.customizeModelCallback(this.model_);},this);}
lastTask=lastTask.timedAfter('TraceImport',function importSampleData(task){importers.forEach(function(importer,index){progressMeter.update('Importing sample data '+(index+1)+'/'+importers.length);importer.importSampleData();},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runAutoclosers(){progressMeter.update('Autoclosing open slices...');this.model_.autoCloseOpenSlices();this.model_.createSubSlices();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeImport(task){importers.forEach(function(importer,index){progressMeter.update('Finalizing import '+(index+1)+'/'+importers.length);importer.finalizeImport();},this);},this);lastTask=lastTask.timedAfter('TraceImport',function runPreinits(){progressMeter.update('Initializing objects (step 1/2)...');this.model_.preInitializeObjects();},this);if(this.importOptions_.pruneEmptyContainers){lastTask=lastTask.timedAfter('TraceImport',function runPruneEmptyContainers(){progressMeter.update('Pruning empty containers...');this.model_.pruneEmptyContainers();},this);}
lastTask=lastTask.timedAfter('TraceImport',function runMergeKernelWithuserland(){progressMeter.update('Merging kernel with userland...');this.model_.mergeKernelWithUserland();},this);var auditors=[];lastTask=lastTask.timedAfter('TraceImport',function createAuditorsAndRunAnnotate(){progressMeter.update('Adding arbitrary data to model...');auditors=this.importOptions_.auditorConstructors.map(function(auditorConstructor){return new auditorConstructor(this.model_);},this);auditors.forEach(function(auditor){auditor.runAnnotate();auditor.installUserFriendlyCategoryDriverIfNeeded();});},this);lastTask=lastTask.timedAfter('TraceImport',function computeWorldBounds(){progressMeter.update('Computing final world bounds...');this.model_.computeWorldBounds(this.importOptions_.shiftWorldToZero);},this);lastTask=lastTask.timedAfter('TraceImport',function buildFlowEventIntervalTree(){progressMeter.update('Building flow event map...');this.model_.buildFlowEventIntervalTree();},this);lastTask=lastTask.timedAfter('TraceImport',function joinRefs(){progressMeter.update('Joining object refs...');this.model_.joinRefs();},this);lastTask=lastTask.timedAfter('TraceImport',function cleanupUndeletedObjects(){progressMeter.update('Cleaning up undeleted objects...');this.model_.cleanupUndeletedObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function sortMemoryDumps(){progressMeter.update('Sorting memory dumps...');this.model_.sortMemoryDumps();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeMemoryGraphs(){progressMeter.update('Finalizing memory dump graphs...');this.model_.finalizeMemoryGraphs();},this);lastTask=lastTask.timedAfter('TraceImport',function initializeObjects(){progressMeter.update('Initializing objects (step 2/2)...');this.model_.initializeObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function buildEventIndices(){progressMeter.update('Building event 
indices...');this.model_.buildEventIndices();},this);lastTask=lastTask.timedAfter('TraceImport',function buildUserModel(){progressMeter.update('Building UserModel...');var userModelBuilder=new tr.importer.UserModelBuilder(this.model_);userModelBuilder.buildUserModel();},this);lastTask=lastTask.timedAfter('TraceImport',function sortExpectations(){progressMeter.update('Sorting user expectations...');this.model_.userModel.sortExpectations();},this);lastTask=lastTask.timedAfter('TraceImport',function runAudits(){progressMeter.update('Running auditors...');auditors.forEach(function(auditor){auditor.runAudit();});},this);lastTask=lastTask.timedAfter('TraceImport',function sortAlerts(){progressMeter.update('Updating alerts...');this.model_.sortAlerts();},this);lastTask=lastTask.timedAfter('TraceImport',function lastUpdateBounds(){progressMeter.update('Update bounds...');this.model_.updateBounds();},this);lastTask=lastTask.timedAfter('TraceImport',function addModelWarnings(){progressMeter.update('Looking for warnings...');if(!this.model_.isTimeHighResolution){this.model_.importWarning({type:'low_resolution_timer',message:'Trace time is low resolution, trace may be unusable.',showToUser:true});}},this);lastTask.after(function(){this.importing_=false;},this);return importTask;},createImporter_:function(eventData){var importerConstructor=tr.importer.Importer.findImporterFor(eventData);if(!importerConstructor){throw new Error('Couldn\'t create an importer for the provided '+'eventData.');}
lastTask=lastTask.timedAfter('TraceImport',function runMergeKernelWithuserland(){progressMeter.update('Merging kernel with userland...');this.model_.mergeKernelWithUserland();},this);var auditors=[];lastTask=lastTask.timedAfter('TraceImport',function createAuditorsAndRunAnnotate(){progressMeter.update('Adding arbitrary data to model...');auditors=this.importOptions_.auditorConstructors.map(function(auditorConstructor){return new auditorConstructor(this.model_);},this);auditors.forEach(function(auditor){auditor.runAnnotate();auditor.installUserFriendlyCategoryDriverIfNeeded();});},this);lastTask=lastTask.timedAfter('TraceImport',function computeWorldBounds(){progressMeter.update('Computing final world bounds...');this.model_.computeWorldBounds(this.importOptions_.shiftWorldToZero);},this);lastTask=lastTask.timedAfter('TraceImport',function buildFlowEventIntervalTree(){progressMeter.update('Building flow event map...');this.model_.buildFlowEventIntervalTree();},this);lastTask=lastTask.timedAfter('TraceImport',function joinRefs(){progressMeter.update('Joining object refs...');this.model_.joinRefs();},this);lastTask=lastTask.timedAfter('TraceImport',function cleanupUndeletedObjects(){progressMeter.update('Cleaning up undeleted objects...');this.model_.cleanupUndeletedObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function sortMemoryDumps(){progressMeter.update('Sorting memory dumps...');this.model_.sortMemoryDumps();},this);lastTask=lastTask.timedAfter('TraceImport',function finalizeMemoryGraphs(){progressMeter.update('Finalizing memory dump graphs...');this.model_.finalizeMemoryGraphs();},this);lastTask=lastTask.timedAfter('TraceImport',function initializeObjects(){progressMeter.update('Initializing objects (step 2/2)...');this.model_.initializeObjects();},this);lastTask=lastTask.timedAfter('TraceImport',function buildEventIndices(){progressMeter.update('Building event 
indices...');this.model_.buildEventIndices();},this);lastTask=lastTask.timedAfter('TraceImport',function buildUserModel(){progressMeter.update('Building UserModel...');var userModelBuilder=new tr.importer.UserModelBuilder(this.model_);userModelBuilder.buildUserModel();},this);lastTask=lastTask.timedAfter('TraceImport',function sortExpectations(){progressMeter.update('Sorting user expectations...');this.model_.userModel.sortExpectations();},this);lastTask=lastTask.timedAfter('TraceImport',function runAudits(){progressMeter.update('Running auditors...');auditors.forEach(function(auditor){auditor.runAudit();});},this);lastTask=lastTask.timedAfter('TraceImport',function sortAlerts(){progressMeter.update('Updating alerts...');this.model_.sortAlerts();},this);lastTask=lastTask.timedAfter('TraceImport',function lastUpdateBounds(){progressMeter.update('Update bounds...');this.model_.updateBounds();},this);lastTask=lastTask.timedAfter('TraceImport',function addModelWarnings(){progressMeter.update('Looking for warnings...');if(!this.model_.isTimeHighResolution){this.model_.importWarning({type:'low_resolution_timer',message:'Trace time is low resolution, trace may be unusable.',showToUser:false});}},this);lastTask.after(function(){this.importing_=false;},this);return importTask;},createImporter_:function(eventData){var importerConstructor=tr.importer.Importer.findImporterFor(eventData);if(!importerConstructor){throw new Error('Couldn\'t create an importer for the provided '+'eventData.');}
return new importerConstructor(this.model_,eventData);},hasEventDataDecoder_:function(importers){for(var i=0;i<importers.length;++i){if(!importers[i].isTraceDataContainer())
return true;}
return false;}};return{ImportOptions:ImportOptions,Import:Import};});'use strict';tr.exportTo('tr.e.cc',function(){function PictureAsImageData(picture,errorOrImageData){this.picture_=picture;if(errorOrImageData instanceof ImageData){this.error_=undefined;this.imageData_=errorOrImageData;}else{this.error_=errorOrImageData;this.imageData_=undefined;}};PictureAsImageData.Pending=function(picture){return new PictureAsImageData(picture,undefined);};PictureAsImageData.prototype={get picture(){return this.picture_;},get error(){return this.error_;},get imageData(){return this.imageData_;},isPending:function(){return this.error_===undefined&&this.imageData_===undefined;},asCanvas:function(){if(!this.imageData_)
@@ -425,7 +425,7 @@ func (w *Walker) Import(name string) (*types.Package, error) {
 	w.imported[name] = &importing
 
 	root := w.root
-	if strings.HasPrefix(name, "golang.org/x/") {
+	if strings.HasPrefix(name, "golang_org/x/") {
 		root = filepath.Join(root, "vendor")
 	}
 
@@ -156,6 +156,36 @@ func opregreg(op obj.As, dest, src int16) *obj.Prog {
 	return p
 }
 
+// DUFFZERO consists of repeated blocks of 4 MOVUPSs + ADD,
+// See runtime/mkduff.go.
+func duffStart(size int64) int64 {
+	x, _ := duff(size)
+	return x
+}
+func duffAdj(size int64) int64 {
+	_, x := duff(size)
+	return x
+}
+
+// duff returns the offset (from duffzero, in bytes) and pointer adjust (in bytes)
+// required to use the duffzero mechanism for a block of the given size.
+func duff(size int64) (int64, int64) {
+	if size < 32 || size > 1024 || size%dzClearStep != 0 {
+		panic("bad duffzero size")
+	}
+	steps := size / dzClearStep
+	blocks := steps / dzBlockLen
+	steps %= dzBlockLen
+	off := dzBlockSize * (dzBlocks - blocks)
+	var adj int64
+	if steps != 0 {
+		off -= dzAddSize
+		off -= dzMovSize * steps
+		adj -= dzClearStep * (dzBlockLen - steps)
+	}
+	return off, adj
+}
+
 func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
 	s.SetLineno(v.Line)
 	switch v.Op {
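The duff helpers added above compute where to jump into the runtime's duffzero body and how to pre-adjust the pointer for a partial block. A self-contained sketch (the constants mirror the ones defined elsewhere in this diff; standalone `main` is for illustration only):

```go
package main

import "fmt"

// Constants describing the AMD64 duffzero body: 16 blocks, each with
// 4 MOVUPS instructions (16 bytes cleared, 4 bytes of code each) plus
// one 4-byte ADD, for 19 bytes of code per block.
const (
	dzBlocks    = 16 // number of MOV/ADD blocks
	dzBlockLen  = 4  // number of clears per block
	dzBlockSize = 19 // size of instructions in a single block
	dzMovSize   = 4  // size of single MOV instruction w/ offset
	dzAddSize   = 4  // size of single ADD instruction
	dzClearStep = 16 // number of bytes cleared by each MOV instruction
)

// duff mirrors the helper in the diff: it returns the byte offset into
// duffzero to start at, and the pointer adjustment, for clearing size bytes.
func duff(size int64) (int64, int64) {
	if size < 32 || size > 1024 || size%dzClearStep != 0 {
		panic("bad duffzero size")
	}
	steps := size / dzClearStep
	blocks := steps / dzBlockLen
	steps %= dzBlockLen
	off := dzBlockSize * (dzBlocks - blocks)
	var adj int64
	if steps != 0 {
		off -= dzAddSize
		off -= dzMovSize * steps
		adj -= dzClearStep * (dzBlockLen - steps)
	}
	return off, adj
}

func main() {
	// Clearing 64 bytes needs exactly one 4-MOVUPS block, so skip the
	// first 15 of 16 blocks (15*19 = 285 bytes of code), no adjustment.
	off, adj := duff(64)
	fmt.Println(off, adj) // 285 0
}
```

For a size that is not a whole number of blocks, e.g. 32 bytes, the entry point also skips part of a block and the pointer is adjusted backwards to compensate.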
@@ -647,10 +677,20 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
 		ssa.OpAMD64CVTSS2SD, ssa.OpAMD64CVTSD2SS:
 		opregreg(v.Op.Asm(), gc.SSARegNum(v), gc.SSARegNum(v.Args[0]))
 	case ssa.OpAMD64DUFFZERO:
-		p := gc.Prog(obj.ADUFFZERO)
+		off := duffStart(v.AuxInt)
+		adj := duffAdj(v.AuxInt)
+		var p *obj.Prog
+		if adj != 0 {
+			p = gc.Prog(x86.AADDQ)
+			p.From.Type = obj.TYPE_CONST
+			p.From.Offset = adj
+			p.To.Type = obj.TYPE_REG
+			p.To.Reg = x86.REG_DI
+		}
+		p = gc.Prog(obj.ADUFFZERO)
 		p.To.Type = obj.TYPE_ADDR
 		p.To.Sym = gc.Linksym(gc.Pkglookup("duffzero", gc.Runtimepkg))
-		p.To.Offset = v.AuxInt
+		p.To.Offset = off
 	case ssa.OpAMD64MOVOconst:
 		if v.AuxInt != 0 {
 			v.Unimplementedf("MOVOconst can only do constant=0")
|
|||
|
|
@ -153,10 +153,13 @@ func (s *state) locatePotentialPhiFunctions(fn *Node) *sparseDefState {
|
|||
p := e.Block()
|
||||
dm.Use(t, p) // always count phi pred as "use"; no-op except for loop edges, which matter.
|
||||
x := t.stm.Find(p, ssa.AdjustAfter, helper) // Look for defs reaching or within predecessors.
|
||||
if x == nil { // nil def from a predecessor means a backedge that will be visited soon.
|
||||
continue
|
||||
}
|
||||
if defseen == nil {
|
||||
defseen = x
|
||||
}
|
||||
if defseen != x || x == nil { // TODO: too conservative at loops, does better if x == nil -> continue
|
||||
if defseen != x {
|
||||
// Need to insert a phi function here because predecessors's definitions differ.
|
||||
change = true
|
||||
// Phi insertion is at AdjustBefore, visible with find in same block at AdjustWithin or AdjustAfter.
|
||||
|
|
|
|||
|
|
@@ -270,6 +270,7 @@ var passes = [...]pass{
 	{name: "checkLower", fn: checkLower, required: true},
 	{name: "late phielim", fn: phielim},
 	{name: "late copyelim", fn: copyelim},
+	{name: "phi tighten", fn: phiTighten},
 	{name: "late deadcode", fn: deadcode},
 	{name: "critical", fn: critical, required: true}, // remove critical edges
 	{name: "likelyadjust", fn: likelyadjust},
@@ -400,9 +400,7 @@
 (Zero [SizeAndAlign(s).Size()-8] (ADDQconst [8] destptr) (MOVQstore destptr (MOVQconst [0]) mem))
 (Zero [s] destptr mem)
 	&& SizeAndAlign(s).Size() <= 1024 && SizeAndAlign(s).Size()%16 == 0 && !config.noDuffDevice ->
-	(DUFFZERO [duffStartAMD64(SizeAndAlign(s).Size())]
-		(ADDQconst [duffAdjAMD64(SizeAndAlign(s).Size())] destptr) (MOVOconst [0])
-		mem)
+	(DUFFZERO [SizeAndAlign(s).Size()] destptr (MOVOconst [0]) mem)
 
 // Large zeroing uses REP STOSQ.
 (Zero [s] destptr mem)
@@ -416,10 +416,10 @@ func init() {
 		{name: "MOVQstoreconstidx1", argLength: 3, reg: gpstoreconstidx, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ... arg1 ...
 		{name: "MOVQstoreconstidx8", argLength: 3, reg: gpstoreconstidx, asm: "MOVQ", aux: "SymValAndOff", typ: "Mem"}, // store 8 bytes of ... 8*arg1 ...
 
-		// arg0 = (duff-adjusted) pointer to start of memory to zero
+		// arg0 = pointer to start of memory to zero
 		// arg1 = value to store (will always be zero)
 		// arg2 = mem
-		// auxint = offset into duffzero code to start executing
+		// auxint = # of bytes to zero
 		// returns mem
 		{
 			name: "DUFFZERO",
@@ -259,51 +259,6 @@ func isSamePtr(p1, p2 *Value) bool {
 	return false
 }
 
-func duffStartAMD64(size int64) int64 {
-	x, _ := duffAMD64(size)
-	return x
-}
-func duffAdjAMD64(size int64) int64 {
-	_, x := duffAMD64(size)
-	return x
-}
-
-// duff returns the offset (from duffzero, in bytes) and pointer adjust (in bytes)
-// required to use the duffzero mechanism for a block of the given size.
-func duffAMD64(size int64) (int64, int64) {
-	// DUFFZERO consists of repeated blocks of 4 MOVUPSs + ADD,
-	// See runtime/mkduff.go.
-	const (
-		dzBlocks    = 16 // number of MOV/ADD blocks
-		dzBlockLen  = 4  // number of clears per block
-		dzBlockSize = 19 // size of instructions in a single block
-		dzMovSize   = 4  // size of single MOV instruction w/ offset
-		dzAddSize   = 4  // size of single ADD instruction
-		dzClearStep = 16 // number of bytes cleared by each MOV instruction
-
-		dzTailLen  = 4 // number of final STOSQ instructions
-		dzTailSize = 2 // size of single STOSQ instruction
-
-		dzClearLen = dzClearStep * dzBlockLen // bytes cleared by one block
-		dzSize     = dzBlocks * dzBlockSize
-	)
-
-	if size < 32 || size > 1024 || size%dzClearStep != 0 {
-		panic("bad duffzero size")
-	}
-	steps := size / dzClearStep
-	blocks := steps / dzBlockLen
-	steps %= dzBlockLen
-	off := dzBlockSize * (dzBlocks - blocks)
-	var adj int64
-	if steps != 0 {
-		off -= dzAddSize
-		off -= dzMovSize * steps
-		adj -= dzClearStep * (dzBlockLen - steps)
-	}
-	return off, adj
-}
-
 // moveSize returns the number of bytes an aligned MOV instruction moves
 func moveSize(align int64, c *Config) int64 {
 	switch {
@@ -17415,7 +17415,7 @@ func rewriteValueAMD64_OpZero(v *Value, config *Config) bool {
 	}
 	// match: (Zero [s] destptr mem)
 	// cond: SizeAndAlign(s).Size() <= 1024 && SizeAndAlign(s).Size()%16 == 0 && !config.noDuffDevice
-	// result: (DUFFZERO [duffStartAMD64(SizeAndAlign(s).Size())] (ADDQconst [duffAdjAMD64(SizeAndAlign(s).Size())] destptr) (MOVOconst [0]) mem)
+	// result: (DUFFZERO [SizeAndAlign(s).Size()] destptr (MOVOconst [0]) mem)
 	for {
 		s := v.AuxInt
 		destptr := v.Args[0]
@@ -17424,14 +17424,11 @@
 			break
 		}
 		v.reset(OpAMD64DUFFZERO)
-		v.AuxInt = duffStartAMD64(SizeAndAlign(s).Size())
-		v0 := b.NewValue0(v.Line, OpAMD64ADDQconst, config.fe.TypeUInt64())
-		v0.AuxInt = duffAdjAMD64(SizeAndAlign(s).Size())
-		v0.AddArg(destptr)
+		v.AuxInt = SizeAndAlign(s).Size()
+		v.AddArg(destptr)
+		v0 := b.NewValue0(v.Line, OpAMD64MOVOconst, TypeInt128)
+		v0.AuxInt = 0
 		v.AddArg(v0)
-		v1 := b.NewValue0(v.Line, OpAMD64MOVOconst, TypeInt128)
-		v1.AuxInt = 0
-		v.AddArg(v1)
 		v.AddArg(mem)
 		return true
 	}
@@ -14,8 +14,8 @@ import "fmt"
 // the nearest tree ancestor of a given node such that the
 // ancestor is also in the set.
 //
-// Given a set of blocks {B1, B2, B3} within the dominator tree, established by
-// stm.Insert()ing B1, B2, B3, etc, a query at block B
+// Given a set of blocks {B1, B2, B3} within the dominator tree, established
+// by stm.Insert()ing B1, B2, B3, etc, a query at block B
 // (performed with stm.Find(stm, B, adjust, helper))
 // will return the member of the set that is the nearest strict
 // ancestor of B within the dominator tree, or nil if none exists.
@@ -49,9 +49,9 @@ type SparseTreeMap RBTint32
 // packages, such as gc.
 type SparseTreeHelper struct {
 	Sdom   []SparseTreeNode // indexed by block.ID
-	Po     []*Block         // exported data
-	Dom    []*Block         // exported data
-	Ponums []int32          // exported data
+	Po     []*Block         // exported data; the blocks, in a post-order
+	Dom    []*Block         // exported data; the dominator of this block.
+	Ponums []int32          // exported data; Po[Ponums[b.ID]] == b; the index of b in Po
 }
 
 // NewSparseTreeHelper returns a SparseTreeHelper for use
@@ -79,11 +79,19 @@ func makeSparseTreeHelper(sdom SparseTree, dom, po []*Block, ponums []int32) *Sp
 // A sparseTreeMapEntry contains the data stored in a binary search
 // data structure indexed by (dominator tree walk) entry and exit numbers.
 // Each entry is added twice, once keyed by entry-1/entry/entry+1 and
-// once keyed by exit+1/exit/exit-1. (there are three choices of paired indices, not 9, and they properly nest)
+// once keyed by exit+1/exit/exit-1.
+//
+// Within a sparse tree, the two entries added bracket all their descendant
+// entries within the tree; the first insertion is keyed by entry number,
+// which comes before all the entry and exit numbers of descendants, and
+// the second insertion is keyed by exit number, which comes after all the
+// entry and exit numbers of the descendants.
 type sparseTreeMapEntry struct {
-	index *SparseTreeNode
-	block *Block // TODO: store this in a separate index.
-	data  interface{}
+	index        *SparseTreeNode     // references the entry and exit numbers for a block in the sparse tree
+	block        *Block              // TODO: store this in a separate index.
+	data         interface{}
+	sparseParent *sparseTreeMapEntry // references the nearest ancestor of this block in the sparse tree.
+	adjust       int32               // at what adjustment was this node entered into the sparse tree? The same block may be entered more than once, but at different adjustments.
 }
 
 // Insert creates a definition within b with data x.
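The bracketing invariant described in the comment above — a node's entry number precedes, and its exit number follows, the entry/exit numbers of all its descendants — is the standard interval encoding of tree ancestry. A minimal sketch of just that test (hypothetical `node` type, not the ssa package's):

```go
package main

import "fmt"

// node carries the entry/exit numbers assigned to a tree node by a
// depth-first walk (entry on the way down, exit on the way up).
type node struct{ entry, exit int }

// isAncestor reports whether a is a (non-strict) ancestor of b under the
// interval encoding: a's numbers bracket b's.
func isAncestor(a, b node) bool {
	return a.entry <= b.entry && b.exit <= a.exit
}

func main() {
	root := node{1, 10}    // brackets everything below
	child := node{2, 5}    // inside root's interval
	sibling := node{6, 9}  // disjoint from child's interval
	fmt.Println(isAncestor(root, child))    // true
	fmt.Println(isAncestor(child, sibling)) // false
}
```

This is why inserting a block at both its entry and exit keys lets a single ordered-map probe decide whether a neighboring entry brackets the query block or is merely a sibling.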
@@ -98,12 +106,25 @@ func (m *SparseTreeMap) Insert(b *Block, adjust int32, x interface{}, helper *Sp
 		// assert unreachable
 		return
 	}
-	entry := &sparseTreeMapEntry{index: blockIndex, data: x}
+	// sp will be the sparse parent in this sparse tree (nearest ancestor in the larger tree that is also in this sparse tree)
+	sp := m.findEntry(b, adjust, helper)
+	entry := &sparseTreeMapEntry{index: blockIndex, block: b, data: x, sparseParent: sp, adjust: adjust}
+
 	right := blockIndex.exit - adjust
 	_ = rbtree.Insert(right, entry)
+
 	left := blockIndex.entry + adjust
 	_ = rbtree.Insert(left, entry)
+
+	// This newly inserted block may now be the sparse parent of some existing nodes (the new sparse children of this block)
+	// Iterate over nodes bracketed by this new node to correct their parent, but not over the proper sparse descendants of those nodes.
+	_, d := rbtree.Lub(left) // Lub (not EQ) of left is either right or a sparse child
+	for tme := d.(*sparseTreeMapEntry); tme != entry; tme = d.(*sparseTreeMapEntry) {
+		tme.sparseParent = entry
+		// all descendants of tme are unchanged;
+		// next sparse sibling (or right-bracketing sparse parent == entry) is first node after tme.index.exit - tme.adjust
+		_, d = rbtree.Lub(tme.index.exit - tme.adjust)
+	}
 }
 
 // Find returns the definition visible from block b, or nil if none can be found.
@@ -118,45 +139,41 @@ func (m *SparseTreeMap) Insert(b *Block, adjust int32, x interface{}, helper *Sp
 //
 // Another way to think of this is that Find searches for inputs, Insert defines outputs.
 func (m *SparseTreeMap) Find(b *Block, adjust int32, helper *SparseTreeHelper) interface{} {
+	v := m.findEntry(b, adjust, helper)
+	if v == nil {
+		return nil
+	}
+	return v.data
+}
+
+func (m *SparseTreeMap) findEntry(b *Block, adjust int32, helper *SparseTreeHelper) *sparseTreeMapEntry {
 	rbtree := (*RBTint32)(m)
 	if rbtree == nil {
 		return nil
 	}
 	blockIndex := &helper.Sdom[b.ID]
+	// The Glb (not EQ) of this probe is either the entry-indexed end of a sparse parent
+	// or the exit-indexed end of a sparse sibling
 	_, v := rbtree.Glb(blockIndex.entry + adjust)
-	for v != nil {
-		otherEntry := v.(*sparseTreeMapEntry)
-		otherIndex := otherEntry.index
-		// Two cases -- either otherIndex brackets blockIndex,
-		// or it doesn't.
-		//
-		// Note that if otherIndex and blockIndex are
-		// the same block, then the glb test only passed
-		// because the definition is "before",
-		// i.e., k == blockIndex.entry-1
-		// allowing equality is okay on the blocks check.
-		if otherIndex.exit >= blockIndex.exit {
-			// bracketed.
-			return otherEntry.data
-		}
-		// In the not-bracketed case, we could memoize the results of
-		// walking up the tree, but for now we won't.
-		// Memoize plan is to take the gap (inclusive)
-		// from otherIndex.exit+1 to blockIndex.entry-1
-		// and insert it into this or a second tree.
-		// Said tree would then need adjusting whenever
-		// an insertion occurred.
-
-		// Expectation is that per-variable tree is sparse,
-		// therefore probe siblings instead of climbing up.
-		// Note that each sibling encountered in this walk
-		// to find a defining ancestor shares that ancestor
-		// because the walk skips over the interior -- each
-		// Glb will be an exit, and the iteration is to the
-		// Glb of the entry.
-		_, v = rbtree.Glb(otherIndex.entry - 1)
+
+	if v == nil {
+		return nil
 	}
-	return nil // nothing found
+
+	otherEntry := v.(*sparseTreeMapEntry)
+	if otherEntry.index.exit >= blockIndex.exit { // otherEntry exit after blockIndex exit; therefore, brackets
+		return otherEntry
+	}
+	// otherEntry is a sparse Sibling, and shares the same sparse parent (nearest ancestor within larger tree)
+	sp := otherEntry.sparseParent
+	if sp != nil {
+		if sp.index.exit < blockIndex.exit { // no ancestor found
+			return nil
+		}
+		return sp
+	}
+	return nil
 }
 
 func (m *SparseTreeMap) String() string {
@ -165,5 +182,8 @@ func (m *SparseTreeMap) String() string {
|
|||
}
|
||||
|
||||
func (e *sparseTreeMapEntry) String() string {
|
||||
return fmt.Sprintf("index=%v, data=%v", e.index, e.data)
|
||||
if e == nil {
|
||||
return "nil"
|
||||
}
|
||||
return fmt.Sprintf("(index=%v, block=%v, data=%v)->%v", e.index, e.block, e.data, e.sparseParent)
|
||||
}
|
||||
|
|
|
|||
|
|
@ -92,3 +92,26 @@ func tighten(f *Func) {
		}
	}
}

// phiTighten moves constants closer to phi users.
// This pass avoids having lots of constants live for lots of the program.
// See issue 16407.
func phiTighten(f *Func) {
	for _, b := range f.Blocks {
		for _, v := range b.Values {
			if v.Op != OpPhi {
				continue
			}
			for i, a := range v.Args {
				if !a.rematerializeable() {
					continue // not a constant we can move around
				}
				if a.Block == b.Preds[i].b {
					continue // already in the right place
				}
				// Make a copy of a, put in predecessor block.
				v.SetArg(i, a.copyInto(b.Preds[i].b))
			}
		}
	}
}
@ -61,6 +61,7 @@ var tests = []test{
	`var ExportedVariable = 1`,                              // Simple variable.
	`var VarOne = 1`,                                        // First entry in variable block.
	`func ExportedFunc\(a int\) bool`,                       // Function.
	`func ReturnUnexported\(\) unexportedType`,              // Function with unexported return type.
	`type ExportedType struct { ... }`,                      // Exported type.
	`const ExportedTypedConstant ExportedType = iota`,       // Typed constant.
	`const ExportedTypedConstant_unexported unexportedType`, // Typed constant, exported for unexported type.

@ -89,9 +90,10 @@ var tests = []test{
	"full package with u",
	[]string{`-u`, p},
	[]string{
		`const ExportedConstant = 1`,      // Simple constant.
		`const internalConstant = 2`,      // Internal constants.
		`func internalFunc\(a int\) bool`, // Internal functions.
		`const ExportedConstant = 1`,               // Simple constant.
		`const internalConstant = 2`,               // Internal constants.
		`func internalFunc\(a int\) bool`,          // Internal functions.
		`func ReturnUnexported\(\) unexportedType`, // Function with unexported return type.
	},
	[]string{
		`Comment about exported constant`, // No comment for simple constant.

@ -221,6 +223,7 @@ var tests = []test{
	`func \(ExportedType\) ExportedMethod\(a int\) bool`,
	`const ExportedTypedConstant ExportedType = iota`, // Must include associated constant.
	`func ExportedTypeConstructor\(\) \*ExportedType`, // Must include constructor.
	`io.Reader.*Comment on line with embedded Reader.`,
	},
	[]string{
		`unexportedField`, // No unexported field.

@ -228,6 +231,7 @@ var tests = []test{
	`Comment about exported method.`, // No comment about exported method.
	`unexportedMethod`,               // No unexported method.
	`unexportedTypedConstant`,        // No unexported constant.
	`error`,                          // No embedded error.
	},
},
// Type -u with unexported fields.

@ -243,6 +247,8 @@ var tests = []test{
	`\*ExportedEmbeddedType.*Comment on line with exported embedded \*field.`,
	`unexportedType.*Comment on line with unexported embedded field.`,
	`\*unexportedType.*Comment on line with unexported embedded \*field.`,
	`io.Reader.*Comment on line with embedded Reader.`,
	`error.*Comment on line with embedded error.`,
	`func \(ExportedType\) unexportedMethod\(a int\) bool`,
	`unexportedTypedConstant`,
	},

@ -274,6 +280,8 @@ var tests = []test{
	`type ExportedInterface interface`, // Interface definition.
	`Comment before exported method.*\n.*ExportedMethod\(\)` +
		`.*Comment on line with exported method`,
	`io.Reader.*Comment on line with embedded Reader.`,
	`error.*Comment on line with embedded error.`,
	`Has unexported methods`,
	},
	[]string{

@ -293,6 +301,8 @@ var tests = []test{
	`Comment before exported method.*\n.*ExportedMethod\(\)` +
		`.*Comment on line with exported method`,
	`unexportedMethod\(\).*Comment on line with unexported method.`,
	`io.Reader.*Comment on line with embedded Reader.`,
	`error.*Comment on line with embedded error.`,
	},
	[]string{
		`Has unexported methods`,
@ -317,7 +317,9 @@ func (pkg *Package) funcSummary(funcs []*doc.Func, showConstructors bool) {
		isConstructor = make(map[*doc.Func]bool)
		for _, typ := range pkg.doc.Types {
			for _, constructor := range typ.Funcs {
				isConstructor[constructor] = true
				if isExported(typ.Name) {
					isConstructor[constructor] = true
				}
			}
		}
	}

@ -494,14 +496,19 @@ func trimUnexportedElems(spec *ast.TypeSpec) {
	}
	switch typ := spec.Type.(type) {
	case *ast.StructType:
		typ.Fields = trimUnexportedFields(typ.Fields, "fields")
		typ.Fields = trimUnexportedFields(typ.Fields, false)
	case *ast.InterfaceType:
		typ.Methods = trimUnexportedFields(typ.Methods, "methods")
		typ.Methods = trimUnexportedFields(typ.Methods, true)
	}
}

// trimUnexportedFields returns the field list trimmed of unexported fields.
func trimUnexportedFields(fields *ast.FieldList, what string) *ast.FieldList {
func trimUnexportedFields(fields *ast.FieldList, isInterface bool) *ast.FieldList {
	what := "methods"
	if !isInterface {
		what = "fields"
	}

	trimmed := false
	list := make([]*ast.Field, 0, len(fields.List))
	for _, field := range fields.List {

@ -511,12 +518,23 @@ func trimUnexportedFields(fields *ast.FieldList, what string) *ast.FieldList {
		// Nothing else is allowed.
		switch ident := field.Type.(type) {
		case *ast.Ident:
			if isInterface && ident.Name == "error" && ident.Obj == nil {
				// For documentation purposes, we consider the builtin error
				// type special when embedded in an interface, such that it
				// always gets shown publicly.
				list = append(list, field)
				continue
			}
			names = []*ast.Ident{ident}
		case *ast.StarExpr:
			// Must have the form *identifier.
			if ident, ok := ident.X.(*ast.Ident); ok {
			// This is only valid on embedded types in structs.
			if ident, ok := ident.X.(*ast.Ident); ok && !isInterface {
				names = []*ast.Ident{ident}
			}
		case *ast.SelectorExpr:
			// An embedded type may refer to a type in another package.
			names = []*ast.Ident{ident.Sel}
		}
		if names == nil {
			// Can only happen if AST is incorrect. Safe to continue with a nil list.
@ -66,6 +66,8 @@ type ExportedType struct {
	*ExportedEmbeddedType // Comment on line with exported embedded *field.
	unexportedType        // Comment on line with unexported embedded field.
	*unexportedType       // Comment on line with unexported embedded *field.
	io.Reader             // Comment on line with embedded Reader.
	error                 // Comment on line with embedded error.
}

// Comment about exported method.

@ -96,6 +98,8 @@ type ExportedInterface interface {
	// Comment before exported method.
	ExportedMethod()   // Comment on line with exported method.
	unexportedMethod() // Comment on line with unexported method.
	io.Reader          // Comment on line with embedded Reader.
	error              // Comment on line with embedded error.
}

// Comment about unexported type.

@ -119,3 +123,6 @@ const unexportedTypedConstant unexportedType = 1 // In a separate section to tes
// For case matching.
const CaseMatch = 1
const Casematch = 2

func ReturnUnexported() unexportedType { return 0 }
func ReturnExported() ExportedType     { return ExportedType{} }
@ -15,7 +15,17 @@ const (
	BestSpeed          = 1
	BestCompression    = 9
	DefaultCompression = -1
	HuffmanOnly        = -2 // Disables match search and only does Huffman entropy reduction.

	// HuffmanOnly disables Lempel-Ziv match searching and only performs Huffman
	// entropy encoding. This mode is useful in compressing data that has
	// already been compressed with an LZ style algorithm (e.g. Snappy or LZ4)
	// that lacks an entropy encoder. Compression gains are achieved when
	// certain bytes in the input stream occur more frequently than others.
	//
	// Note that HuffmanOnly produces a compressed output that is
	// RFC 1951 compliant. That is, any valid DEFLATE decompressor will
	// continue to be able to decompress this output.
	HuffmanOnly = -2
)

const (

@ -644,7 +654,6 @@ func (d *compressor) close() error {
// a very fast compression for all types of input, but sacrificing considerable
// compression efficiency.
//
//
// If level is in the range [-2, 9] then the error returned will be nil.
// Otherwise the error returned will be non-nil.
func NewWriter(w io.Writer, level int) (*Writer, error) {
@ -255,6 +255,12 @@ func TestDeadline(t *testing.T) {
	o = otherContext{c}
	c, _ = WithDeadline(o, time.Now().Add(4*time.Second))
	testDeadline(c, "WithDeadline+otherContext+WithDeadline", 2*time.Second, t)

	c, _ = WithDeadline(Background(), time.Now().Add(-time.Millisecond))
	testDeadline(c, "WithDeadline+inthepast", time.Second, t)

	c, _ = WithDeadline(Background(), time.Now())
	testDeadline(c, "WithDeadline+now", time.Second, t)
}

func TestTimeout(t *testing.T) {
@ -10,9 +10,65 @@ package x509
#cgo CFLAGS: -mmacosx-version-min=10.6 -D__MAC_OS_X_VERSION_MAX_ALLOWED=1060
#cgo LDFLAGS: -framework CoreFoundation -framework Security

#include <errno.h>
#include <sys/sysctl.h>

#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>

// FetchPEMRoots_MountainLion is the version of FetchPEMRoots from Go 1.6
// which still works on OS X 10.8 (Mountain Lion).
// It lacks support for admin & user cert domains.
// See golang.org/issue/16473
int FetchPEMRoots_MountainLion(CFDataRef *pemRoots) {
	if (pemRoots == NULL) {
		return -1;
	}
	CFArrayRef certs = NULL;
	OSStatus err = SecTrustCopyAnchorCertificates(&certs);
	if (err != noErr) {
		return -1;
	}
	CFMutableDataRef combinedData = CFDataCreateMutable(kCFAllocatorDefault, 0);
	int i, ncerts = CFArrayGetCount(certs);
	for (i = 0; i < ncerts; i++) {
		CFDataRef data = NULL;
		SecCertificateRef cert = (SecCertificateRef)CFArrayGetValueAtIndex(certs, i);
		if (cert == NULL) {
			continue;
		}
		// Note: SecKeychainItemExport is deprecated as of 10.7 in favor of SecItemExport.
		// Once we support weak imports via cgo we should prefer that, and fall back to this
		// for older systems.
		err = SecKeychainItemExport(cert, kSecFormatX509Cert, kSecItemPemArmour, NULL, &data);
		if (err != noErr) {
			continue;
		}
		if (data != NULL) {
			CFDataAppendBytes(combinedData, CFDataGetBytePtr(data), CFDataGetLength(data));
			CFRelease(data);
		}
	}
	CFRelease(certs);
	*pemRoots = combinedData;
	return 0;
}

// useOldCode reports whether the running machine is OS X 10.8 Mountain Lion
// or older. We only support Mountain Lion and higher, but we'll at least try our
// best on older machines and continue to use the old code path.
//
// See golang.org/issue/16473
int useOldCode() {
	char str[256];
	size_t size = sizeof(str);
	memset(str, 0, size);
	sysctlbyname("kern.osrelease", str, &size, NULL, 0);
	// OS X 10.8 is osrelease "12.*", 10.7 is 11.*, 10.6 is 10.*.
	// We never supported things before that.
	return memcmp(str, "12.", 3) == 0 || memcmp(str, "11.", 3) == 0 || memcmp(str, "10.", 3) == 0;
}

// FetchPEMRoots fetches the system's list of trusted X.509 root certificates.
//
// On success it returns 0 and fills pemRoots with a CFDataRef that contains the extracted root

@ -21,6 +77,10 @@ package x509
// Note: The CFDataRef returned in pemRoots must be released (using CFRelease) after
// we've consumed its content.
int FetchPEMRoots(CFDataRef *pemRoots) {
	if (useOldCode()) {
		return FetchPEMRoots_MountainLion(pemRoots);
	}

	// Get certificates from all domains, not just System, this lets
	// the user add CAs to their "login" keychain, and Admins to add
	// to the "System" keychain
@ -325,9 +325,9 @@ func (r *readRune) readByte() (b byte, err error) {
		r.pending--
		return
	}
	_, err = r.reader.Read(r.pendBuf[:1])
	if err != nil {
		return
	n, err := io.ReadFull(r.reader, r.pendBuf[:1])
	if n != 1 {
		return 0, err
	}
	return r.pendBuf[0], err
}
@ -15,6 +15,7 @@ import (
	"regexp"
	"strings"
	"testing"
	"testing/iotest"
	"unicode/utf8"
)

@ -118,20 +119,6 @@ func (s *IntString) Scan(state ScanState, verb rune) error {

var intStringVal IntString

// myStringReader implements Read but not ReadRune, allowing us to test our readRune wrapper
// type that creates something that can read runes given only Read().
type myStringReader struct {
	r *strings.Reader
}

func (s *myStringReader) Read(p []byte) (n int, err error) {
	return s.r.Read(p)
}

func newReader(s string) *myStringReader {
	return &myStringReader{strings.NewReader(s)}
}

var scanTests = []ScanTest{
	// Basic types
	{"T\n", &boolVal, true}, // boolean test vals toggle to be sure they are written

@ -363,25 +350,38 @@ var multiTests = []ScanfMultiTest{
	{"%v%v", "FALSE23", args(&truth, &i), args(false, 23), ""},
}

func testScan(name string, t *testing.T, scan func(r io.Reader, a ...interface{}) (int, error)) {
var readers = []struct {
	name string
	f    func(string) io.Reader
}{
	{"StringReader", func(s string) io.Reader {
		return strings.NewReader(s)
	}},
	{"ReaderOnly", func(s string) io.Reader {
		return struct{ io.Reader }{strings.NewReader(s)}
	}},
	{"OneByteReader", func(s string) io.Reader {
		return iotest.OneByteReader(strings.NewReader(s))
	}},
	{"DataErrReader", func(s string) io.Reader {
		return iotest.DataErrReader(strings.NewReader(s))
	}},
}

func testScan(t *testing.T, f func(string) io.Reader, scan func(r io.Reader, a ...interface{}) (int, error)) {
	for _, test := range scanTests {
		var r io.Reader
		if name == "StringReader" {
			r = strings.NewReader(test.text)
		} else {
			r = newReader(test.text)
		}
		r := f(test.text)
		n, err := scan(r, test.in)
		if err != nil {
			m := ""
			if n > 0 {
				m = Sprintf(" (%d fields ok)", n)
			}
			t.Errorf("%s got error scanning %q: %s%s", name, test.text, err, m)
			t.Errorf("got error scanning %q: %s%s", test.text, err, m)
			continue
		}
		if n != 1 {
			t.Errorf("%s count error on entry %q: got %d", name, test.text, n)
			t.Errorf("count error on entry %q: got %d", test.text, n)
			continue
		}
		// The incoming value may be a pointer

@ -391,25 +391,25 @@ func testScan(name string, t *testing.T, scan func(r io.Reader, a ...interface{}
		}
		val := v.Interface()
		if !reflect.DeepEqual(val, test.out) {
			t.Errorf("%s scanning %q: expected %#v got %#v, type %T", name, test.text, test.out, val, val)
			t.Errorf("scanning %q: expected %#v got %#v, type %T", test.text, test.out, val, val)
		}
	}
}

func TestScan(t *testing.T) {
	testScan("StringReader", t, Fscan)
}

func TestMyReaderScan(t *testing.T) {
	testScan("myStringReader", t, Fscan)
	for _, r := range readers {
		t.Run(r.name, func(t *testing.T) {
			testScan(t, r.f, Fscan)
		})
	}
}

func TestScanln(t *testing.T) {
	testScan("StringReader", t, Fscanln)
}

func TestMyReaderScanln(t *testing.T) {
	testScan("myStringReader", t, Fscanln)
	for _, r := range readers {
		t.Run(r.name, func(t *testing.T) {
			testScan(t, r.f, Fscanln)
		})
	}
}

func TestScanf(t *testing.T) {
@ -500,15 +500,10 @@ func TestInf(t *testing.T) {
	}
}

func testScanfMulti(name string, t *testing.T) {
func testScanfMulti(t *testing.T, f func(string) io.Reader) {
	sliceType := reflect.TypeOf(make([]interface{}, 1))
	for _, test := range multiTests {
		var r io.Reader
		if name == "StringReader" {
			r = strings.NewReader(test.text)
		} else {
			r = newReader(test.text)
		}
		r := f(test.text)
		n, err := Fscanf(r, test.format, test.in...)
		if err != nil {
			if test.err == "" {

@ -539,11 +534,11 @@ func testScanfMulti(name string, t *testing.T) {
}

func TestScanfMulti(t *testing.T) {
	testScanfMulti("StringReader", t)
}

func TestMyReaderScanfMulti(t *testing.T) {
	testScanfMulti("myStringReader", t)
	for _, r := range readers {
		t.Run(r.name, func(t *testing.T) {
			testScanfMulti(t, r.f)
		})
	}
}

func TestScanMultiple(t *testing.T) {

@ -818,20 +813,10 @@ func TestMultiLine(t *testing.T) {
	}
}

// simpleReader is a strings.Reader that implements only Read, not ReadRune.
// Good for testing readahead.
type simpleReader struct {
	sr *strings.Reader
}

func (s *simpleReader) Read(b []byte) (n int, err error) {
	return s.sr.Read(b)
}

// TestLineByLineFscanf tests that Fscanf does not read past newline. Issue
// 3481.
func TestLineByLineFscanf(t *testing.T) {
	r := &simpleReader{strings.NewReader("1\n2\n")}
	r := struct{ io.Reader }{strings.NewReader("1\n2\n")}
	var i, j int
	n, err := Fscanf(r, "%v\n", &i)
	if n != 1 || err != nil {

@ -1000,7 +985,7 @@ func BenchmarkScanRecursiveIntReaderWrapper(b *testing.B) {
	ints := makeInts(intCount)
	var r RecursiveInt
	for i := b.N - 1; i >= 0; i-- {
		buf := newReader(string(ints))
		buf := struct{ io.Reader }{strings.NewReader(string(ints))}
		b.StartTimer()
		Fscan(buf, &r)
		b.StopTimer()
@ -303,11 +303,11 @@ func TestImportVendor(t *testing.T) {
	testenv.MustHaveGoBuild(t) // really must just have source
	ctxt := Default
	ctxt.GOPATH = ""
	p, err := ctxt.Import("golang.org/x/net/http2/hpack", filepath.Join(ctxt.GOROOT, "src/net/http"), 0)
	p, err := ctxt.Import("golang_org/x/net/http2/hpack", filepath.Join(ctxt.GOROOT, "src/net/http"), 0)
	if err != nil {
		t.Fatalf("cannot find vendored golang.org/x/net/http2/hpack from net/http directory: %v", err)
		t.Fatalf("cannot find vendored golang_org/x/net/http2/hpack from net/http directory: %v", err)
	}
	want := "vendor/golang.org/x/net/http2/hpack"
	want := "vendor/golang_org/x/net/http2/hpack"
	if p.ImportPath != want {
		t.Fatalf("Import succeeded but found %q, want %q", p.ImportPath, want)
	}

@ -333,7 +333,7 @@ func TestImportVendorParentFailure(t *testing.T) {
	ctxt := Default
	ctxt.GOPATH = ""
	// This import should fail because the vendor/golang.org/x/net/http2 directory has no source code.
	p, err := ctxt.Import("golang.org/x/net/http2", filepath.Join(ctxt.GOROOT, "src/net/http"), 0)
	p, err := ctxt.Import("golang_org/x/net/http2", filepath.Join(ctxt.GOROOT, "src/net/http"), 0)
	if err == nil {
		t.Fatalf("found empty parent in %s", p.Dir)
	}
@ -297,7 +297,7 @@ var pkgDeps = map[string][]string{
	"context", "math/rand", "os", "sort", "syscall", "time",
	"internal/nettrace",
	"internal/syscall/windows", "internal/singleflight", "internal/race",
	"golang.org/x/net/route",
	"golang_org/x/net/route",
},

// NET enables use of basic network-related packages.

@ -378,8 +378,8 @@ var pkgDeps = map[string][]string{
	"context", "compress/gzip", "container/list", "crypto/tls",
	"mime/multipart", "runtime/debug",
	"net/http/internal",
	"golang.org/x/net/http2/hpack",
	"golang.org/x/net/lex/httplex",
	"golang_org/x/net/http2/hpack",
	"golang_org/x/net/lex/httplex",
	"internal/nettrace",
	"net/http/httptrace",
},

@ -443,7 +443,7 @@ func listStdPkgs(goroot string) ([]string, error) {
	}

	name := filepath.ToSlash(path[len(src):])
	if name == "builtin" || name == "cmd" || strings.Contains(name, ".") {
	if name == "builtin" || name == "cmd" || strings.Contains(name, "golang_org") {
		return filepath.SkipDir
	}
@ -695,6 +695,11 @@ func TestDialerLocalAddr(t *testing.T) {
}

func TestDialerDualStack(t *testing.T) {
	// This test is known to be flaky. Don't frighten regular
	// users about it; only fail on the build dashboard.
	if testenv.Builder() == "" {
		testenv.SkipFlaky(t, 13324)
	}
	if !supportsIPv4 || !supportsIPv6 {
		t.Skip("both IPv4 and IPv6 are required")
	}
@ -0,0 +1,108 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin dragonfly freebsd linux netbsd openbsd solaris

package net

import (
	"context"
	"syscall"
	"testing"
	"time"
)

// Issue 16523
func TestDialContextCancelRace(t *testing.T) {
	oldConnectFunc := connectFunc
	oldGetsockoptIntFunc := getsockoptIntFunc
	oldTestHookCanceledDial := testHookCanceledDial
	defer func() {
		connectFunc = oldConnectFunc
		getsockoptIntFunc = oldGetsockoptIntFunc
		testHookCanceledDial = oldTestHookCanceledDial
	}()

	ln, err := newLocalListener("tcp")
	if err != nil {
		t.Fatal(err)
	}
	listenerDone := make(chan struct{})
	go func() {
		defer close(listenerDone)
		c, err := ln.Accept()
		if err == nil {
			c.Close()
		}
	}()
	defer func() { <-listenerDone }()
	defer ln.Close()

	sawCancel := make(chan bool, 1)
	testHookCanceledDial = func() {
		sawCancel <- true
	}

	ctx, cancelCtx := context.WithCancel(context.Background())

	connectFunc = func(fd int, addr syscall.Sockaddr) error {
		err := oldConnectFunc(fd, addr)
		t.Logf("connect(%d, addr) = %v", fd, err)
		if err == nil {
			// On some operating systems, localhost
			// connects _sometimes_ succeed immediately.
			// Prevent that, so we exercise the code path
			// we're interested in testing. This seems
			// harmless. It makes FreeBSD 10.10 work when
			// run with many iterations. It failed about
			// half the time previously.
			return syscall.EINPROGRESS
		}
		return err
	}

	getsockoptIntFunc = func(fd, level, opt int) (val int, err error) {
		val, err = oldGetsockoptIntFunc(fd, level, opt)
		t.Logf("getsockoptIntFunc(%d, %d, %d) = (%v, %v)", fd, level, opt, val, err)
		if level == syscall.SOL_SOCKET && opt == syscall.SO_ERROR && err == nil && val == 0 {
			t.Logf("canceling context")

			// Cancel the context at just the moment which
			// caused the race in issue 16523.
			cancelCtx()

			// And wait for the "interrupter" goroutine to
			// cancel the dial by messing with its write
			// timeout before returning.
			select {
			case <-sawCancel:
				t.Logf("saw cancel")
			case <-time.After(5 * time.Second):
				t.Errorf("didn't see cancel after 5 seconds")
			}
		}
		return
	}

	var d Dialer
	c, err := d.DialContext(ctx, "tcp", ln.Addr().String())
	if err == nil {
		c.Close()
		t.Fatal("unexpected successful dial; want context canceled error")
	}

	select {
	case <-ctx.Done():
	case <-time.After(5 * time.Second):
		t.Fatal("expected context to be canceled")
	}

	oe, ok := err.(*OpError)
	if !ok || oe.Op != "dial" {
		t.Fatalf("Dial error = %#v; want dial *OpError", err)
	}
	if oe.Err != ctx.Err() {
		t.Errorf("DialContext = (%v, %v); want OpError with error %v", c, err, ctx.Err())
	}
}
@ -64,7 +64,7 @@ func (fd *netFD) name() string {
	return fd.net + ":" + ls + "->" + rs
}

func (fd *netFD) connect(ctx context.Context, la, ra syscall.Sockaddr) error {
func (fd *netFD) connect(ctx context.Context, la, ra syscall.Sockaddr) (ret error) {
	// Do not need to call fd.writeLock here,
	// because fd is not yet accessible to user,
	// so no concurrent operations are possible.

@ -101,21 +101,44 @@ func (fd *netFD) connect(ctx context.Context, la, ra syscall.Sockaddr) error {
		defer fd.setWriteDeadline(noDeadline)
	}

	// Wait for the goroutine converting context.Done into a write timeout
	// to exist, otherwise our caller might cancel the context and
	// cause fd.setWriteDeadline(aLongTimeAgo) to cancel a successful dial.
	done := make(chan bool) // must be unbuffered
	defer func() { done <- true }()
	go func() {
		select {
		case <-ctx.Done():
			// Force the runtime's poller to immediately give
			// up waiting for writability.
			fd.setWriteDeadline(aLongTimeAgo)
			<-done
		case <-done:
		}
	}()
	// Start the "interrupter" goroutine, if this context might be canceled.
	// (The background context cannot)
	//
	// The interrupter goroutine waits for the context to be done and
	// interrupts the dial (by altering the fd's write deadline, which
	// wakes up waitWrite).
	if ctx != context.Background() {
		// Wait for the interrupter goroutine to exit before returning
		// from connect.
		done := make(chan struct{})
		interruptRes := make(chan error)
		defer func() {
			close(done)
			if ctxErr := <-interruptRes; ctxErr != nil && ret == nil {
				// The interrupter goroutine called setWriteDeadline,
				// but the connect code below had returned from
				// waitWrite already and did a successful connect (ret
				// == nil). Because we've now poisoned the connection
				// by making it unwritable, don't return a successful
				// dial. This was issue 16523.
				ret = ctxErr
				fd.Close() // prevent a leak
			}
		}()
		go func() {
			select {
			case <-ctx.Done():
				// Force the runtime's poller to immediately give up
				// waiting for writability, unblocking waitWrite
				// below.
				fd.setWriteDeadline(aLongTimeAgo)
				testHookCanceledDial()
				interruptRes <- ctx.Err()
			case <-done:
				interruptRes <- nil
			}
		}()
	}

	for {
		// Performing multiple connect system calls on a
@ -9,7 +9,8 @@ package net
import "syscall"

var (
	testHookDialChannel = func() {} // see golang.org/issue/5349
	testHookDialChannel  = func() {} // for golang.org/issue/5349
	testHookCanceledDial = func() {} // for golang.org/issue/16523

	// Placeholders for socket system calls.
	socketFunc func(int, int, int) (int, error) = syscall.Socket
@ -153,6 +153,10 @@ func (h *Handler) ServeHTTP(rw http.ResponseWriter, req *http.Request) {

	for k, v := range req.Header {
		k = strings.Map(upperCaseAndUnderscore, k)
		if k == "PROXY" {
			// See Issue 16405
			continue
		}
		joinStr := ", "
		if k == "COOKIE" {
			joinStr = "; "
@ -35,15 +35,18 @@ func newRequest(httpreq string) *http.Request {
	return req
}

func runCgiTest(t *testing.T, h *Handler, httpreq string, expectedMap map[string]string) *httptest.ResponseRecorder {
func runCgiTest(t *testing.T, h *Handler,
	httpreq string,
	expectedMap map[string]string, checks ...func(reqInfo map[string]string)) *httptest.ResponseRecorder {
	rw := httptest.NewRecorder()
	req := newRequest(httpreq)
	h.ServeHTTP(rw, req)
	runResponseChecks(t, rw, expectedMap)
	runResponseChecks(t, rw, expectedMap, checks...)
	return rw
}

func runResponseChecks(t *testing.T, rw *httptest.ResponseRecorder, expectedMap map[string]string) {
func runResponseChecks(t *testing.T, rw *httptest.ResponseRecorder,
	expectedMap map[string]string, checks ...func(reqInfo map[string]string)) {
	// Make a map to hold the test map that the CGI returns.
	m := make(map[string]string)
	m["_body"] = rw.Body.String()

@ -81,6 +84,9 @@ readlines:
			t.Errorf("for key %q got %q; expected %q", key, got, expected)
		}
	}
	for _, check := range checks {
		check(m)
	}
}

var cgiTested, cgiWorks bool

@ -236,6 +242,31 @@ func TestDupHeaders(t *testing.T) {
		expectedMap)
}

// Issue 16405: CGI+http.Transport differing uses of HTTP_PROXY.
// Verify we don't set the HTTP_PROXY environment variable.
// Hope nobody was depending on it. It's not a known header, though.
func TestDropProxyHeader(t *testing.T) {
	check(t)
	h := &Handler{
		Path: "testdata/test.cgi",
	}
	expectedMap := map[string]string{
		"env-REQUEST_URI":     "/myscript/bar?a=b",
		"env-SCRIPT_FILENAME": "testdata/test.cgi",
		"env-HTTP_X_FOO":      "a",
	}
	runCgiTest(t, h, "GET /myscript/bar?a=b HTTP/1.0\n"+
		"X-Foo: a\n"+
		"Proxy: should_be_stripped\n"+
		"Host: example.com\n\n",
		expectedMap,
		func(reqInfo map[string]string) {
			if v, ok := reqInfo["env-HTTP_PROXY"]; ok {
				t.Errorf("HTTP_PROXY = %q; should be absent", v)
			}
		})
}

func TestPathInfoNoRoot(t *testing.T) {
	check(t)
	h := &Handler{
@@ -41,8 +41,8 @@ import (
 	"sync"
 	"time"
 
-	"golang.org/x/net/http2/hpack"
-	"golang.org/x/net/lex/httplex"
+	"golang_org/x/net/http2/hpack"
+	"golang_org/x/net/lex/httplex"
 )
 
 // ClientConnPool manages a pool of HTTP/2 client connections.

@@ -85,7 +85,16 @@ const (
 	http2noDialOnMiss = false
 )
 
-func (p *http2clientConnPool) getClientConn(_ *Request, addr string, dialOnMiss bool) (*http2ClientConn, error) {
+func (p *http2clientConnPool) getClientConn(req *Request, addr string, dialOnMiss bool) (*http2ClientConn, error) {
+	if http2isConnectionCloseRequest(req) && dialOnMiss {
+		// It gets its own connection.
+		const singleUse = true
+		cc, err := p.t.dialClientConn(addr, singleUse)
+		if err != nil {
+			return nil, err
+		}
+		return cc, nil
+	}
 	p.mu.Lock()
 	for _, cc := range p.conns[addr] {
 		if cc.CanTakeNewRequest() {
@@ -128,7 +137,8 @@ func (p *http2clientConnPool) getStartDialLocked(addr string) *http2dialCall {
 
 // run in its own goroutine.
 func (c *http2dialCall) dial(addr string) {
-	c.res, c.err = c.p.t.dialClientConn(addr)
+	const singleUse = false // shared conn
+	c.res, c.err = c.p.t.dialClientConn(addr, singleUse)
 	close(c.done)
 
 	c.p.mu.Lock()
@@ -1105,6 +1115,7 @@ func http2parseDataFrame(fh http2FrameHeader, payload []byte) (http2Frame, error
 var (
 	http2errStreamID    = errors.New("invalid stream ID")
 	http2errDepStreamID = errors.New("invalid dependent stream ID")
+	http2errPadLength   = errors.New("pad length too large")
 )
 
 func http2validStreamIDOrZero(streamID uint32) bool {
@@ -1118,18 +1129,40 @@ func http2validStreamID(streamID uint32) bool {
 // WriteData writes a DATA frame.
 //
 // It will perform exactly one Write to the underlying Writer.
-// It is the caller's responsibility to not call other Write methods concurrently.
+// It is the caller's responsibility not to violate the maximum frame size
+// and to not call other Write methods concurrently.
 func (f *http2Framer) WriteData(streamID uint32, endStream bool, data []byte) error {
+	return f.WriteDataPadded(streamID, endStream, data, nil)
+}
+
+// WriteDataPadded writes a DATA frame with optional padding.
+//
+// If pad is nil, the padding bit is not sent.
+// The length of pad must not exceed 255 bytes.
+//
+// It will perform exactly one Write to the underlying Writer.
+// It is the caller's responsibility not to violate the maximum frame size
+// and to not call other Write methods concurrently.
+func (f *http2Framer) WriteDataPadded(streamID uint32, endStream bool, data, pad []byte) error {
 	if !http2validStreamID(streamID) && !f.AllowIllegalWrites {
 		return http2errStreamID
 	}
+	if len(pad) > 255 {
+		return http2errPadLength
+	}
 	var flags http2Flags
 	if endStream {
 		flags |= http2FlagDataEndStream
 	}
+	if pad != nil {
+		flags |= http2FlagDataPadded
+	}
 	f.startWrite(http2FrameData, flags, streamID)
+	if pad != nil {
+		f.wbuf = append(f.wbuf, byte(len(pad)))
+	}
 	f.wbuf = append(f.wbuf, data...)
+	f.wbuf = append(f.wbuf, pad...)
 	return f.endWrite()
 }
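The payload layout WriteDataPadded builds is the RFC 7540 padded DATA shape: a one-byte pad length (only when padding is present), then the data, then the pad bytes. A standalone sketch of just that buffer construction, independent of the Framer:

```go
package main

import "fmt"

// dataPayload mirrors the wbuf appends above: when pad is non-nil,
// a 1-byte pad length, then the data, then the pad bytes.
func dataPayload(data, pad []byte) []byte {
	var buf []byte
	if pad != nil {
		buf = append(buf, byte(len(pad))) // Pad Length field
	}
	buf = append(buf, data...)
	buf = append(buf, pad...)
	return buf
}

func main() {
	p := dataPayload([]byte("hi"), make([]byte, 3))
	fmt.Println(len(p)) // 1 (pad length) + 2 (data) + 3 (pad) = 6
	fmt.Println(p[0])   // pad length byte: 3
}
```

This also shows why the frame's Length exceeds len(data) for padded frames, which matters for the flow-control accounting changes later in this diff.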
@@ -3803,6 +3836,9 @@ func (sc *http2serverConn) closeStream(st *http2stream, err error) {
 	}
 	delete(sc.streams, st.id)
 	if p := st.body; p != nil {
+		// Return any buffered unread bytes worth of conn-level flow control.
+		// See golang.org/issue/16481.
+		sc.sendWindowUpdate(nil, p.Len())
 		p.CloseWithError(err)
 	}
 	st.cw.Close()
@@ -3879,36 +3915,51 @@ func (sc *http2serverConn) processSettingInitialWindowSize(val uint32) error {
 
 func (sc *http2serverConn) processData(f *http2DataFrame) error {
 	sc.serveG.check()
-	data := f.Data()
 
 	id := f.Header().StreamID
 	st, ok := sc.streams[id]
 	if !ok || st.state != http2stateOpen || st.gotTrailerHeader {
+		if sc.inflow.available() < int32(f.Length) {
+			return http2StreamError{id, http2ErrCodeFlowControl}
+		}
+		sc.inflow.take(int32(f.Length))
+		sc.sendWindowUpdate(nil, int(f.Length))
+
 		return http2StreamError{id, http2ErrCodeStreamClosed}
 	}
 	if st.body == nil {
 		panic("internal error: should have a body in this state")
 	}
+	data := f.Data()
 
 	if st.declBodyBytes != -1 && st.bodyBytes+int64(len(data)) > st.declBodyBytes {
 		st.body.CloseWithError(fmt.Errorf("sender tried to send more than declared Content-Length of %d bytes", st.declBodyBytes))
 		return http2StreamError{id, http2ErrCodeStreamClosed}
 	}
-	if len(data) > 0 {
-		if int(st.inflow.available()) < len(data) {
+	if f.Length > 0 {
+		if st.inflow.available() < int32(f.Length) {
 			return http2StreamError{id, http2ErrCodeFlowControl}
 		}
-		st.inflow.take(int32(len(data)))
-		wrote, err := st.body.Write(data)
-		if err != nil {
-			return http2StreamError{id, http2ErrCodeStreamClosed}
-		}
-		if wrote != len(data) {
-			panic("internal error: bad Writer")
-		}
-		st.bodyBytes += int64(len(data))
+		st.inflow.take(int32(f.Length))
+
+		if len(data) > 0 {
+			wrote, err := st.body.Write(data)
+			if err != nil {
+				return http2StreamError{id, http2ErrCodeStreamClosed}
+			}
+			if wrote != len(data) {
+				panic("internal error: bad Writer")
+			}
+			st.bodyBytes += int64(len(data))
+		}
+		if pad := int32(f.Length) - int32(len(data)); pad > 0 {
+			sc.sendWindowUpdate32(nil, pad)
+			sc.sendWindowUpdate32(st, pad)
+		}
 	}
 	if f.StreamEnded() {
 		st.endStream()
@@ -4919,27 +4970,29 @@ func (t *http2Transport) initConnPool() {
 // ClientConn is the state of a single HTTP/2 client connection to an
 // HTTP/2 server.
 type http2ClientConn struct {
-	t        *http2Transport
-	tconn    net.Conn             // usually *tls.Conn, except specialized impls
-	tlsState *tls.ConnectionState // nil only for specialized impls
+	t         *http2Transport
+	tconn     net.Conn             // usually *tls.Conn, except specialized impls
+	tlsState  *tls.ConnectionState // nil only for specialized impls
+	singleUse bool                 // whether being used for a single http.Request
 
 	// readLoop goroutine fields:
 	readerDone chan struct{} // closed on error
 	readerErr  error         // set before readerDone is closed
 
-	mu           sync.Mutex                    // guards following
-	cond         *sync.Cond                    // hold mu; broadcast on flow/closed changes
-	flow         http2flow                     // our conn-level flow control quota (cs.flow is per stream)
-	inflow       http2flow                     // peer's conn-level flow control
-	closed       bool
-	goAway       *http2GoAwayFrame             // if non-nil, the GoAwayFrame we received
-	goAwayDebug  string                        // goAway frame's debug data, retained as a string
-	streams      map[uint32]*http2clientStream // client-initiated
-	nextStreamID uint32
-	bw           *bufio.Writer
-	br           *bufio.Reader
-	fr           *http2Framer
-	lastActive   time.Time
+	mu              sync.Mutex                    // guards following
+	cond            *sync.Cond                    // hold mu; broadcast on flow/closed changes
+	flow            http2flow                     // our conn-level flow control quota (cs.flow is per stream)
+	inflow          http2flow                     // peer's conn-level flow control
+	closed          bool
+	wantSettingsAck bool                          // we sent a SETTINGS frame and haven't heard back
+	goAway          *http2GoAwayFrame             // if non-nil, the GoAwayFrame we received
+	goAwayDebug     string                        // goAway frame's debug data, retained as a string
+	streams         map[uint32]*http2clientStream // client-initiated
+	nextStreamID    uint32
+	bw              *bufio.Writer
+	br              *bufio.Reader
+	fr              *http2Framer
+	lastActive      time.Time
 
 	// Settings from peer:
 	maxFrameSize uint32
@@ -5117,7 +5170,7 @@ func http2shouldRetryRequest(req *Request, err error) bool {
 	return err == http2errClientConnUnusable
 }
 
-func (t *http2Transport) dialClientConn(addr string) (*http2ClientConn, error) {
+func (t *http2Transport) dialClientConn(addr string, singleUse bool) (*http2ClientConn, error) {
 	host, _, err := net.SplitHostPort(addr)
 	if err != nil {
 		return nil, err
@@ -5126,7 +5179,7 @@ func (t *http2Transport) dialClientConn(addr string) (*http2ClientConn, error) {
 	if err != nil {
 		return nil, err
 	}
-	return t.NewClientConn(tconn)
+	return t.newClientConn(tconn, singleUse)
 }
 
 func (t *http2Transport) newTLSConfig(host string) *tls.Config {
@@ -5187,13 +5240,13 @@ func (t *http2Transport) expectContinueTimeout() time.Duration {
 }
 
 func (t *http2Transport) NewClientConn(c net.Conn) (*http2ClientConn, error) {
+	return t.newClientConn(c, false)
+}
+
+func (t *http2Transport) newClientConn(c net.Conn, singleUse bool) (*http2ClientConn, error) {
 	if http2VerboseLogs {
 		t.vlogf("http2: Transport creating client conn to %v", c.RemoteAddr())
 	}
-	if _, err := c.Write(http2clientPreface); err != nil {
-		t.vlogf("client preface write error: %v", err)
-		return nil, err
-	}
 
 	cc := &http2ClientConn{
 		t: t,
@@ -5204,6 +5257,8 @@ func (t *http2Transport) NewClientConn(c net.Conn) (*http2ClientConn, error) {
 		initialWindowSize:    65535,
 		maxConcurrentStreams: 1000,
 		streams:              make(map[uint32]*http2clientStream),
+		singleUse:            singleUse,
+		wantSettingsAck:      true,
 	}
 	cc.cond = sync.NewCond(&cc.mu)
 	cc.flow.add(int32(http2initialWindowSize))
@@ -5228,6 +5283,8 @@ func (t *http2Transport) NewClientConn(c net.Conn) (*http2ClientConn, error) {
 	if max := t.maxHeaderListSize(); max != 0 {
 		initialSettings = append(initialSettings, http2Setting{ID: http2SettingMaxHeaderListSize, Val: max})
 	}
+
+	cc.bw.Write(http2clientPreface)
 	cc.fr.WriteSettings(initialSettings...)
 	cc.fr.WriteWindowUpdate(0, http2transportDefaultConnFlow)
 	cc.inflow.add(http2transportDefaultConnFlow + http2initialWindowSize)
@@ -5236,32 +5293,6 @@ func (t *http2Transport) NewClientConn(c net.Conn) (*http2ClientConn, error) {
 		return nil, cc.werr
 	}
 
-	f, err := cc.fr.ReadFrame()
-	if err != nil {
-		return nil, err
-	}
-	sf, ok := f.(*http2SettingsFrame)
-	if !ok {
-		return nil, fmt.Errorf("expected settings frame, got: %T", f)
-	}
-	cc.fr.WriteSettingsAck()
-	cc.bw.Flush()
-
-	sf.ForeachSetting(func(s http2Setting) error {
-		switch s.ID {
-		case http2SettingMaxFrameSize:
-			cc.maxFrameSize = s.Val
-		case http2SettingMaxConcurrentStreams:
-			cc.maxConcurrentStreams = s.Val
-		case http2SettingInitialWindowSize:
-			cc.initialWindowSize = s.Val
-		default:
-			t.vlogf("Unhandled Setting: %v", s)
-		}
-		return nil
-	})
-
 	go cc.readLoop()
 	return cc, nil
 }
@@ -5288,6 +5319,9 @@ func (cc *http2ClientConn) CanTakeNewRequest() bool {
 }
 
 func (cc *http2ClientConn) canTakeNewRequestLocked() bool {
+	if cc.singleUse && cc.nextStreamID > 1 {
+		return false
+	}
 	return cc.goAway == nil && !cc.closed &&
 		int64(len(cc.streams)+1) < int64(cc.maxConcurrentStreams) &&
 		cc.nextStreamID < 2147483647
@@ -5494,22 +5528,26 @@ func (cc *http2ClientConn) RoundTrip(req *Request) (*Response, error) {
 	bodyWritten := false
 	ctx := http2reqContext(req)
 
+	handleReadLoopResponse := func(re http2resAndError) (*Response, error) {
+		res := re.res
+		if re.err != nil || res.StatusCode > 299 {
+			bodyWriter.cancel()
+			cs.abortRequestBodyWrite(http2errStopReqBodyWrite)
+		}
+		if re.err != nil {
+			cc.forgetStreamID(cs.ID)
+			return nil, re.err
+		}
+		res.Request = req
+		res.TLS = cc.tlsState
+		return res, nil
+	}
+
 	for {
 		select {
 		case re := <-readLoopResCh:
-			res := re.res
-			if re.err != nil || res.StatusCode > 299 {
-				bodyWriter.cancel()
-				cs.abortRequestBodyWrite(http2errStopReqBodyWrite)
-			}
-			if re.err != nil {
-				cc.forgetStreamID(cs.ID)
-				return nil, re.err
-			}
-			res.Request = req
-			res.TLS = cc.tlsState
-			return res, nil
+			return handleReadLoopResponse(re)
 		case <-respHeaderTimer:
 			cc.forgetStreamID(cs.ID)
 			if !hasBody || bodyWritten {
@@ -5541,6 +5579,12 @@ func (cc *http2ClientConn) RoundTrip(req *Request) (*Response, error) {
 			return nil, cs.resetErr
 		case err := <-bodyWriter.resc:
+			// Prefer the read loop's response, if available. Issue 16102.
+			select {
+			case re := <-readLoopResCh:
+				return handleReadLoopResponse(re)
+			default:
+			}
 			if err != nil {
 				return nil, err
 			}
@@ -5648,26 +5692,24 @@ func (cs *http2clientStream) writeRequestBody(body io.Reader, bodyCloser io.Clos
 		}
 	}
 
-	cc.wmu.Lock()
-	if !sentEnd {
-		var trls []byte
-		if hasTrailers {
-			cc.mu.Lock()
-			trls = cc.encodeTrailers(req)
-			cc.mu.Unlock()
-		}
+	var trls []byte
+	if !sentEnd && hasTrailers {
+		cc.mu.Lock()
+		defer cc.mu.Unlock()
+		trls = cc.encodeTrailers(req)
+	}
 
-		if len(trls) > 0 {
-			err = cc.writeHeaders(cs.ID, true, trls)
-		} else {
-			err = cc.fr.WriteData(cs.ID, true, nil)
-		}
-	}
+	cc.wmu.Lock()
+	defer cc.wmu.Unlock()
+
+	if len(trls) > 0 {
+		err = cc.writeHeaders(cs.ID, true, trls)
+	} else {
+		err = cc.fr.WriteData(cs.ID, true, nil)
+	}
 	if ferr := cc.bw.Flush(); ferr != nil && err == nil {
 		err = ferr
 	}
-	cc.wmu.Unlock()
 
 	return err
 }
@@ -5896,6 +5938,14 @@ func (e http2GoAwayError) Error() string {
 		e.LastStreamID, e.ErrCode, e.DebugData)
 }
 
+func http2isEOFOrNetReadError(err error) bool {
+	if err == io.EOF {
+		return true
+	}
+	ne, ok := err.(*net.OpError)
+	return ok && ne.Op == "read"
+}
+
 func (rl *http2clientConnReadLoop) cleanup() {
 	cc := rl.cc
 	defer cc.tconn.Close()
@@ -5904,16 +5954,14 @@ func (rl *http2clientConnReadLoop) cleanup() {
 
 	err := cc.readerErr
 	cc.mu.Lock()
-	if err == io.EOF {
-		if cc.goAway != nil {
-			err = http2GoAwayError{
-				LastStreamID: cc.goAway.LastStreamID,
-				ErrCode:      cc.goAway.ErrCode,
-				DebugData:    cc.goAwayDebug,
-			}
-		} else {
-			err = io.ErrUnexpectedEOF
-		}
+	if cc.goAway != nil && http2isEOFOrNetReadError(err) {
+		err = http2GoAwayError{
+			LastStreamID: cc.goAway.LastStreamID,
+			ErrCode:      cc.goAway.ErrCode,
+			DebugData:    cc.goAwayDebug,
+		}
+	} else if err == io.EOF {
+		err = io.ErrUnexpectedEOF
 	}
 	for _, cs := range rl.activeRes {
 		cs.bufPipe.CloseWithError(err)
@@ -5932,8 +5980,9 @@ func (rl *http2clientConnReadLoop) cleanup() {
 
 func (rl *http2clientConnReadLoop) run() error {
 	cc := rl.cc
-	rl.closeWhenIdle = cc.t.disableKeepAlives()
+	rl.closeWhenIdle = cc.t.disableKeepAlives() || cc.singleUse
 	gotReply := false
+	gotSettings := false
 	for {
 		f, err := cc.fr.ReadFrame()
 		if err != nil {
@@ -5950,6 +5999,13 @@ func (rl *http2clientConnReadLoop) run() error {
 		if http2VerboseLogs {
 			cc.vlogf("http2: Transport received %s", http2summarizeFrame(f))
 		}
+		if !gotSettings {
+			if _, ok := f.(*http2SettingsFrame); !ok {
+				cc.logf("protocol error: received %T before a SETTINGS frame", f)
+				return http2ConnectionError(http2ErrCodeProtocol)
+			}
+			gotSettings = true
+		}
 		maybeIdle := false
 
 		switch f := f.(type) {
@@ -6216,10 +6272,27 @@ var http2errClosedResponseBody = errors.New("http2: response body closed")
 
 func (b http2transportResponseBody) Close() error {
 	cs := b.cs
-	if cs.bufPipe.Err() != io.EOF {
-		cs.cc.writeStreamReset(cs.ID, http2ErrCodeCancel, nil)
-	}
+	cc := cs.cc
+
+	serverSentStreamEnd := cs.bufPipe.Err() == io.EOF
+	unread := cs.bufPipe.Len()
+
+	if unread > 0 || !serverSentStreamEnd {
+		cc.mu.Lock()
+		cc.wmu.Lock()
+		if !serverSentStreamEnd {
+			cc.fr.WriteRSTStream(cs.ID, http2ErrCodeCancel)
+		}
+		if unread > 0 {
+			cc.inflow.add(int32(unread))
+			cc.fr.WriteWindowUpdate(0, uint32(unread))
+		}
+		cc.bw.Flush()
+		cc.wmu.Unlock()
+		cc.mu.Unlock()
+	}
+
 	cs.bufPipe.BreakWithError(http2errClosedResponseBody)
 	return nil
 }
@@ -6227,6 +6300,7 @@ func (b http2transportResponseBody) Close() error {
 func (rl *http2clientConnReadLoop) processData(f *http2DataFrame) error {
 	cc := rl.cc
 	cs := cc.streamByID(f.StreamID, f.StreamEnded())
+	data := f.Data()
 	if cs == nil {
 		cc.mu.Lock()
 		neverSent := cc.nextStreamID
@@ -6237,27 +6311,49 @@ func (rl *http2clientConnReadLoop) processData(f *http2DataFrame) error {
 			return http2ConnectionError(http2ErrCodeProtocol)
 		}
+		if f.Length > 0 {
+			cc.mu.Lock()
+			cc.inflow.add(int32(f.Length))
+			cc.mu.Unlock()
+
+			cc.wmu.Lock()
+			cc.fr.WriteWindowUpdate(0, uint32(f.Length))
+			cc.bw.Flush()
+			cc.wmu.Unlock()
+		}
 		return nil
 	}
-	if data := f.Data(); len(data) > 0 {
-		if cs.bufPipe.b == nil {
+	if f.Length > 0 {
+		if len(data) > 0 && cs.bufPipe.b == nil {
 			cc.logf("http2: Transport received DATA frame for closed stream; closing connection")
 			return http2ConnectionError(http2ErrCodeProtocol)
 		}
 		cc.mu.Lock()
-		if cs.inflow.available() >= int32(len(data)) {
-			cs.inflow.take(int32(len(data)))
+		if cs.inflow.available() >= int32(f.Length) {
+			cs.inflow.take(int32(f.Length))
 		} else {
 			cc.mu.Unlock()
 			return http2ConnectionError(http2ErrCodeFlowControl)
 		}
+		if pad := int32(f.Length) - int32(len(data)); pad > 0 {
+			cs.inflow.add(pad)
+			cc.inflow.add(pad)
+			cc.wmu.Lock()
+			cc.fr.WriteWindowUpdate(0, uint32(pad))
+			cc.fr.WriteWindowUpdate(cs.ID, uint32(pad))
+			cc.bw.Flush()
+			cc.wmu.Unlock()
+		}
 		cc.mu.Unlock()
 
-		if _, err := cs.bufPipe.Write(data); err != nil {
-			rl.endStreamError(cs, err)
-			return err
+		if len(data) > 0 {
+			if _, err := cs.bufPipe.Write(data); err != nil {
+				rl.endStreamError(cs, err)
+				return err
+			}
 		}
 	}
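The accounting change in the hunk above keys flow control on the frame's Length (which includes padding and its length byte) rather than on len(data), then immediately refunds the padding overhead with WINDOW_UPDATE frames, since only the data bytes are consumed by the body. A small sketch of just that arithmetic:

```go
package main

import "fmt"

// padRefund computes the flow-control bytes refunded right away for a
// padded DATA frame: frameLength is charged to the window, but only
// dataLen bytes reach the body, so the difference is returned via
// WINDOW_UPDATE at both the connection and stream level.
func padRefund(frameLength uint32, dataLen int) int32 {
	if pad := int32(frameLength) - int32(dataLen); pad > 0 {
		return pad
	}
	return 0
}

func main() {
	// 100-byte frame carrying 90 data bytes: 10 bytes of padding
	// overhead are refunded immediately.
	fmt.Println(padRefund(100, 90)) // 10
	// Unpadded frame: nothing to refund.
	fmt.Println(padRefund(64, 64)) // 0
}
```

Without this refund, a peer sending heavily padded frames would slowly exhaust the receiver's window even though no payload was buffered, which is the leak this diff fixes.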
@@ -6282,7 +6378,7 @@ func (rl *http2clientConnReadLoop) endStreamError(cs *http2clientStream, err err
 	}
 	cs.bufPipe.closeWithErrorAndCode(err, code)
 	delete(rl.activeRes, cs.ID)
-	if cs.req.Close || cs.req.Header.Get("Connection") == "close" {
+	if http2isConnectionCloseRequest(cs.req) {
 		rl.closeWhenIdle = true
 	}
 }
@@ -6312,7 +6408,16 @@ func (rl *http2clientConnReadLoop) processSettings(f *http2SettingsFrame) error
 	cc := rl.cc
 	cc.mu.Lock()
 	defer cc.mu.Unlock()
-	return f.ForeachSetting(func(s http2Setting) error {
+
+	if f.IsAck() {
+		if cc.wantSettingsAck {
+			cc.wantSettingsAck = false
+			return nil
+		}
+		return http2ConnectionError(http2ErrCodeProtocol)
+	}
+
+	err := f.ForeachSetting(func(s http2Setting) error {
 		switch s.ID {
 		case http2SettingMaxFrameSize:
 			cc.maxFrameSize = s.Val

@@ -6327,6 +6432,16 @@ func (rl *http2clientConnReadLoop) processSettings(f *http2SettingsFrame) error
 		}
 		return nil
 	})
+	if err != nil {
+		return err
+	}
+
+	cc.wmu.Lock()
+	defer cc.wmu.Unlock()
+
+	cc.fr.WriteSettingsAck()
+	cc.bw.Flush()
+	return cc.werr
 }
 
 func (rl *http2clientConnReadLoop) processWindowUpdate(f *http2WindowUpdateFrame) error {
@@ -6538,6 +6653,12 @@ func (s http2bodyWriterState) scheduleBodyWrite() {
 	}
 }
 
+// isConnectionCloseRequest reports whether req should use its own
+// connection for a single request and then close the connection.
+func http2isConnectionCloseRequest(req *Request) bool {
+	return req.Close || httplex.HeaderValuesContainsToken(req.Header["Connection"], "close")
+}
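The httplex helper used above does token-level matching of "Connection" header values rather than a plain string compare, so "keep-alive, close" and "Close" both count. A simplified stand-in for illustration; the real httplex.HeaderValuesContainsToken is stricter about HTTP token syntax:

```go
package main

import (
	"fmt"
	"strings"
)

// containsToken reports whether any comma-separated element of the
// header values equals token case-insensitively (a loose sketch of
// httplex.HeaderValuesContainsToken).
func containsToken(values []string, token string) bool {
	for _, v := range values {
		for _, f := range strings.Split(v, ",") {
			if strings.EqualFold(strings.TrimSpace(f), token) {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(containsToken([]string{"keep-alive, Close"}, "close")) // true
	fmt.Println(containsToken([]string{"keep-alive"}, "close"))        // false
}
```

That token check, plus req.Close, is what routes a request onto its own single-use connection in getClientConn earlier in this diff.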
+
 // writeFramer is implemented by any type that is used to write frames.
 type http2writeFramer interface {
 	writeFrame(http2writeContext) error
@@ -7,7 +7,7 @@ package http
 import (
 	"strings"
 
-	"golang.org/x/net/lex/httplex"
+	"golang_org/x/net/lex/httplex"
 )
 
 // maxInt64 is the effective "infinite" value for the Server and
@@ -4716,3 +4716,14 @@ func BenchmarkCloseNotifier(b *testing.B) {
 	}
 	b.StopTimer()
 }
+
+// Verify this doesn't race (Issue 16505)
+func TestConcurrentServerServe(t *testing.T) {
+	for i := 0; i < 100; i++ {
+		ln1 := &oneConnListener{conn: nil}
+		ln2 := &oneConnListener{conn: nil}
+		srv := Server{}
+		go func() { srv.Serve(ln1) }()
+		go func() { srv.Serve(ln2) }()
+	}
+}
@@ -28,7 +28,7 @@ import (
 	"sync/atomic"
 	"time"
 
-	"golang.org/x/net/lex/httplex"
+	"golang_org/x/net/lex/httplex"
 )
 
 // Errors used by the HTTP server.
@@ -2129,8 +2129,8 @@ type Server struct {
 	ErrorLog *log.Logger
 
 	disableKeepAlives int32     // accessed atomically.
-	nextProtoOnce     sync.Once // guards initialization of TLSNextProto in Serve
-	nextProtoErr      error
+	nextProtoOnce     sync.Once // guards setupHTTP2_* init
+	nextProtoErr      error     // result of http2.ConfigureServer if used
 }
 
 // A ConnState represents the state of a client connection to a server.
@@ -2260,10 +2260,8 @@ func (srv *Server) Serve(l net.Listener) error {
 	}
 	var tempDelay time.Duration // how long to sleep on accept failure
 
-	if srv.shouldConfigureHTTP2ForServe() {
-		if err := srv.setupHTTP2(); err != nil {
-			return err
-		}
+	if err := srv.setupHTTP2_Serve(); err != nil {
+		return err
 	}
 
 	// TODO: allow changing base context? can't imagine concrete
@@ -2408,7 +2406,7 @@ func (srv *Server) ListenAndServeTLS(certFile, keyFile string) error {
 
 	// Setup HTTP/2 before srv.Serve, to initialize srv.TLSConfig
 	// before we clone it and create the TLS Listener.
-	if err := srv.setupHTTP2(); err != nil {
+	if err := srv.setupHTTP2_ListenAndServeTLS(); err != nil {
 		return err
 	}
@@ -2436,14 +2434,36 @@ func (srv *Server) ListenAndServeTLS(certFile, keyFile string) error {
 	return srv.Serve(tlsListener)
 }
 
-func (srv *Server) setupHTTP2() error {
+// setupHTTP2_ListenAndServeTLS conditionally configures HTTP/2 on
+// srv and returns whether there was an error setting it up. If it is
+// not configured for policy reasons, nil is returned.
+func (srv *Server) setupHTTP2_ListenAndServeTLS() error {
 	srv.nextProtoOnce.Do(srv.onceSetNextProtoDefaults)
 	return srv.nextProtoErr
 }
 
+// setupHTTP2_Serve is called from (*Server).Serve and conditionally
+// configures HTTP/2 on srv using a more conservative policy than
+// setupHTTP2_ListenAndServeTLS because Serve may be called
+// concurrently.
+//
+// The tests named TestTransportAutomaticHTTP2* and
+// TestConcurrentServerServe in server_test.go demonstrate some
+// of the supported use cases and motivations.
+func (srv *Server) setupHTTP2_Serve() error {
+	srv.nextProtoOnce.Do(srv.onceSetNextProtoDefaults_Serve)
+	return srv.nextProtoErr
+}
+
+func (srv *Server) onceSetNextProtoDefaults_Serve() {
+	if srv.shouldConfigureHTTP2ForServe() {
+		srv.onceSetNextProtoDefaults()
+	}
+}
+
 // onceSetNextProtoDefaults configures HTTP/2, if the user hasn't
 // configured otherwise. (by setting srv.TLSNextProto non-nil)
-// It must only be called via srv.nextProtoOnce (use srv.setupHTTP2).
+// It must only be called via srv.nextProtoOnce (use srv.setupHTTP2_*).
 func (srv *Server) onceSetNextProtoDefaults() {
 	if strings.Contains(os.Getenv("GODEBUG"), "http2server=0") {
 		return
@@ -18,7 +18,7 @@ import (
 	"strings"
 	"sync"
 
-	"golang.org/x/net/lex/httplex"
+	"golang_org/x/net/lex/httplex"
 )
 
 // ErrLineTooLong is returned when reading request or response bodies
@@ -27,7 +27,7 @@ import (
 	"sync"
 	"time"
 
-	"golang.org/x/net/lex/httplex"
+	"golang_org/x/net/lex/httplex"
 )
 
 // DefaultTransport is the default implementation of Transport and is
@@ -251,6 +251,9 @@ func ProxyFromEnvironment(req *Request) (*url.URL, error) {
 	}
 	if proxy == "" {
 		proxy = httpProxyEnv.Get()
+		if proxy != "" && os.Getenv("REQUEST_METHOD") != "" {
+			return nil, errors.New("net/http: refusing to use HTTP_PROXY value in CGI environment; see golang.org/s/cgihttpproxy")
+		}
 	}
 	if proxy == "" {
 		return nil, nil
@@ -380,6 +383,11 @@ func (t *Transport) RoundTrip(req *Request) (*Response, error) {
 			return resp, nil
 		}
 		if !pconn.shouldRetryRequest(req, err) {
+			// Issue 16465: return underlying net.Conn.Read error from peek,
+			// as we've historically done.
+			if e, ok := err.(transportReadFromServerError); ok {
+				err = e.err
+			}
 			return nil, err
 		}
 		testHookRoundTripRetried()
@@ -412,11 +420,19 @@ func (pc *persistConn) shouldRetryRequest(req *Request, err error) bool {
 		// first, per golang.org/issue/15723
 		return false
 	}
-	if _, ok := err.(nothingWrittenError); ok {
+	switch err.(type) {
+	case nothingWrittenError:
 		// We never wrote anything, so it's safe to retry.
 		return true
+	case transportReadFromServerError:
+		// We got some non-EOF net.Conn.Read failure reading
+		// the 1st response byte from the server.
+		return true
 	}
-	if err == errServerClosedIdle || err == errServerClosedConn {
+	if err == errServerClosedIdle {
 		// The server replied with io.EOF while we were trying to
 		// read the response. Probably an unfortunately keep-alive
 		// timeout, just as the client was writing a request.
 		return true
 	}
 	return false // conservatively
@@ -563,10 +579,25 @@ var (
 	errCloseIdleConns   = errors.New("http: CloseIdleConnections called")
 	errReadLoopExiting  = errors.New("http: persistConn.readLoop exiting")
 	errServerClosedIdle = errors.New("http: server closed idle connection")
-	errServerClosedConn = errors.New("http: server closed connection")
 	errIdleConnTimeout  = errors.New("http: idle connection timeout")
 )
 
+// transportReadFromServerError is used by Transport.readLoop when the
+// 1 byte peek read fails and we're actually anticipating a response.
+// Usually this is just due to the inherent keep-alive shut down race,
+// where the server closed the connection at the same time the client
+// wrote. The underlying err field is usually io.EOF or some
+// ECONNRESET sort of thing which varies by platform. But it might be
+// the user's custom net.Conn.Read error too, so we carry it along for
+// them to return from Transport.RoundTrip.
+type transportReadFromServerError struct {
+	err error
+}
+
+func (e transportReadFromServerError) Error() string {
+	return fmt.Sprintf("net/http: Transport failed to read from server: %v", e.err)
+}
+
 func (t *Transport) putOrCloseIdleConn(pconn *persistConn) {
 	if err := t.tryPutIdleConn(pconn); err != nil {
 		pconn.close(err)
@@ -1290,7 +1321,10 @@ func (pc *persistConn) mapRoundTripErrorFromReadLoop(startBytesWritten int64, er
 	if pc.isCanceled() {
 		return errRequestCanceled
 	}
-	if err == errServerClosedIdle || err == errServerClosedConn {
+	if err == errServerClosedIdle {
+		return err
+	}
+	if _, ok := err.(transportReadFromServerError); ok {
 		return err
 	}
 	if pc.isBroken() {
@@ -1311,7 +1345,11 @@ func (pc *persistConn) mapRoundTripErrorAfterClosed(startBytesWritten int64) err
 		return errRequestCanceled
 	}
 	err := pc.closed
-	if err == errServerClosedIdle || err == errServerClosedConn {
+	if err == errServerClosedIdle {
+		// Don't decorate
+		return err
+	}
+	if _, ok := err.(transportReadFromServerError); ok {
 		// Don't decorate
 		return err
 	}
@@ -1380,7 +1418,7 @@ func (pc *persistConn) readLoop() {
 		if err == nil {
 			resp, err = pc.readResponse(rc, trace)
 		} else {
-			err = errServerClosedConn
+			err = transportReadFromServerError{err}
 			closeErr = err
 		}
@@ -1781,6 +1819,7 @@ func (pc *persistConn) roundTrip(req *transportRequest) (resp *Response, err err
 	var re responseAndError
 	var respHeaderTimer <-chan time.Time
 	cancelChan := req.Request.Cancel
+	ctxDoneChan := req.Context().Done()
 WaitResponse:
 	for {
 		testHookWaitResLoop()
@@ -1812,9 +1851,11 @@ WaitResponse:
 		case <-cancelChan:
 			pc.t.CancelRequest(req.Request)
 			cancelChan = nil
-		case <-req.Context().Done():
+			ctxDoneChan = nil
+		case <-ctxDoneChan:
 			pc.t.CancelRequest(req.Request)
 			cancelChan = nil
+			ctxDoneChan = nil
 		}
 	}
@@ -46,17 +46,22 @@ func TestTransportPersistConnReadLoopEOF(t *testing.T) {
 	conn.Close() // simulate the server hanging up on the client
 
 	_, err = pc.roundTrip(treq)
-	if err != errServerClosedConn && err != errServerClosedIdle {
+	if !isTransportReadFromServerError(err) && err != errServerClosedIdle {
 		t.Fatalf("roundTrip = %#v, %v; want errServerClosedConn or errServerClosedIdle", err, err)
 	}
 
 	<-pc.closech
 	err = pc.closed
-	if err != errServerClosedConn && err != errServerClosedIdle {
+	if !isTransportReadFromServerError(err) && err != errServerClosedIdle {
 		t.Fatalf("pc.closed = %#v, %v; want errServerClosedConn or errServerClosedIdle", err, err)
 	}
 }
 
+func isTransportReadFromServerError(err error) bool {
+	_, ok := err.(transportReadFromServerError)
+	return ok
+}
+
 func newLocalListener(t *testing.T) net.Listener {
 	ln, err := net.Listen("tcp", "127.0.0.1:0")
 	if err != nil {
@@ -2060,7 +2060,8 @@ type proxyFromEnvTest struct {
 
 	env      string // HTTP_PROXY
 	httpsenv string // HTTPS_PROXY
-	noenv    string // NO_RPXY
+	noenv    string // NO_PROXY
+	reqmeth  string // REQUEST_METHOD
 
 	want    string
 	wanterr error
@ -2084,6 +2085,10 @@ func (t proxyFromEnvTest) String() string {
|
|||
space()
|
||||
fmt.Fprintf(&buf, "no_proxy=%q", t.noenv)
|
||||
}
|
||||
if t.reqmeth != "" {
|
||||
space()
|
||||
fmt.Fprintf(&buf, "request_method=%q", t.reqmeth)
|
||||
}
|
||||
req := "http://example.com"
|
||||
if t.req != "" {
|
||||
req = t.req
|
||||
|
|
@ -2107,6 +2112,12 @@ var proxyFromEnvTests = []proxyFromEnvTest{
|
|||
{req: "https://secure.tld/", env: "http.proxy.tld", httpsenv: "secure.proxy.tld", want: "http://secure.proxy.tld"},
|
||||
{req: "https://secure.tld/", env: "http.proxy.tld", httpsenv: "https://secure.proxy.tld", want: "https://secure.proxy.tld"},
|
||||
|
||||
// Issue 16405: don't use HTTP_PROXY in a CGI environment,
|
||||
// where HTTP_PROXY can be attacker-controlled.
|
||||
{env: "http://10.1.2.3:8080", reqmeth: "POST",
|
||||
want: "<nil>",
|
||||
wanterr: errors.New("net/http: refusing to use HTTP_PROXY value in CGI environment; see golang.org/s/cgihttpproxy")},
|
||||
|
||||
{want: "<nil>"},
|
||||
|
||||
{noenv: "example.com", req: "http://example.com/", env: "proxy", want: "<nil>"},
|
||||
|
|
@ -2122,6 +2133,7 @@ func TestProxyFromEnvironment(t *testing.T) {
|
|||
os.Setenv("HTTP_PROXY", tt.env)
|
||||
os.Setenv("HTTPS_PROXY", tt.httpsenv)
|
||||
os.Setenv("NO_PROXY", tt.noenv)
|
||||
os.Setenv("REQUEST_METHOD", tt.reqmeth)
|
||||
ResetCachedEnvironment()
|
||||
reqURL := tt.req
|
||||
if reqURL == "" {
|
||||
|
|
@@ -3499,6 +3511,45 @@ func TestTransportIdleConnTimeout(t *testing.T) {
 	}
 }

+type funcConn struct {
+	net.Conn
+	read  func([]byte) (int, error)
+	write func([]byte) (int, error)
+}
+
+func (c funcConn) Read(p []byte) (int, error)  { return c.read(p) }
+func (c funcConn) Write(p []byte) (int, error) { return c.write(p) }
+func (c funcConn) Close() error                { return nil }
+
+// Issue 16465: Transport.RoundTrip should return the raw net.Conn.Read error from Peek
+// back to the caller.
+func TestTransportReturnsPeekError(t *testing.T) {
+	errValue := errors.New("specific error value")
+
+	wrote := make(chan struct{})
+	var wroteOnce sync.Once
+
+	tr := &Transport{
+		Dial: func(network, addr string) (net.Conn, error) {
+			c := funcConn{
+				read: func([]byte) (int, error) {
+					<-wrote
+					return 0, errValue
+				},
+				write: func(p []byte) (int, error) {
+					wroteOnce.Do(func() { close(wrote) })
+					return len(p), nil
+				},
+			}
+			return c, nil
+		},
+	}
+	_, err := tr.RoundTrip(httptest.NewRequest("GET", "http://fake.tld/", nil))
+	if err != errValue {
+		t.Errorf("error = %#v; want %v", err, errValue)
+	}
+}
+
 var errFakeRoundTrip = errors.New("fake roundtrip")

 type funcRoundTripper func()

@@ -9,7 +9,7 @@ package net
 import (
 	"syscall"

-	"golang.org/x/net/route"
+	"golang_org/x/net/route"
 )

 // If the ifindex is zero, interfaceTable returns mappings of all

@@ -9,7 +9,7 @@ package net
 import (
 	"syscall"

-	"golang.org/x/net/route"
+	"golang_org/x/net/route"
 )

 func interfaceMessages(ifindex int) ([]route.Message, error) {

@@ -7,7 +7,7 @@ package net
 import (
 	"syscall"

-	"golang.org/x/net/route"
+	"golang_org/x/net/route"
 )

 func interfaceMessages(ifindex int) ([]route.Message, error) {

@@ -7,7 +7,7 @@ package net
 import (
 	"syscall"

-	"golang.org/x/net/route"
+	"golang_org/x/net/route"
 )

 func interfaceMessages(ifindex int) ([]route.Message, error) {

@@ -8,6 +8,11 @@
 // AUTH     RFC 2554
 // STARTTLS RFC 3207
 // Additional extensions may be handled by clients.
+//
+// The smtp package is frozen and not accepting new features.
+// Some external packages provide more functionality. See:
+//
+//   https://godoc.org/?q=smtp
 package smtp

 import (

@@ -80,6 +80,7 @@
 package runtime

 import (
+	"runtime/internal/atomic"
 	"runtime/internal/sys"
 	"unsafe"
 )

@@ -176,7 +177,7 @@ func cgocallbackg(ctxt uintptr) {

 func cgocallbackg1(ctxt uintptr) {
 	gp := getg()
-	if gp.m.needextram {
+	if gp.m.needextram || atomic.Load(&extraMWaiters) > 0 {
 		gp.m.needextram = false
 		systemstack(newextram)
 	}

@@ -32,13 +32,13 @@ TEXT runtime∕internal∕atomic·Loaduint(SB), NOSPLIT, $0-8
 TEXT runtime∕internal∕atomic·Storeuintptr(SB), NOSPLIT, $0-8
 	JMP	runtime∕internal∕atomic·Store(SB)

-TEXT runtime∕internal∕atomic·Xadduintptr(SB), NOSPLIT, $0-8
+TEXT runtime∕internal∕atomic·Xadduintptr(SB), NOSPLIT, $0-12
 	JMP	runtime∕internal∕atomic·Xadd(SB)

-TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-12
 	JMP	runtime∕internal∕atomic·Load64(SB)

-TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-20
 	JMP	runtime∕internal∕atomic·Xadd64(SB)

@@ -52,7 +52,7 @@ TEXT runtime∕internal∕atomic·Storeuintptr(SB), NOSPLIT, $0-16
 TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-16
 	JMP	runtime∕internal∕atomic·Load64(SB)

-TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-24
 	JMP	runtime∕internal∕atomic·Xadd64(SB)

 // bool Casp(void **val, void *old, void *new)

@@ -29,10 +29,10 @@ TEXT runtime∕internal∕atomic·Loaduintptr(SB), NOSPLIT, $0-12
 TEXT runtime∕internal∕atomic·Loaduint(SB), NOSPLIT, $0-12
 	JMP	runtime∕internal∕atomic·Load(SB)

-TEXT runtime∕internal∕atomic·Storeuintptr(SB), NOSPLIT, $0-12
+TEXT runtime∕internal∕atomic·Storeuintptr(SB), NOSPLIT, $0-8
 	JMP	runtime∕internal∕atomic·Store(SB)

-TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-24
+TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-16
 	JMP	runtime∕internal∕atomic·Load64(SB)

 TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-24

@@ -61,11 +61,11 @@ TEXT runtime∕internal∕atomic·Loaduint(SB),NOSPLIT,$0-8
 TEXT runtime∕internal∕atomic·Storeuintptr(SB),NOSPLIT,$0-8
 	B	runtime∕internal∕atomic·Store(SB)

-TEXT runtime∕internal∕atomic·Xadduintptr(SB),NOSPLIT,$0-8
+TEXT runtime∕internal∕atomic·Xadduintptr(SB),NOSPLIT,$0-12
 	B	runtime∕internal∕atomic·Xadd(SB)

-TEXT runtime∕internal∕atomic·Loadint64(SB),NOSPLIT,$0-16
+TEXT runtime∕internal∕atomic·Loadint64(SB),NOSPLIT,$0-12
 	B	runtime∕internal∕atomic·Load64(SB)

-TEXT runtime∕internal∕atomic·Xaddint64(SB),NOSPLIT,$0-16
+TEXT runtime∕internal∕atomic·Xaddint64(SB),NOSPLIT,$0-20
 	B	runtime∕internal∕atomic·Xadd64(SB)

@@ -38,13 +38,13 @@ TEXT runtime∕internal∕atomic·Loaduint(SB), NOSPLIT, $-8-16
 TEXT runtime∕internal∕atomic·Storeuintptr(SB), NOSPLIT, $0-16
 	B	runtime∕internal∕atomic·Store64(SB)

-TEXT runtime∕internal∕atomic·Xadduintptr(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Xadduintptr(SB), NOSPLIT, $0-24
 	B	runtime∕internal∕atomic·Xadd64(SB)

 TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-16
 	B	runtime∕internal∕atomic·Load64(SB)

-TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-24
 	B	runtime∕internal∕atomic·Xadd64(SB)

 // bool Casp(void **val, void *old, void *new)

@@ -77,7 +77,7 @@ TEXT runtime∕internal∕atomic·Xadduintptr(SB), NOSPLIT, $0-24
 TEXT runtime∕internal∕atomic·Loadint64(SB), NOSPLIT, $0-16
 	BR	runtime∕internal∕atomic·Load64(SB)

-TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-16
+TEXT runtime∕internal∕atomic·Xaddint64(SB), NOSPLIT, $0-24
 	BR	runtime∕internal∕atomic·Xadd64(SB)

 // bool casp(void **val, void *old, void *new)

@@ -145,7 +145,7 @@ func writebarrierptr(dst *uintptr, src uintptr) {
 	if !writeBarrier.needed {
 		return
 	}
-	if src != 0 && src < sys.PhysPageSize {
+	if src != 0 && src < minPhysPageSize {
 		systemstack(func() {
 			print("runtime: writebarrierptr *", dst, " = ", hex(src), "\n")
 			throw("bad pointer in write barrier")

@@ -164,7 +164,7 @@ func writebarrierptr_nostore(dst *uintptr, src uintptr) {
 	if !writeBarrier.needed {
 		return
 	}
-	if src != 0 && src < sys.PhysPageSize {
+	if src != 0 && src < minPhysPageSize {
 		systemstack(func() { throw("bad pointer in write barrier") })
 	}
 	writebarrierptr_nostore1(dst, src)

@@ -10,8 +10,8 @@ import (
 )

 const (
-	_PAGE_SIZE = sys.PhysPageSize
-	_EACCES    = 13
+	_EACCES = 13
 	_EINVAL = 22
 )

 // NOTE: vec must be just 1 byte long here.

@@ -22,13 +22,19 @@ const (
 var addrspace_vec [1]byte

 func addrspace_free(v unsafe.Pointer, n uintptr) bool {
-	var chunk uintptr
-	for off := uintptr(0); off < n; off += chunk {
-		chunk = _PAGE_SIZE * uintptr(len(addrspace_vec))
-		if chunk > (n - off) {
-			chunk = n - off
+	// Step by the minimum possible physical page size. This is
+	// safe even if we have the wrong physical page size; mincore
+	// will just return EINVAL for unaligned addresses.
+	for off := uintptr(0); off < n; off += minPhysPageSize {
+		// Use a length of 1 byte, which the kernel will round
+		// up to one physical page regardless of the true
+		// physical page size.
+		errval := mincore(unsafe.Pointer(uintptr(v)+off), 1, &addrspace_vec[0])
+		if errval == -_EINVAL {
+			// Address is not a multiple of the physical
+			// page size. That's fine.
+			continue
 		}
-		errval := mincore(unsafe.Pointer(uintptr(v)+off), chunk, &addrspace_vec[0])
 		// ENOMEM means unmapped, which is what we want.
 		// Anything else we assume means the pages are mapped.
 		if errval != -_ENOMEM {

@@ -742,11 +742,10 @@ const gcCreditSlack = 2000
 // can accumulate on a P before updating gcController.assistTime.
 const gcAssistTimeSlack = 5000

-// gcOverAssistBytes determines how many extra allocation bytes of
-// assist credit a GC assist builds up when an assist happens. This
-// amortizes the cost of an assist by pre-paying for this many bytes
-// of future allocations.
-const gcOverAssistBytes = 1 << 20
+// gcOverAssistWork determines how many extra units of scan work a GC
+// assist does when an assist happens. This amortizes the cost of an
+// assist by pre-paying for this many bytes of future allocations.
+const gcOverAssistWork = 64 << 10

 var work struct {
 	full uint64 // lock-free list of full blocks workbuf

@@ -393,10 +393,15 @@ func gcAssistAlloc(gp *g) {
 	}

 	// Compute the amount of scan work we need to do to make the
-	// balance positive. We over-assist to build up credit for
-	// future allocations and amortize the cost of assisting.
-	debtBytes := -gp.gcAssistBytes + gcOverAssistBytes
+	// balance positive. When the required amount of work is low,
+	// we over-assist to build up credit for future allocations
+	// and amortize the cost of assisting.
+	debtBytes := -gp.gcAssistBytes
 	scanWork := int64(gcController.assistWorkPerByte * float64(debtBytes))
+	if scanWork < gcOverAssistWork {
+		scanWork = gcOverAssistWork
+		debtBytes = int64(gcController.assistBytesPerWork * float64(scanWork))
+	}

 retry:
 	// Steal as much credit as we can from the background GC's

@@ -14,6 +14,11 @@ import (
 	"unsafe"
 )

+// minPhysPageSize is a lower-bound on the physical page size. The
+// true physical page size may be larger than this. In contrast,
+// sys.PhysPageSize is an upper-bound on the physical page size.
+const minPhysPageSize = 4096
+
 // Main malloc heap.
 // The heap itself is the "free[]" and "large" arrays,
 // but all the other global data is here too.

@@ -4,8 +4,69 @@

 // Package pprof writes runtime profiling data in the format expected
 // by the pprof visualization tool.
+//
+// Profiling a Go program
+//
+// The first step to profiling a Go program is to enable profiling.
+// Support for profiling benchmarks built with the standard testing
+// package is built into go test. For example, the following command
+// runs benchmarks in the current directory and writes the CPU and
+// memory profiles to cpu.prof and mem.prof:
+//
+//     go test -cpuprofile cpu.prof -memprofile mem.prof -bench .
+//
+// To add equivalent profiling support to a standalone program, add
+// code like the following to your main function:
+//
+//     var cpuprofile = flag.String("cpuprofile", "", "write cpu profile `file`")
+//     var memprofile = flag.String("memprofile", "", "write memory profile to `file`")
+//
+//     func main() {
+//         flag.Parse()
+//         if *cpuprofile != "" {
+//             f, err := os.Create(*cpuprofile)
+//             if err != nil {
+//                 log.Fatal("could not create CPU profile: ", err)
+//             }
+//             if err := pprof.StartCPUProfile(f); err != nil {
+//                 log.Fatal("could not start CPU profile: ", err)
+//             }
+//             defer pprof.StopCPUProfile()
+//         }
+//         ...
+//         if *memprofile != "" {
+//             f, err := os.Create(*memprofile)
+//             if err != nil {
+//                 log.Fatal("could not create memory profile: ", err)
+//             }
+//             runtime.GC() // get up-to-date statistics
+//             if err := pprof.WriteHeapProfile(f); err != nil {
+//                 log.Fatal("could not write memory profile: ", err)
+//             }
+//             f.Close()
+//         }
+//     }
+//
+// There is also a standard HTTP interface to profiling data. Adding
+// the following line will install handlers under the /debug/pprof/
+// URL to download live profiles:
+//
+//     import _ "net/http/pprof"
+//
+// See the net/http/pprof package for more details.
+//
+// Profiles can then be visualized with the pprof tool:
+//
+//     go tool pprof cpu.prof
+//
+// There are many commands available from the pprof command line.
+// Commonly used commands include "top", which prints a summary of the
+// top program hot-spots, and "web", which opens an interactive graph
+// of hot-spots and their call graphs. Use "help" for information on
+// all pprof commands.
+//
 // For more information about pprof, see
-// http://github.com/google/pprof/.
+// https://github.com/google/pprof/blob/master/doc/pprof.md.
 package pprof

 import (

@@ -353,12 +414,9 @@ func printStackRecord(w io.Writer, stk []uintptr, allFrames bool) {
 		if name == "" {
 			show = true
 			fmt.Fprintf(w, "#\t%#x\n", frame.PC)
-		} else {
+		} else if name != "runtime.goexit" && (show || !strings.HasPrefix(name, "runtime.")) {
 			// Hide runtime.goexit and any runtime functions at the beginning.
 			// This is useful mainly for allocation traces.
-			if name == "runtime.goexit" || !show && strings.HasPrefix(name, "runtime.") {
-				continue
-			}
 			show = true
 			fmt.Fprintf(w, "#\t%#x\t%s+%#x\t%s:%d\n", frame.PC, name, frame.PC-frame.Entry, frame.File, frame.Line)
 		}

@@ -497,6 +497,10 @@ func TestBlockProfile(t *testing.T) {
 		t.Fatalf("Bad profile header:\n%v", prof)
 	}

+	if strings.HasSuffix(prof, "#\t0x0\n\n") {
+		t.Errorf("Useless 0 suffix:\n%v", prof)
+	}
+
 	for _, test := range tests {
 		if !regexp.MustCompile(strings.Replace(test.re, "\t", "\t+", -1)).MatchString(prof) {
 			t.Fatalf("Bad %v entry, expect:\n%v\ngot:\n%v", test.name, test.re, prof)

@@ -1389,10 +1389,27 @@ func needm(x byte) {

 var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")

-// newextram allocates an m and puts it on the extra list.
+// newextram allocates m's and puts them on the extra list.
 // It is called with a working local m, so that it can do things
 // like call schedlock and allocate.
 func newextram() {
+	c := atomic.Xchg(&extraMWaiters, 0)
+	if c > 0 {
+		for i := uint32(0); i < c; i++ {
+			oneNewExtraM()
+		}
+	} else {
+		// Make sure there is at least one extra M.
+		mp := lockextra(true)
+		unlockextra(mp)
+		if mp == nil {
+			oneNewExtraM()
+		}
+	}
+}
+
+// oneNewExtraM allocates an m and puts it on the extra list.
+func oneNewExtraM() {
 	// Create extra goroutine locked to extra m.
 	// The goroutine is the context in which the cgo callback will run.
 	// The sched.pc will never be returned to, but setting it to

@@ -1485,6 +1502,7 @@ func getm() uintptr {
 }

 var extram uintptr
+var extraMWaiters uint32

 // lockextra locks the extra list and returns the list head.
 // The caller must unlock the list by storing a new list head

@@ -1495,6 +1513,7 @@ var extram uintptr
 func lockextra(nilokay bool) *m {
 	const locked = 1

+	incr := false
 	for {
 		old := atomic.Loaduintptr(&extram)
 		if old == locked {

@@ -1503,6 +1522,13 @@ func lockextra(nilokay bool) *m {
 			continue
 		}
 		if old == 0 && !nilokay {
+			if !incr {
+				// Add 1 to the number of threads
+				// waiting for an M.
+				// This is cleared by newextram.
+				atomic.Xadd(&extraMWaiters, 1)
+				incr = true
+			}
 			usleep(1)
 			continue
 		}

@@ -4,4 +4,4 @@ the LLVM project (http://llvm.org/git/compiler-rt.git).

 To update the .syso files use golang.org/x/build/cmd/racebuild.

-Current runtime is built on rev 9d79ea3416bfbe3acac50e47802ee9621bf53254.
+Current runtime is built on rev e35e7c00b5c7e7ee5e24d537b80cb0d34cebb038.

Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -221,3 +221,21 @@ func BenchmarkSyncLeak(b *testing.B) {
 	}
 	wg.Wait()
 }
+
+func BenchmarkStackLeak(b *testing.B) {
+	done := make(chan bool, 1)
+	for i := 0; i < b.N; i++ {
+		go func() {
+			growStack(rand.Intn(100))
+			done <- true
+		}()
+		<-done
+	}
+}
+
+func growStack(i int) {
+	if i == 0 {
+		return
+	}
+	growStack(i - 1)
+}

Binary file not shown.
@@ -196,15 +196,16 @@ timeloop:

 systime:
 	// Fall back to system call (usually first call in this thread)
-	LEAL	12(SP), AX	// must be non-nil, unused
+	LEAL	16(SP), AX	// must be non-nil, unused
 	MOVL	AX, 4(SP)
 	MOVL	$0, 8(SP)	// time zone pointer
+	MOVL	$0, 12(SP)	// required as of Sierra; Issue 16570
 	MOVL	$116, AX
 	INT	$0x80
 	CMPL	AX, $0
 	JNE	inreg
-	MOVL	12(SP), AX
-	MOVL	16(SP), DX
+	MOVL	16(SP), AX
+	MOVL	20(SP), DX
 inreg:
 	// sec is in AX, usec in DX
 	// convert to DX:AX nsec

@@ -157,6 +157,7 @@ systime:
 	// Fall back to system call (usually first call in this thread).
 	MOVQ	SP, DI
 	MOVQ	$0, SI
+	MOVQ	$0, DX	// required as of Sierra; Issue 16570
 	MOVL	$(0x2000000+116), AX
 	SYSCALL
 	CMPQ	AX, $0

@@ -244,6 +245,7 @@ TEXT runtime·sigtramp(SB),NOSPLIT,$32
 	MOVQ	R8, 24(SP)	// ctx
 	MOVQ	$runtime·sigtrampgo(SB), AX
 	CALL	AX
+	INT	$3	// not reached (see issue 16453)

 TEXT runtime·mmap(SB),NOSPLIT,$0
 	MOVQ	addr+0(FP), DI	// arg 1 addr

|||
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue