
Update mattermost library (#2152)

* Update mattermost library

* Fix linting
Authored by Wim on 2024-05-24 23:08:09 +02:00, committed by GitHub
parent 65d78e38af
commit d16645c952
1003 changed files with 89451 additions and 114025 deletions

354
vendor/github.com/hashicorp/errwrap/LICENSE generated vendored Normal file
@@ -0,0 +1,354 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients'
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
party's negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

89
vendor/github.com/hashicorp/errwrap/README.md generated vendored Normal file
@@ -0,0 +1,89 @@
# errwrap
`errwrap` is a package for Go that formalizes the pattern of wrapping errors
and checking if an error contains another error.
There is a common pattern in Go of taking a returned `error` value and
then wrapping it (such as with `fmt.Errorf`) before returning it. The problem
with this pattern is that you completely lose the original `error` structure.
Arguably the _correct_ approach is that you should make a custom structure
implementing the `error` interface, and have the original error as a field
on that structure, such [as this example](http://golang.org/pkg/os/#PathError).
This is a good approach, but you have to know the entire chain of possible
rewrapping that happens, when you might only care about one error in that chain.
`errwrap` formalizes this pattern (it doesn't matter what approach you use
above) by giving a single interface for wrapping errors, checking if a specific
error is wrapped, and extracting that error.
## Installation and Docs
Install using `go get github.com/hashicorp/errwrap`.
Full documentation is available at
http://godoc.org/github.com/hashicorp/errwrap
## Usage
#### Basic Usage
Below is a very basic example of its usage:
```go
// A function that always returns an error, but wraps it, like a real
// function might.
func tryOpen() error {
_, err := os.Open("/i/dont/exist")
if err != nil {
return errwrap.Wrapf("Doesn't exist: {{err}}", err)
}
return nil
}
func main() {
err := tryOpen()
// We can use the Contains helpers to check if an error contains
// another error. It is safe to do this with a nil error, or with
// an error that doesn't even use the errwrap package.
if errwrap.Contains(err, "does not exist") {
// Do something
}
if errwrap.ContainsType(err, new(os.PathError)) {
// Do something
}
// Or we can use the associated `Get` functions to just extract
// a specific error. This would return nil if that specific error doesn't
// exist.
perr := errwrap.GetType(err, new(os.PathError))
}
```
#### Custom Types
If you're already making custom types that properly wrap errors, then
you can get all the functionality of `errwrap.Contains` and such by
implementing the `Wrapper` interface with just one function. Example:
```go
type AppError struct {
Code ErrorCode
Err error
}
func (e *AppError) WrappedErrors() []error {
return []error{e.Err}
}
```
Now this works:
```go
err := &AppError{Err: fmt.Errorf("an error")}
if errwrap.ContainsType(err, fmt.Errorf("")) {
// This will work!
}
```

178
vendor/github.com/hashicorp/errwrap/errwrap.go generated vendored Normal file
@@ -0,0 +1,178 @@
// Package errwrap implements methods to formalize error wrapping in Go.
//
// All of the top-level functions that take an `error` are built to be able
// to take any error, not just wrapped errors. This allows you to use errwrap
// without having to type-check and type-cast everywhere.
package errwrap
import (
"errors"
"reflect"
"strings"
)
// WalkFunc is the callback called for Walk.
type WalkFunc func(error)
// Wrapper is an interface that can be implemented by custom types to
// have all the Contains, Get, etc. functions in errwrap work.
//
// When Walk reaches a Wrapper, it will call the callback for every
// wrapped error in addition to the wrapper itself. Since all the top-level
// functions in errwrap use Walk, this means that all those functions work
// with your custom type.
type Wrapper interface {
WrappedErrors() []error
}
// Wrap defines that outer wraps inner, returning an error type that
// can be cleanly used with the other methods in this package, such as
// Contains, GetAll, etc.
//
// This function won't modify the error message at all (the outer message
// will be used).
func Wrap(outer, inner error) error {
return &wrappedError{
Outer: outer,
Inner: inner,
}
}
// Wrapf wraps an error with a formatting message. This is similar to using
// `fmt.Errorf` to wrap an error. If you're using `fmt.Errorf` to wrap
// errors, you should replace it with this.
//
// format is the format of the error message. The string '{{err}}' will
// be replaced with the original error message.
//
// Deprecated: Use fmt.Errorf()
func Wrapf(format string, err error) error {
outerMsg := "<nil>"
if err != nil {
outerMsg = err.Error()
}
outer := errors.New(strings.Replace(
format, "{{err}}", outerMsg, -1))
return Wrap(outer, err)
}
// Contains checks if the given error contains an error with the
// message msg. If err is not a wrapped error, this will always return
// false unless the error itself happens to match this msg.
func Contains(err error, msg string) bool {
return len(GetAll(err, msg)) > 0
}
// ContainsType checks if the given error contains an error with
// the same concrete type as v. If err is not a wrapped error, this will
// check the err itself.
func ContainsType(err error, v interface{}) bool {
return len(GetAllType(err, v)) > 0
}
// Get is the same as GetAll but returns the deepest matching error.
func Get(err error, msg string) error {
es := GetAll(err, msg)
if len(es) > 0 {
return es[len(es)-1]
}
return nil
}
// GetType is the same as GetAllType but returns the deepest matching error.
func GetType(err error, v interface{}) error {
es := GetAllType(err, v)
if len(es) > 0 {
return es[len(es)-1]
}
return nil
}
// GetAll gets all the errors that might be wrapped in err with the
// given message. The order of the errors is such that the outermost
// matching error (the most recent wrap) is index zero, and so on.
func GetAll(err error, msg string) []error {
var result []error
Walk(err, func(err error) {
if err.Error() == msg {
result = append(result, err)
}
})
return result
}
// GetAllType gets all the errors that are the same type as v.
//
// The order of the return value is the same as described in GetAll.
func GetAllType(err error, v interface{}) []error {
var result []error
var search string
if v != nil {
search = reflect.TypeOf(v).String()
}
Walk(err, func(err error) {
var needle string
if err != nil {
needle = reflect.TypeOf(err).String()
}
if needle == search {
result = append(result, err)
}
})
return result
}
// Walk walks all the wrapped errors in err and calls the callback. If
// err isn't a wrapped error, this will be called once for err. If err
// is a wrapped error, the callback will be called for both the wrapper
// that implements error as well as the wrapped error itself.
func Walk(err error, cb WalkFunc) {
if err == nil {
return
}
switch e := err.(type) {
case *wrappedError:
cb(e.Outer)
Walk(e.Inner, cb)
case Wrapper:
cb(err)
for _, err := range e.WrappedErrors() {
Walk(err, cb)
}
case interface{ Unwrap() error }:
cb(err)
Walk(e.Unwrap(), cb)
default:
cb(err)
}
}
// wrappedError is an implementation of error that has both the
// outer and inner errors.
type wrappedError struct {
Outer error
Inner error
}
func (w *wrappedError) Error() string {
return w.Outer.Error()
}
func (w *wrappedError) WrappedErrors() []error {
return []error{w.Outer, w.Inner}
}
func (w *wrappedError) Unwrap() error {
return w.Inner
}
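For orientation, here is a minimal, hedged sketch of how the functions above compose. Only identifiers defined in this file (`Wrapf`, `ContainsType`, `GetType`) are used; the file path and messages are invented for illustration.

```go
package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/errwrap"
)

func main() {
	// Wrap whatever os.Open returns with extra context; Wrapf handles
	// a nil error gracefully, so no guard is strictly needed here.
	_, err := os.Open("/path/that/does/not/exist") // illustrative path
	err = errwrap.Wrapf("opening config: {{err}}", err)

	// Walk the wrap chain for a *os.PathError and pull out the deepest one.
	if errwrap.ContainsType(err, new(os.PathError)) {
		perr := errwrap.GetType(err, new(os.PathError))
		fmt.Println("path error:", perr)
	}
}
```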

1
vendor/github.com/hashicorp/go-hclog/.gitignore generated vendored Normal file
@@ -0,0 +1 @@
.idea*

19
vendor/github.com/hashicorp/go-hclog/LICENSE generated vendored Normal file
@@ -0,0 +1,19 @@
Copyright (c) 2017 HashiCorp, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

149
vendor/github.com/hashicorp/go-hclog/README.md generated vendored Normal file
@@ -0,0 +1,149 @@
# go-hclog
[![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs]
[godocs]: https://godoc.org/github.com/hashicorp/go-hclog
`go-hclog` is a package for Go that provides a simple key/value logging
interface for use in development and production environments.
It provides log levels that let you reduce output to the desired verbosity,
unlike the standard library `log` package.
It provides `Printf` style logging of values via `hclog.Fmt()`.
It provides a human readable output mode for use in development as well as
JSON output mode for production.
## Stability Note
This library has reached 1.0 stability. Its API can be considered solidified
and promised through future versions.
## Installation and Docs
Install using `go get github.com/hashicorp/go-hclog`.
Full documentation is available at
http://godoc.org/github.com/hashicorp/go-hclog
## Usage
### Use the global logger
```go
hclog.Default().Info("hello world")
```
```text
2017-07-05T16:15:55.167-0700 [INFO ] hello world
```
(Note timestamps are removed in future examples for brevity.)
### Create a new logger
```go
appLogger := hclog.New(&hclog.LoggerOptions{
Name: "my-app",
Level: hclog.LevelFromString("DEBUG"),
})
```
### Emit an Info level message with 2 key/value pairs
```go
input := "5.5"
_, err := strconv.ParseInt(input, 10, 32)
if err != nil {
appLogger.Info("Invalid input for ParseInt", "input", input, "error", err)
}
```
```text
... [INFO ] my-app: Invalid input for ParseInt: input=5.5 error="strconv.ParseInt: parsing "5.5": invalid syntax"
```
### Create a new Logger for a major subsystem
```go
subsystemLogger := appLogger.Named("transport")
subsystemLogger.Info("we are transporting something")
```
```text
... [INFO ] my-app.transport: we are transporting something
```
Notice that logs emitted by `subsystemLogger` contain `my-app.transport`,
reflecting both the application and subsystem names.
### Create a new Logger with fixed key/value pairs
Using `With()` will include a specific key-value pair in all messages emitted
by that logger.
```go
requestID := "5fb446b6-6eba-821d-df1b-cd7501b6a363"
requestLogger := subsystemLogger.With("request", requestID)
requestLogger.Info("we are transporting a request")
```
```text
... [INFO ] my-app.transport: we are transporting a request: request=5fb446b6-6eba-821d-df1b-cd7501b6a363
```
This allows sub Loggers to be context specific without having to thread that
into all the callers.
### Using `hclog.Fmt()`
```go
totalBandwidth := 200
appLogger.Info("total bandwidth exceeded", "bandwidth", hclog.Fmt("%d GB/s", totalBandwidth))
```
```text
... [INFO ] my-app: total bandwidth exceeded: bandwidth="200 GB/s"
```
### Use this with code that uses the standard library logger
If you want to use the standard library's `log.Logger` interface you can wrap
`hclog.Logger` by calling the `StandardLogger()` method. This allows you to use
it with the familiar `Println()`, `Printf()`, etc. For example:
```go
stdLogger := appLogger.StandardLogger(&hclog.StandardLoggerOptions{
InferLevels: true,
})
// Printf() is provided by stdlib log.Logger interface, not hclog.Logger
stdLogger.Printf("[DEBUG] %+v", stdLogger)
```
```text
... [DEBUG] my-app: &{mu:{state:0 sema:0} prefix: flag:0 out:0xc42000a0a0 buf:[]}
```
Alternatively, you may configure the system-wide logger:
```go
// log the standard logger from 'import "log"'
log.SetOutput(appLogger.StandardWriter(&hclog.StandardLoggerOptions{InferLevels: true}))
log.SetPrefix("")
log.SetFlags(0)
log.Printf("[DEBUG] %d", 42)
```
```text
... [DEBUG] my-app: 42
```
Notice that if `appLogger` is initialized with the `INFO` log level, _and_ you
specify `InferLevels: true`, you will not see any output here. You must change
`appLogger` to `DEBUG` to see output. See the docs for more information.
If the log lines start with a timestamp you can use the
`InferLevelsWithTimestamp` option to try and ignore them. Please note that in order
for `InferLevelsWithTimestamp` to be relevant, `InferLevels` must be set to `true`.
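As a rough sketch of the two options described above (not part of the upstream README), the snippet below routes the standard library logger through hclog with both `InferLevels` and `InferLevelsWithTimestamp` enabled; the logger name and the log message are illustrative.

```go
package main

import (
	"log"

	"github.com/hashicorp/go-hclog"
)

func main() {
	appLogger := hclog.New(&hclog.LoggerOptions{
		Name:  "my-app",
		Level: hclog.Debug,
	})

	// Send the stdlib logger through hclog. With both options set, a line
	// like "2024/05/24 23:08:09 [WARN] disk almost full" has the stdlib
	// timestamp skipped and the [WARN] prefix re-applied as hclog's level.
	log.SetOutput(appLogger.StandardWriter(&hclog.StandardLoggerOptions{
		InferLevels:              true,
		InferLevelsWithTimestamp: true,
	}))
	log.SetPrefix("")
	log.SetFlags(log.LstdFlags) // keep the stdlib timestamp on purpose

	log.Println("[WARN] disk almost full")
}
```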

44
vendor/github.com/hashicorp/go-hclog/colorize_unix.go generated vendored Normal file
@@ -0,0 +1,44 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
//go:build !windows
// +build !windows
package hclog
import (
"github.com/mattn/go-isatty"
)
// hasFD is used to check if the writer has an Fd value to check
// if it's a terminal.
type hasFD interface {
Fd() uintptr
}
// setColorization will mutate the values of this logger
// to appropriately configure colorization options. It provides
// a wrapper to the output stream on Windows systems.
func (l *intLogger) setColorization(opts *LoggerOptions) {
if opts.Color != AutoColor {
return
}
if sc, ok := l.writer.w.(SupportsColor); ok {
if !sc.SupportsColor() {
l.headerColor = ColorOff
l.writer.color = ColorOff
}
return
}
fi, ok := l.writer.w.(hasFD)
if !ok {
return
}
if !isatty.IsTerminal(fi.Fd()) {
l.headerColor = ColorOff
l.writer.color = ColorOff
}
}

41
vendor/github.com/hashicorp/go-hclog/colorize_windows.go generated vendored Normal file
@@ -0,0 +1,41 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
//go:build windows
// +build windows
package hclog
import (
"os"
colorable "github.com/mattn/go-colorable"
)
// setColorization will mutate the values of this logger
// to appropriately configure colorization options. It provides
// a wrapper to the output stream on Windows systems.
func (l *intLogger) setColorization(opts *LoggerOptions) {
if opts.Color == ColorOff {
return
}
fi, ok := l.writer.w.(*os.File)
if !ok {
l.writer.color = ColorOff
l.headerColor = ColorOff
return
}
cfi := colorable.NewColorable(fi)
// NewColorable detects if color is possible and if it's not, then it
// returns the original value. So we can test if we got the original
// value back to know if color is possible.
if cfi == fi {
l.writer.color = ColorOff
l.headerColor = ColorOff
} else {
l.writer.w = cfi
}
}

41
vendor/github.com/hashicorp/go-hclog/context.go generated vendored Normal file
@@ -0,0 +1,41 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"context"
)
// WithContext inserts a logger into the context and is retrievable
// with FromContext. The optional args can be set with the same syntax as
// Logger.With to set fields on the inserted logger. This will not modify
// the logger argument in-place.
func WithContext(ctx context.Context, logger Logger, args ...interface{}) context.Context {
// While we could call logger.With even with zero args, we have this
// check to avoid unnecessary allocations around creating a copy of a
// logger.
if len(args) > 0 {
logger = logger.With(args...)
}
return context.WithValue(ctx, contextKey, logger)
}
// FromContext returns a logger from the context. This will return L()
// (the default logger) if no logger is found in the context. Therefore,
// this will never return a nil value.
func FromContext(ctx context.Context) Logger {
logger, _ := ctx.Value(contextKey).(Logger)
if logger == nil {
return L()
}
return logger
}
// Unexported new type so that our context key never collides with another.
type contextKeyType struct{}
// contextKey is the key used for the context to store the logger.
var contextKey = contextKeyType{}
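A minimal sketch of the round trip described above; the `handle` function, the logger name, and the `request_id` key are invented for illustration.

```go
package main

import (
	"context"

	"github.com/hashicorp/go-hclog"
)

// handle retrieves the request-scoped logger stored in the context,
// falling back to the default logger if none is present.
func handle(ctx context.Context) {
	hclog.FromContext(ctx).Info("handling request")
}

func main() {
	base := hclog.New(&hclog.LoggerOptions{Name: "api"})

	// The trailing args are applied as if by base.With(...).
	ctx := hclog.WithContext(context.Background(), base, "request_id", "42")
	handle(ctx)
}
```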

74
vendor/github.com/hashicorp/go-hclog/exclude.go generated vendored Normal file
@@ -0,0 +1,74 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"regexp"
"strings"
)
// ExcludeByMessage provides a simple way to build a list of log messages that
// can be queried and matched. This is meant to be used with the Exclude
// option on Options to suppress log messages. This does not hold any mutexes
// within itself, so normal usage is to Add entries at setup and none once
// Exclude starts being called. Exclude is called with a mutex held within
// the Logger, so it does not need its own locking. Example usage:
//
// f := new(ExcludeByMessage)
// f.Add("Noisy log message text")
// appLogger.Exclude = f.Exclude
type ExcludeByMessage struct {
messages map[string]struct{}
}
// Add registers a message to be filtered. Do not call this once Exclude is in use,
// due to concurrency issues.
func (f *ExcludeByMessage) Add(msg string) {
if f.messages == nil {
f.messages = make(map[string]struct{})
}
f.messages[msg] = struct{}{}
}
// Exclude returns true if the given message should be filtered out
func (f *ExcludeByMessage) Exclude(level Level, msg string, args ...interface{}) bool {
_, ok := f.messages[msg]
return ok
}
// ExcludeByPrefix is a simple type to match a message string that has a common prefix.
type ExcludeByPrefix string
// Matches a message that starts with the prefix.
func (p ExcludeByPrefix) Exclude(level Level, msg string, args ...interface{}) bool {
return strings.HasPrefix(msg, string(p))
}
// ExcludeByRegexp takes a regexp and uses it to match a log message string. If it matches
// the log entry is excluded.
type ExcludeByRegexp struct {
Regexp *regexp.Regexp
}
// Exclude the log message if the message string matches the regexp
func (e ExcludeByRegexp) Exclude(level Level, msg string, args ...interface{}) bool {
return e.Regexp.MatchString(msg)
}
// ExcludeFuncs is a slice of functions that will be called to see if a log entry
// should be filtered or not. It stops calling functions once at least one returns
// true.
type ExcludeFuncs []func(level Level, msg string, args ...interface{}) bool
// Calls each function until one of them returns true
func (ff ExcludeFuncs) Exclude(level Level, msg string, args ...interface{}) bool {
for _, f := range ff {
if f(level, msg, args...) {
return true
}
}
return false
}
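A hedged sketch wiring the exclusion helpers above into a logger through `LoggerOptions.Exclude`; the messages, prefix, and pattern are made up.

```go
package main

import (
	"regexp"

	"github.com/hashicorp/go-hclog"
)

func main() {
	byMessage := new(hclog.ExcludeByMessage)
	byMessage.Add("connection reset by peer") // exact-match suppression

	filters := hclog.ExcludeFuncs{
		byMessage.Exclude,
		hclog.ExcludeByPrefix("heartbeat:").Exclude,
		hclog.ExcludeByRegexp{Regexp: regexp.MustCompile(`^retrying in \d+s`)}.Exclude,
	}

	logger := hclog.New(&hclog.LoggerOptions{
		Name:    "worker",
		Exclude: filters.Exclude, // invoked with the Logger's own mutex held
	})

	logger.Info("heartbeat: ok") // suppressed by the prefix filter
	logger.Info("starting work") // emitted normally
}
```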

67
vendor/github.com/hashicorp/go-hclog/global.go generated vendored Normal file
@@ -0,0 +1,67 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"sync"
"time"
)
var (
protect sync.Once
def Logger
// DefaultOptions is used to create the Default logger. These are read
// only when the Default logger is created, so set them as soon as the
// process starts.
DefaultOptions = &LoggerOptions{
Level: DefaultLevel,
Output: DefaultOutput,
TimeFn: time.Now,
}
)
// Default returns a globally held logger. This can be a good starting
// place, and then you can use .With() and .Named() to create sub-loggers
// to be used in more specific contexts.
// The value of the Default logger can be set via SetDefault() or by
// changing the options in DefaultOptions.
//
// This method is goroutine safe, returning a global from memory, but
// care should be used if SetDefault() is called at random times
// in the program as that may result in race conditions and an unexpected
// Logger being returned.
func Default() Logger {
protect.Do(func() {
// If SetDefault was used before Default() was called, we need to
// detect that here.
if def == nil {
def = New(DefaultOptions)
}
})
return def
}
// L is a short alias for Default().
func L() Logger {
return Default()
}
// SetDefault changes the logger to be returned by Default() and L()
// to the one given. This allows packages to use the default logger
// and have higher level packages change it to match the execution
// environment. It returns any old default if there is one.
//
// NOTE: This is expected to be called early in the program to set up
// a default logger. As such, it does not attempt to make itself
// not racy with regard to the value of the default logger. Ergo
// if it is called in goroutines, you may experience race conditions
// with other goroutines retrieving the default logger. Basically,
// don't do that.
func SetDefault(log Logger) Logger {
old := def
def = log
return old
}
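A small sketch of the startup pattern described above; the logger name and message are illustrative.

```go
package main

import "github.com/hashicorp/go-hclog"

func main() {
	// Install the process-wide logger once, early, before any goroutine
	// calls Default() or L().
	hclog.SetDefault(hclog.New(&hclog.LoggerOptions{
		Name:  "my-app",
		Level: hclog.Debug,
	}))

	// Library code elsewhere can now pick it up without any plumbing.
	hclog.L().Debug("cache warmed", "entries", 1024)
}
```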

207
vendor/github.com/hashicorp/go-hclog/interceptlogger.go generated vendored Normal file
@@ -0,0 +1,207 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"io"
"log"
"sync"
"sync/atomic"
)
var _ Logger = &interceptLogger{}
type interceptLogger struct {
Logger
mu *sync.Mutex
sinkCount *int32
Sinks map[SinkAdapter]struct{}
}
func NewInterceptLogger(opts *LoggerOptions) InterceptLogger {
l := newLogger(opts)
if l.callerOffset > 0 {
// extra frames for interceptLogger.{Warn,Info,Log,etc...}, and interceptLogger.log
l.callerOffset += 2
}
intercept := &interceptLogger{
Logger: l,
mu: new(sync.Mutex),
sinkCount: new(int32),
Sinks: make(map[SinkAdapter]struct{}),
}
atomic.StoreInt32(intercept.sinkCount, 0)
return intercept
}
func (i *interceptLogger) Log(level Level, msg string, args ...interface{}) {
i.log(level, msg, args...)
}
// log is used to make the caller stack frame lookup consistent. If Warn,Info,etc
// all called Log then direct calls to Log would have a different stack frame
// depth. By having all the methods call the same helper we ensure the stack
// frame depth is the same.
func (i *interceptLogger) log(level Level, msg string, args ...interface{}) {
i.Logger.Log(level, msg, args...)
if atomic.LoadInt32(i.sinkCount) == 0 {
return
}
i.mu.Lock()
defer i.mu.Unlock()
for s := range i.Sinks {
s.Accept(i.Name(), level, msg, i.retrieveImplied(args...)...)
}
}
// Emit the message and args at TRACE level to log and sinks
func (i *interceptLogger) Trace(msg string, args ...interface{}) {
i.log(Trace, msg, args...)
}
// Emit the message and args at DEBUG level to log and sinks
func (i *interceptLogger) Debug(msg string, args ...interface{}) {
i.log(Debug, msg, args...)
}
// Emit the message and args at INFO level to log and sinks
func (i *interceptLogger) Info(msg string, args ...interface{}) {
i.log(Info, msg, args...)
}
// Emit the message and args at WARN level to log and sinks
func (i *interceptLogger) Warn(msg string, args ...interface{}) {
i.log(Warn, msg, args...)
}
// Emit the message and args at ERROR level to log and sinks
func (i *interceptLogger) Error(msg string, args ...interface{}) {
i.log(Error, msg, args...)
}
func (i *interceptLogger) retrieveImplied(args ...interface{}) []interface{} {
top := i.Logger.ImpliedArgs()
cp := make([]interface{}, len(top)+len(args))
copy(cp, top)
copy(cp[len(top):], args)
return cp
}
// Create a new sub-Logger with a name descending from the current name.
// This is used to create a subsystem specific Logger.
// Registered sinks will subscribe to these messages as well.
func (i *interceptLogger) Named(name string) Logger {
return i.NamedIntercept(name)
}
// Create a new sub-Logger with an explicit name. This ignores the current
// name. This is used to create a standalone logger that doesn't fall
// within the normal hierarchy. Registered sinks will subscribe
// to these messages as well.
func (i *interceptLogger) ResetNamed(name string) Logger {
return i.ResetNamedIntercept(name)
}
// Create a new sub-Logger with a name descending from the current name.
// This is used to create a subsystem specific Logger.
// Registered sinks will subscribe to these messages as well.
func (i *interceptLogger) NamedIntercept(name string) InterceptLogger {
var sub interceptLogger
sub = *i
sub.Logger = i.Logger.Named(name)
return &sub
}
// Create a new sub-Logger with an explicit name. This ignores the current
// name. This is used to create a standalone logger that doesn't fall
// within the normal hierarchy. Registered sinks will subscribe
// to these messages as well.
func (i *interceptLogger) ResetNamedIntercept(name string) InterceptLogger {
var sub interceptLogger
sub = *i
sub.Logger = i.Logger.ResetNamed(name)
return &sub
}
// Return a sub-Logger for which every emitted log message will contain
// the given key/value pairs. This is used to create a context specific
// Logger.
func (i *interceptLogger) With(args ...interface{}) Logger {
var sub interceptLogger
sub = *i
sub.Logger = i.Logger.With(args...)
return &sub
}
// RegisterSink attaches a SinkAdapter to the interceptLogger's sinks.
func (i *interceptLogger) RegisterSink(sink SinkAdapter) {
i.mu.Lock()
defer i.mu.Unlock()
i.Sinks[sink] = struct{}{}
atomic.AddInt32(i.sinkCount, 1)
}
// DeregisterSink removes a SinkAdapter from the interceptLogger's sinks.
func (i *interceptLogger) DeregisterSink(sink SinkAdapter) {
i.mu.Lock()
defer i.mu.Unlock()
delete(i.Sinks, sink)
atomic.AddInt32(i.sinkCount, -1)
}
func (i *interceptLogger) StandardLoggerIntercept(opts *StandardLoggerOptions) *log.Logger {
return i.StandardLogger(opts)
}
func (i *interceptLogger) StandardLogger(opts *StandardLoggerOptions) *log.Logger {
if opts == nil {
opts = &StandardLoggerOptions{}
}
return log.New(i.StandardWriter(opts), "", 0)
}
func (i *interceptLogger) StandardWriterIntercept(opts *StandardLoggerOptions) io.Writer {
return i.StandardWriter(opts)
}
func (i *interceptLogger) StandardWriter(opts *StandardLoggerOptions) io.Writer {
return &stdlogAdapter{
log: i,
inferLevels: opts.InferLevels,
inferLevelsWithTimestamp: opts.InferLevelsWithTimestamp,
forceLevel: opts.ForceLevel,
}
}
func (i *interceptLogger) ResetOutput(opts *LoggerOptions) error {
if or, ok := i.Logger.(OutputResettable); ok {
return or.ResetOutput(opts)
} else {
return nil
}
}
func (i *interceptLogger) ResetOutputWithFlush(opts *LoggerOptions, flushable Flushable) error {
if or, ok := i.Logger.(OutputResettable); ok {
return or.ResetOutputWithFlush(opts, flushable)
} else {
return nil
}
}
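A minimal sketch of registering a sink with the intercept logger above; the `auditSink` type, its formatting, and the messages are invented for illustration.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-hclog"
)

// auditSink receives a copy of every record emitted through the
// intercept logger, in addition to the normal output.
type auditSink struct{}

func (auditSink) Accept(name string, level hclog.Level, msg string, args ...interface{}) {
	fmt.Printf("audit %s [%s] %s %v\n", name, level, msg, args)
}

func main() {
	logger := hclog.NewInterceptLogger(&hclog.LoggerOptions{Name: "api"})

	sink := auditSink{}
	logger.RegisterSink(sink)
	defer logger.DeregisterSink(sink)

	// Goes to the root logger and, because a sink is registered,
	// also to auditSink.Accept together with any implied args.
	logger.Warn("rate limit hit", "client", "10.0.0.7")
}
```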

1001
vendor/github.com/hashicorp/go-hclog/intlogger.go generated vendored Normal file

File diff suppressed because it is too large

412
vendor/github.com/hashicorp/go-hclog/logger.go generated vendored Normal file
@@ -0,0 +1,412 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"io"
"log"
"os"
"strings"
"time"
)
var (
// DefaultOutput is used as the default log output.
DefaultOutput io.Writer = os.Stderr
// DefaultLevel is used as the default log level.
DefaultLevel = Info
)
// Level represents a log level.
type Level int32
const (
// NoLevel is a special level used to indicate that no level has been
// set and allow for a default to be used.
NoLevel Level = 0
// Trace is the most verbose level. Intended to be used for the tracing
// of actions in code, such as function enters/exits, etc.
Trace Level = 1
// Debug information for programmer low-level analysis.
Debug Level = 2
// Info information about steady state operations.
Info Level = 3
// Warn information about rare but handled events.
Warn Level = 4
// Error information about unrecoverable events.
Error Level = 5
// Off disables all logging output.
Off Level = 6
)
// Format is a simple convenience type for when formatting is required. When
// processing a value of this type, the logger automatically treats the first
// argument as a Printf formatting string and passes the rest as the values
// to be formatted. For example: L.Info(Fmt{"%d beans/day", beans}).
type Format []interface{}
// Fmt returns a Format type. This is a convenience function for creating a Format
// type.
func Fmt(str string, args ...interface{}) Format {
return append(Format{str}, args...)
}
// A simple shortcut to format numbers in hex when displayed with the normal
// text output. For example: L.Info("header value", Hex(17))
type Hex int
// A simple shortcut to format numbers in octal when displayed with the normal
// text output. For example: L.Info("perms", Octal(17))
type Octal int
// A simple shortcut to format numbers in binary when displayed with the normal
// text output. For example: L.Info("bits", Binary(17))
type Binary int
// A simple shortcut to format strings with Go quoting. Control and
// non-printable characters will be escaped with their backslash equivalents in
// output. Intended for untrusted or multiline strings which should be logged
// as concisely as possible.
type Quote string
// ColorOption expresses how the output should be colored, if at all.
type ColorOption uint8
const (
// ColorOff is the default coloration, and does not
// inject color codes into the io.Writer.
ColorOff ColorOption = iota
// AutoColor checks if the io.Writer is a tty,
// and if so enables coloring.
AutoColor
// ForceColor will enable coloring, regardless of whether
// the io.Writer is a tty or not.
ForceColor
)
// SupportsColor is an optional interface that can be implemented by the output
// value. If implemented and SupportsColor() returns true, then AutoColor will
// enable colorization.
type SupportsColor interface {
SupportsColor() bool
}
// LevelFromString returns a Level type for the named log level, or "NoLevel" if
// the level string is invalid. This facilitates setting the log level via
// config or environment variable by name in a predictable way.
func LevelFromString(levelStr string) Level {
// We don't care about case. Accept both "INFO" and "info".
levelStr = strings.ToLower(strings.TrimSpace(levelStr))
switch levelStr {
case "trace":
return Trace
case "debug":
return Debug
case "info":
return Info
case "warn":
return Warn
case "error":
return Error
case "off":
return Off
default:
return NoLevel
}
}
func (l Level) String() string {
switch l {
case Trace:
return "trace"
case Debug:
return "debug"
case Info:
return "info"
case Warn:
return "warn"
case Error:
return "error"
case NoLevel:
return "none"
case Off:
return "off"
default:
return "unknown"
}
}
// Logger describes the interface that must be implemented by all loggers.
type Logger interface {
// Args are alternating key, val pairs
// keys must be strings
// vals can be any type, but display is implementation specific
// Emit a message and key/value pairs at a provided log level
Log(level Level, msg string, args ...interface{})
// Emit a message and key/value pairs at the TRACE level
Trace(msg string, args ...interface{})
// Emit a message and key/value pairs at the DEBUG level
Debug(msg string, args ...interface{})
// Emit a message and key/value pairs at the INFO level
Info(msg string, args ...interface{})
// Emit a message and key/value pairs at the WARN level
Warn(msg string, args ...interface{})
// Emit a message and key/value pairs at the ERROR level
Error(msg string, args ...interface{})
// Indicate if TRACE logs would be emitted. This and the other Is* guards
// are used to elide expensive logging code based on the current level.
IsTrace() bool
// Indicate if DEBUG logs would be emitted. This and the other Is* guards
IsDebug() bool
// Indicate if INFO logs would be emitted. This and the other Is* guards
IsInfo() bool
// Indicate if WARN logs would be emitted. This and the other Is* guards
IsWarn() bool
// Indicate if ERROR logs would be emitted. This and the other Is* guards
IsError() bool
// ImpliedArgs returns With key/value pairs
ImpliedArgs() []interface{}
// Creates a sublogger that will always have the given key/value pairs
With(args ...interface{}) Logger
// Returns the Name of the logger
Name() string
// Create a logger that will prepend the name string on the front of all messages.
// If the logger already has a name, the new value will be appended to the current
// name. That way, a major subsystem can use this to decorate all its own logs
// without losing context.
Named(name string) Logger
// Create a logger that will prepend the name string on the front of all messages.
// This sets the name of the logger to the value directly, unlike Named which honors
// the current name as well.
ResetNamed(name string) Logger
// Updates the level. This should affect all related loggers as well,
// unless they were created with IndependentLevels. If an
// implementation cannot update the level on the fly, it should no-op.
SetLevel(level Level)
// Returns the current level
GetLevel() Level
// Return a value that conforms to the stdlib log.Logger interface
StandardLogger(opts *StandardLoggerOptions) *log.Logger
// Return a value that conforms to io.Writer, which can be passed into log.SetOutput()
StandardWriter(opts *StandardLoggerOptions) io.Writer
}
// StandardLoggerOptions can be used to configure a new standard logger.
type StandardLoggerOptions struct {
// Indicate that some minimal parsing should be done on strings to try
// and detect their level and re-emit them.
// This supports strings like [ERROR], [ERR], [TRACE], [WARN], [INFO], and
// [DEBUG], stripping the prefix off before reapplying it.
InferLevels bool
// Indicate that some minimal parsing should be done on strings to try
// and detect their level and re-emit them while ignoring possible
// timestamp values in the beginning of the string.
// This supports strings like [ERROR], [ERR], [TRACE], [WARN], [INFO], and
// [DEBUG], stripping the prefix off before reapplying it.
// The timestamp detection may result in false positives and incomplete
// string outputs.
// InferLevelsWithTimestamp is only relevant if InferLevels is true.
InferLevelsWithTimestamp bool
// ForceLevel is used to force all output from the standard logger to be at
// the specified level. Similar to InferLevels, this will strip any level
// prefix contained in the logged string before applying the forced level.
// If set, this overrides InferLevels.
ForceLevel Level
}
type TimeFunction = func() time.Time
// LoggerOptions can be used to configure a new logger.
type LoggerOptions struct {
// Name of the subsystem to prefix logs with
Name string
// The threshold for the logger. Anything less severe is suppressed
Level Level
// Where to write the logs to. Defaults to os.Stderr if nil
Output io.Writer
// An optional Locker in case Output is shared. This can be a sync.Mutex or
// a NoopLocker if the caller wants control over output, e.g. for batching
// log lines.
Mutex Locker
// Control if the output should be in JSON.
JSONFormat bool
// Include file and line information in each log line
IncludeLocation bool
// AdditionalLocationOffset is the number of additional stack levels to skip
// when finding the file and line information for the log line
AdditionalLocationOffset int
// The time format to use instead of the default
TimeFormat string
// A function which is called to get the time object that is formatted using `TimeFormat`
TimeFn TimeFunction
// Control whether or not to display the time at all. This is required
// because setting TimeFormat to empty assumes the default format.
DisableTime bool
// Color the output. On Windows, colored logs are only available for io.Writers that
// are concretely instances of *os.File.
Color ColorOption
// Only color the header, not the body. This can help with readability of long messages.
ColorHeaderOnly bool
// Color the header and message body fields. This can help with readability
// of long messages with multiple fields.
ColorHeaderAndFields bool
// A function which is called with the log information and if it returns true the value
// should not be logged.
// This is useful when interacting with a system that you wish to suppress the log
// message for (because it's too noisy, etc)
Exclude func(level Level, msg string, args ...interface{}) bool
// IndependentLevels causes subloggers to be created with an independent
// copy of this logger's level. This means that using SetLevel on this
// logger will not affect any subloggers, and SetLevel on any subloggers
// will not affect the parent or sibling loggers.
IndependentLevels bool
// When set, changing the level of a logger affects only its direct sub-loggers
// rather than all sub-loggers. For example:
// a := logger.Named("a")
// a.SetLevel(Error)
// b := a.Named("b")
// c := a.Named("c")
// b.GetLevel() => Error
// c.GetLevel() => Error
// b.SetLevel(Info)
// a.GetLevel() => Error
// b.GetLevel() => Info
// c.GetLevel() => Error
// a.SetLevel(Warn)
// a.GetLevel() => Warn
// b.GetLevel() => Warn
// c.GetLevel() => Warn
SyncParentLevel bool
// SubloggerHook registers a function that is called when a sublogger via
// Named, With, or ResetNamed is created. If defined, the function is passed
// the newly created Logger and the returned Logger is returned from the
// original function. This option allows customization via interception and
// wrapping of Logger instances.
SubloggerHook func(sub Logger) Logger
}
// InterceptLogger describes the interface for using a logger
// that can register different output sinks.
// This is useful for sending lower level log messages
// to a different output while keeping the root logger
// at a higher one.
type InterceptLogger interface {
// Logger is the root logger for an InterceptLogger
Logger
// RegisterSink adds a SinkAdapter to the InterceptLogger
RegisterSink(sink SinkAdapter)
// DeregisterSink removes a SinkAdapter from the InterceptLogger
DeregisterSink(sink SinkAdapter)
// Create an InterceptLogger that will prepend the name string on the front of all messages.
// If the logger already has a name, the new value will be appended to the current
// name. That way, a major subsystem can use this to decorate all its own logs
// without losing context.
NamedIntercept(name string) InterceptLogger
// Create an InterceptLogger that will prepend the name string on the front of all messages.
// This sets the name of the logger to the value directly, unlike Named which honors
// the current name as well.
ResetNamedIntercept(name string) InterceptLogger
// Deprecated: use StandardLogger
StandardLoggerIntercept(opts *StandardLoggerOptions) *log.Logger
// Deprecated: use StandardWriter
StandardWriterIntercept(opts *StandardLoggerOptions) io.Writer
}
// SinkAdapter describes the interface that must be implemented
// in order to Register a new sink to an InterceptLogger
type SinkAdapter interface {
Accept(name string, level Level, msg string, args ...interface{})
}
// Flushable represents a method for flushing an output buffer. It can be used
// if Resetting the log to use a new output, in order to flush the writes to
// the existing output beforehand.
type Flushable interface {
Flush() error
}
// OutputResettable provides ways to swap the output in use at runtime
type OutputResettable interface {
// ResetOutput swaps the current output writer with the one given in the
// opts. Color options given in opts will be used for the new output.
ResetOutput(opts *LoggerOptions) error
// ResetOutputWithFlush swaps the current output writer with the one given
// in the opts, first calling Flush on the given Flushable. Color options
// given in opts will be used for the new output.
ResetOutputWithFlush(opts *LoggerOptions, flushable Flushable) error
}
// Locker is used for locking output. If not set when creating a logger, a
// sync.Mutex will be used internally.
type Locker interface {
// Lock is called when the output is going to be changed or written to
Lock()
// Unlock is called when the operation that called Lock() completes
Unlock()
}
// NoopLocker implements locker but does nothing. This is useful if the client
// wants tight control over locking, in order to provide grouping of log
// entries or other functionality.
type NoopLocker struct{}
// Lock does nothing
func (n NoopLocker) Lock() {}
// Unlock does nothing
func (n NoopLocker) Unlock() {}
var _ Locker = (*NoopLocker)(nil)
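A hedged sketch showing a handful of the `LoggerOptions` fields and `Logger` methods defined above working together; the names and values are illustrative.

```go
package main

import (
	"os"

	"github.com/hashicorp/go-hclog"
)

func main() {
	logger := hclog.New(&hclog.LoggerOptions{
		Name:              "ingest",
		Level:             hclog.LevelFromString("info"),
		Output:            os.Stderr,
		JSONFormat:        true, // machine-readable output for production
		IncludeLocation:   true, // add file:line to each record
		IndependentLevels: true, // sub-loggers keep their own level copy
	})

	worker := logger.Named("worker").With("shard", 3)
	worker.SetLevel(hclog.Debug) // leaves "ingest" at info because of IndependentLevels

	if worker.IsDebug() {
		worker.Debug("picked up batch", "size", 128)
	}
}
```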

63
vendor/github.com/hashicorp/go-hclog/nulllogger.go generated vendored Normal file
@@ -0,0 +1,63 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"io"
"io/ioutil"
"log"
)
// NewNullLogger instantiates a Logger for which all calls
// will succeed without doing anything.
// Useful for testing purposes.
func NewNullLogger() Logger {
return &nullLogger{}
}
type nullLogger struct{}
func (l *nullLogger) Log(level Level, msg string, args ...interface{}) {}
func (l *nullLogger) Trace(msg string, args ...interface{}) {}
func (l *nullLogger) Debug(msg string, args ...interface{}) {}
func (l *nullLogger) Info(msg string, args ...interface{}) {}
func (l *nullLogger) Warn(msg string, args ...interface{}) {}
func (l *nullLogger) Error(msg string, args ...interface{}) {}
func (l *nullLogger) IsTrace() bool { return false }
func (l *nullLogger) IsDebug() bool { return false }
func (l *nullLogger) IsInfo() bool { return false }
func (l *nullLogger) IsWarn() bool { return false }
func (l *nullLogger) IsError() bool { return false }
func (l *nullLogger) ImpliedArgs() []interface{} { return []interface{}{} }
func (l *nullLogger) With(args ...interface{}) Logger { return l }
func (l *nullLogger) Name() string { return "" }
func (l *nullLogger) Named(name string) Logger { return l }
func (l *nullLogger) ResetNamed(name string) Logger { return l }
func (l *nullLogger) SetLevel(level Level) {}
func (l *nullLogger) GetLevel() Level { return NoLevel }
func (l *nullLogger) StandardLogger(opts *StandardLoggerOptions) *log.Logger {
return log.New(l.StandardWriter(opts), "", log.LstdFlags)
}
func (l *nullLogger) StandardWriter(opts *StandardLoggerOptions) io.Writer {
return ioutil.Discard
}

109
vendor/github.com/hashicorp/go-hclog/stacktrace.go generated vendored Normal file
@@ -0,0 +1,109 @@
// Copyright (c) 2016 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package hclog
import (
"bytes"
"runtime"
"strconv"
"strings"
"sync"
)
var (
_stacktraceIgnorePrefixes = []string{
"runtime.goexit",
"runtime.main",
}
_stacktracePool = sync.Pool{
New: func() interface{} {
return newProgramCounters(64)
},
}
)
// CapturedStacktrace represents a stacktrace captured by a previous call
// to log.Stacktrace. If passed to a logging function, the stacktrace
// will be appended.
type CapturedStacktrace string
// Stacktrace captures a stacktrace of the current goroutine and returns
// it to be passed to a logging function.
func Stacktrace() CapturedStacktrace {
return CapturedStacktrace(takeStacktrace())
}
func takeStacktrace() string {
programCounters := _stacktracePool.Get().(*programCounters)
defer _stacktracePool.Put(programCounters)
var buffer bytes.Buffer
for {
// Skip the call to runtime.Callers and takeStacktrace so that the
// program counters start at the caller of takeStacktrace.
n := runtime.Callers(2, programCounters.pcs)
if n < cap(programCounters.pcs) {
programCounters.pcs = programCounters.pcs[:n]
break
}
// Don't put the too-short counter slice back into the pool; this lets
// the pool adjust if we consistently take deep stacktraces.
programCounters = newProgramCounters(len(programCounters.pcs) * 2)
}
i := 0
frames := runtime.CallersFrames(programCounters.pcs)
for frame, more := frames.Next(); more; frame, more = frames.Next() {
if shouldIgnoreStacktraceFunction(frame.Function) {
continue
}
if i != 0 {
buffer.WriteByte('\n')
}
i++
buffer.WriteString(frame.Function)
buffer.WriteByte('\n')
buffer.WriteByte('\t')
buffer.WriteString(frame.File)
buffer.WriteByte(':')
buffer.WriteString(strconv.Itoa(int(frame.Line)))
}
return buffer.String()
}
func shouldIgnoreStacktraceFunction(function string) bool {
for _, prefix := range _stacktraceIgnorePrefixes {
if strings.HasPrefix(function, prefix) {
return true
}
}
return false
}
type programCounters struct {
pcs []uintptr
}
func newProgramCounters(size int) *programCounters {
return &programCounters{make([]uintptr, size)}
}
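A short usage sketch: the captured stacktrace is passed as a trailing argument to a logging call, and the logger implementation (outside this hunk) recognizes the CapturedStacktrace value and appends it to the entry:

```go
package main

import (
	"errors"
	"os"

	"github.com/hashicorp/go-hclog"
)

func doWork() error {
	return errors.New("boom")
}

func main() {
	logger := hclog.New(&hclog.LoggerOptions{Name: "app", Output: os.Stderr})

	if err := doWork(); err != nil {
		// The CapturedStacktrace is passed as the last argument; per the doc
		// comment above, it is appended to the logged entry rather than
		// formatted as a key/value pair.
		logger.Error("work failed", "error", err, hclog.Stacktrace())
	}
}
```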

113
vendor/github.com/hashicorp/go-hclog/stdlog.go generated vendored Normal file
View File

@ -0,0 +1,113 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"bytes"
"log"
"regexp"
"strings"
)
// Regex to ignore characters commonly found in timestamp formats from the
// beginning of inputs.
var logTimestampRegexp = regexp.MustCompile(`^[\d\s\:\/\.\+-TZ]*`)
// Provides an io.Writer to shim the data out of *log.Logger
// and back into our Logger. This is basically the only way to
// build upon *log.Logger.
type stdlogAdapter struct {
log Logger
inferLevels bool
inferLevelsWithTimestamp bool
forceLevel Level
}
// Take the data, infer the levels if configured, and send it through
// a regular Logger.
func (s *stdlogAdapter) Write(data []byte) (int, error) {
str := string(bytes.TrimRight(data, " \t\n"))
if s.forceLevel != NoLevel {
// Use pickLevel to strip log levels included in the line since we are
// forcing the level
_, str := s.pickLevel(str)
// Log at the forced level
s.dispatch(str, s.forceLevel)
} else if s.inferLevels {
if s.inferLevelsWithTimestamp {
str = s.trimTimestamp(str)
}
level, str := s.pickLevel(str)
s.dispatch(str, level)
} else {
s.log.Info(str)
}
return len(data), nil
}
func (s *stdlogAdapter) dispatch(str string, level Level) {
switch level {
case Trace:
s.log.Trace(str)
case Debug:
s.log.Debug(str)
case Info:
s.log.Info(str)
case Warn:
s.log.Warn(str)
case Error:
s.log.Error(str)
default:
s.log.Info(str)
}
}
// Detect, based on conventions, what log level this is.
func (s *stdlogAdapter) pickLevel(str string) (Level, string) {
switch {
case strings.HasPrefix(str, "[DEBUG]"):
return Debug, strings.TrimSpace(str[7:])
case strings.HasPrefix(str, "[TRACE]"):
return Trace, strings.TrimSpace(str[7:])
case strings.HasPrefix(str, "[INFO]"):
return Info, strings.TrimSpace(str[6:])
case strings.HasPrefix(str, "[WARN]"):
return Warn, strings.TrimSpace(str[6:])
case strings.HasPrefix(str, "[ERROR]"):
return Error, strings.TrimSpace(str[7:])
case strings.HasPrefix(str, "[ERR]"):
return Error, strings.TrimSpace(str[5:])
default:
return Info, str
}
}
func (s *stdlogAdapter) trimTimestamp(str string) string {
idx := logTimestampRegexp.FindStringIndex(str)
return str[idx[1]:]
}
type logWriter struct {
l *log.Logger
}
func (l *logWriter) Write(b []byte) (int, error) {
l.l.Println(string(bytes.TrimRight(b, " \n\t")))
return len(b), nil
}
// Takes a standard library logger and returns a Logger that will write to it
func FromStandardLogger(l *log.Logger, opts *LoggerOptions) Logger {
var dl LoggerOptions = *opts
// Use the time format that log.Logger uses
dl.DisableTime = true
dl.Output = &logWriter{l}
return New(&dl)
}
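A brief sketch of bridging in both directions; stdlogAdapter itself is unexported, so the public entry points are FromStandardLogger (above) and Logger.StandardLogger:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/go-hclog"
)

func main() {
	// Wrap an existing *log.Logger so code expecting an hclog.Logger can use
	// it; FromStandardLogger disables hclog's own timestamp so the standard
	// logger's flags control time formatting.
	std := log.New(os.Stderr, "legacy: ", log.LstdFlags)
	wrapped := hclog.FromStandardLogger(std, &hclog.LoggerOptions{Name: "legacy"})
	wrapped.Info("routed through *log.Logger")

	// Or go the other way: hand a *log.Logger to code that only knows the
	// standard library, inferring levels from "[WARN]"-style prefixes.
	appLogger := hclog.New(&hclog.LoggerOptions{Name: "app", Output: os.Stdout})
	stdCompat := appLogger.StandardLogger(&hclog.StandardLoggerOptions{InferLevels: true})
	stdCompat.Println("[WARN] disk space low")
}
```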

85
vendor/github.com/hashicorp/go-hclog/writer.go generated vendored Normal file
View File

@ -0,0 +1,85 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MIT
package hclog
import (
"bytes"
"io"
)
type writer struct {
b bytes.Buffer
w io.Writer
color ColorOption
}
func newWriter(w io.Writer, color ColorOption) *writer {
return &writer{w: w, color: color}
}
func (w *writer) Flush(level Level) (err error) {
var unwritten = w.b.Bytes()
if w.color != ColorOff {
color := _levelToColor[level]
unwritten = []byte(color.Sprintf("%s", unwritten))
}
if lw, ok := w.w.(LevelWriter); ok {
_, err = lw.LevelWrite(level, unwritten)
} else {
_, err = w.w.Write(unwritten)
}
w.b.Reset()
return err
}
func (w *writer) Write(p []byte) (int, error) {
return w.b.Write(p)
}
func (w *writer) WriteByte(c byte) error {
return w.b.WriteByte(c)
}
func (w *writer) WriteString(s string) (int, error) {
return w.b.WriteString(s)
}
// LevelWriter is the interface that wraps the LevelWrite method.
type LevelWriter interface {
LevelWrite(level Level, p []byte) (n int, err error)
}
// LeveledWriter writes all log messages to the standard writer,
// except for log levels that are defined in the overrides map.
type LeveledWriter struct {
standard io.Writer
overrides map[Level]io.Writer
}
// NewLeveledWriter returns an initialized LeveledWriter.
//
// standard will be used as the default writer for all log levels,
// except for log levels that are defined in the overrides map.
func NewLeveledWriter(standard io.Writer, overrides map[Level]io.Writer) *LeveledWriter {
return &LeveledWriter{
standard: standard,
overrides: overrides,
}
}
// Write implements io.Writer.
func (lw *LeveledWriter) Write(p []byte) (int, error) {
return lw.standard.Write(p)
}
// LevelWrite implements LevelWriter.
func (lw *LeveledWriter) LevelWrite(level Level, p []byte) (int, error) {
w, ok := lw.overrides[level]
if !ok {
w = lw.standard
}
return w.Write(p)
}
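For example, a LeveledWriter can route error output separately from routine output; the level-aware path is taken because the internal writer above checks for the LevelWriter interface when it flushes:

```go
package main

import (
	"io"
	"os"

	"github.com/hashicorp/go-hclog"
)

func main() {
	// Everything goes to stdout except Error-level entries, which go to stderr.
	out := hclog.NewLeveledWriter(os.Stdout, map[hclog.Level]io.Writer{
		hclog.Error: os.Stderr,
	})

	logger := hclog.New(&hclog.LoggerOptions{Name: "app", Output: out})
	logger.Info("routine message")   // written to stdout
	logger.Error("something failed") // written to stderr
}
```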

353
vendor/github.com/hashicorp/go-multierror/LICENSE generated vendored Normal file
View File

@ -0,0 +1,353 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributors Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third partys
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
partys negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a partys ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

31
vendor/github.com/hashicorp/go-multierror/Makefile generated vendored Normal file
View File

@ -0,0 +1,31 @@
TEST?=./...
default: test
# test runs the test suite and vets the code.
test: generate
@echo "==> Running tests..."
@go list $(TEST) \
| grep -v "/vendor/" \
| xargs -n1 go test -timeout=60s -parallel=10 ${TESTARGS}
# testrace runs the race checker
testrace: generate
@echo "==> Running tests (race)..."
@go list $(TEST) \
| grep -v "/vendor/" \
| xargs -n1 go test -timeout=60s -race ${TESTARGS}
# updatedeps installs all the dependencies needed to run and build.
updatedeps:
@sh -c "'${CURDIR}/scripts/deps.sh' '${NAME}'"
# generate runs `go generate` to build the dynamically generated source files.
generate:
@echo "==> Generating..."
@find . -type f -name '.DS_Store' -delete
@go list ./... \
| grep -v "/vendor/" \
| xargs -n1 go generate
.PHONY: default test testrace updatedeps generate

150
vendor/github.com/hashicorp/go-multierror/README.md generated vendored Normal file
View File

@ -0,0 +1,150 @@
# go-multierror
[![CircleCI](https://img.shields.io/circleci/build/github/hashicorp/go-multierror/master)](https://circleci.com/gh/hashicorp/go-multierror)
[![Go Reference](https://pkg.go.dev/badge/github.com/hashicorp/go-multierror.svg)](https://pkg.go.dev/github.com/hashicorp/go-multierror)
![GitHub go.mod Go version](https://img.shields.io/github/go-mod/go-version/hashicorp/go-multierror)
[circleci]: https://app.circleci.com/pipelines/github/hashicorp/go-multierror
[godocs]: https://pkg.go.dev/github.com/hashicorp/go-multierror
`go-multierror` is a package for Go that provides a mechanism for
representing a list of `error` values as a single `error`.
This allows a function in Go to return an `error` that might actually
be a list of errors. If the caller knows this, they can unwrap the
list and access the errors. If the caller doesn't know, the error
formats to a nice human-readable format.
`go-multierror` is fully compatible with the Go standard library
[errors](https://golang.org/pkg/errors/) package, including the
functions `As`, `Is`, and `Unwrap`. This provides a standardized approach
for introspecting on error values.
## Installation and Docs
Install using `go get github.com/hashicorp/go-multierror`.
Full documentation is available at
https://pkg.go.dev/github.com/hashicorp/go-multierror
### Requires go version 1.13 or newer
`go-multierror` requires go version 1.13 or newer. Go 1.13 introduced
[error wrapping](https://golang.org/doc/go1.13#error_wrapping), which
this library takes advantage of.
If you need to use an earlier version of go, you can use the
[v1.0.0](https://github.com/hashicorp/go-multierror/tree/v1.0.0)
tag, which doesn't rely on features in go 1.13.
If you see compile errors that look like the below, it's likely that
you're on an older version of go:
```
/go/src/github.com/hashicorp/go-multierror/multierror.go:112:9: undefined: errors.As
/go/src/github.com/hashicorp/go-multierror/multierror.go:117:9: undefined: errors.Is
```
## Usage
go-multierror is easy to use and purposely built to be unobtrusive in
existing Go applications/libraries that may not be aware of it.
**Building a list of errors**
The `Append` function is used to create a list of errors. This function
behaves a lot like the Go built-in `append` function: it doesn't matter
if the first argument is nil, a `multierror.Error`, or any other `error`,
the function behaves as you would expect.
```go
var result error
if err := step1(); err != nil {
result = multierror.Append(result, err)
}
if err := step2(); err != nil {
result = multierror.Append(result, err)
}
return result
```
**Customizing the formatting of the errors**
By specifying a custom `ErrorFormat`, you can customize the format
of the `Error() string` function:
```go
var result *multierror.Error
// ... accumulate errors here, maybe using Append
if result != nil {
result.ErrorFormat = func([]error) string {
return "errors!"
}
}
```
**Accessing the list of errors**
`multierror.Error` implements `error` so if the caller doesn't know about
multierror, it will work just fine. But if you're aware a multierror might
be returned, you can use type switches to access the list of errors:
```go
if err := something(); err != nil {
if merr, ok := err.(*multierror.Error); ok {
// Use merr.Errors
}
}
```
You can also use the standard [`errors.Unwrap`](https://golang.org/pkg/errors/#Unwrap)
function. This will continue to unwrap into subsequent errors until none exist.
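For example (a short sketch; assumes `errors` and `fmt` are imported alongside `multierror`):

```go
err := multierror.Append(nil, errors.New("first"), errors.New("second"))

// Each Unwrap step yields an error whose message is the current head of the
// list, so this prints "first" and then "second".
for unwrapped := errors.Unwrap(err); unwrapped != nil; unwrapped = errors.Unwrap(unwrapped) {
	fmt.Println(unwrapped)
}
```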
**Extracting an error**
The standard library [`errors.As`](https://golang.org/pkg/errors/#As)
function can be used directly with a multierror to extract a specific error:
```go
// Assume err is a multierror value
err := somefunc()
// We want to know if "err" has a "RichErrorType" in it and extract it.
var errRich RichErrorType
if errors.As(err, &errRich) {
// It has it, and now errRich is populated.
}
```
**Checking for an exact error value**
Some errors are returned as exact errors such as the [`ErrNotExist`](https://golang.org/pkg/os/#pkg-variables)
error in the `os` package. You can check if this error is present by using
the standard [`errors.Is`](https://golang.org/pkg/errors/#Is) function.
```go
// Assume err is a multierror value
err := somefunc()
if errors.Is(err, os.ErrNotExist) {
// err contains os.ErrNotExist
}
```
**Returning a multierror only if there are errors**
If you build a `multierror.Error`, you can use the `ErrorOrNil` function
to return an `error` implementation only if there are errors to return:
```go
var result *multierror.Error
// ... accumulate errors here
// Return the `error` only if errors were added to the multierror, otherwise
// return nil since there are no errors.
return result.ErrorOrNil()
```

43
vendor/github.com/hashicorp/go-multierror/append.go generated vendored Normal file
View File

@ -0,0 +1,43 @@
package multierror
// Append is a helper function that will append more errors
// onto an Error in order to create a larger multi-error.
//
// If err is not a multierror.Error, then it will be turned into
// one. If any of the errs are multierror.Error, they will be flattened
// one level into err.
// Any nil errors within errs will be ignored. If err is nil, a new
// *Error will be returned.
func Append(err error, errs ...error) *Error {
switch err := err.(type) {
case *Error:
// Typed nils can reach here, so initialize if we are nil
if err == nil {
err = new(Error)
}
// Go through each error and flatten
for _, e := range errs {
switch e := e.(type) {
case *Error:
if e != nil {
err.Errors = append(err.Errors, e.Errors...)
}
default:
if e != nil {
err.Errors = append(err.Errors, e)
}
}
}
return err
default:
newErrs := make([]error, 0, len(errs)+1)
if err != nil {
newErrs = append(newErrs, err)
}
newErrs = append(newErrs, errs...)
return Append(&Error{}, newErrs...)
}
}
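A small sketch of the behavior described above: nil errors are skipped and a nested *Error is flattened one level into the result:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	inner := multierror.Append(nil, errors.New("inner A"), errors.New("inner B"))

	// nil is ignored and inner's two errors are merged into the result,
	// so it ends up holding three errors: outer, inner A, inner B.
	result := multierror.Append(errors.New("outer"), nil, inner)
	fmt.Println(len(result.Errors)) // 3
}
```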

26
vendor/github.com/hashicorp/go-multierror/flatten.go generated vendored Normal file
View File

@ -0,0 +1,26 @@
package multierror
// Flatten flattens the given error, merging any *Errors together into
// a single *Error.
func Flatten(err error) error {
// If it isn't an *Error, just return the error as-is
if _, ok := err.(*Error); !ok {
return err
}
// Otherwise, make the result and flatten away!
flatErr := new(Error)
flatten(err, flatErr)
return flatErr
}
func flatten(err error, flatErr *Error) {
switch err := err.(type) {
case *Error:
for _, e := range err.Errors {
flatten(e, flatErr)
}
default:
flatErr.Errors = append(flatErr.Errors, err)
}
}
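Unlike Append, which merges only one level, Flatten is recursive; a short sketch:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	// A deliberately nested multierror: one leaf plus another *Error that
	// itself holds two leaves.
	nested := &multierror.Error{Errors: []error{
		errors.New("leaf A"),
		&multierror.Error{Errors: []error{
			errors.New("leaf B"),
			errors.New("leaf C"),
		}},
	}}

	flat := multierror.Flatten(nested)
	if merr, ok := flat.(*multierror.Error); ok {
		fmt.Println(len(merr.Errors)) // 3: leaf A, leaf B, leaf C
	}
}
```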

27
vendor/github.com/hashicorp/go-multierror/format.go generated vendored Normal file
View File

@ -0,0 +1,27 @@
package multierror
import (
"fmt"
"strings"
)
// ErrorFormatFunc is a function callback that is called by Error to
// turn the list of errors into a string.
type ErrorFormatFunc func([]error) string
// ListFormatFunc is a basic formatter that outputs the number of errors
// that occurred along with a bullet point list of the errors.
func ListFormatFunc(es []error) string {
if len(es) == 1 {
return fmt.Sprintf("1 error occurred:\n\t* %s\n\n", es[0])
}
points := make([]string, len(es))
for i, err := range es {
points[i] = fmt.Sprintf("* %s", err)
}
return fmt.Sprintf(
"%d errors occurred:\n\t%s\n\n",
len(es), strings.Join(points, "\n\t"))
}
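A custom ErrorFormatFunc can replace the default list layout, for example to emit a single line that is friendlier to log pipelines (a sketch):

```go
package main

import (
	"errors"
	"fmt"
	"strings"

	"github.com/hashicorp/go-multierror"
)

// singleLine joins all error messages on one line.
func singleLine(es []error) string {
	msgs := make([]string, len(es))
	for i, err := range es {
		msgs[i] = err.Error()
	}
	return fmt.Sprintf("%d error(s): %s", len(es), strings.Join(msgs, "; "))
}

func main() {
	err := multierror.Append(nil, errors.New("bad host"), errors.New("bad port"))
	err.ErrorFormat = singleLine
	fmt.Println(err) // 2 error(s): bad host; bad port
}
```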

38
vendor/github.com/hashicorp/go-multierror/group.go generated vendored Normal file
View File

@ -0,0 +1,38 @@
package multierror
import "sync"
// Group is a collection of goroutines which return errors that need to be
// coalesced.
type Group struct {
mutex sync.Mutex
err *Error
wg sync.WaitGroup
}
// Go calls the given function in a new goroutine.
//
// If the function returns an error it is added to the group multierror which
// is returned by Wait.
func (g *Group) Go(f func() error) {
g.wg.Add(1)
go func() {
defer g.wg.Done()
if err := f(); err != nil {
g.mutex.Lock()
g.err = Append(g.err, err)
g.mutex.Unlock()
}
}()
}
// Wait blocks until all function calls from the Go method have returned, then
// returns the multierror.
func (g *Group) Wait() *Error {
g.wg.Wait()
g.mutex.Lock()
defer g.mutex.Unlock()
return g.err
}
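A short usage sketch, collecting errors from concurrent checks (checkService is hypothetical):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-multierror"
)

// checkService is a hypothetical worker used only for illustration.
func checkService(name string) error {
	if name == "cache" {
		return fmt.Errorf("%s: connection refused", name)
	}
	return nil
}

func main() {
	var group multierror.Group

	for _, name := range []string{"db", "cache", "queue"} {
		name := name // capture for the goroutine
		group.Go(func() error {
			return checkService(name)
		})
	}

	// Wait returns a *multierror.Error; ErrorOrNil collapses the "no errors"
	// case into a plain nil.
	if err := group.Wait().ErrorOrNil(); err != nil {
		fmt.Println(err)
	}
}
```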

121
vendor/github.com/hashicorp/go-multierror/multierror.go generated vendored Normal file
View File

@ -0,0 +1,121 @@
package multierror
import (
"errors"
"fmt"
)
// Error is an error type to track multiple errors. This is used to
// accumulate errors in cases where multiple errors may occur and return
// them as a single "error".
type Error struct {
Errors []error
ErrorFormat ErrorFormatFunc
}
func (e *Error) Error() string {
fn := e.ErrorFormat
if fn == nil {
fn = ListFormatFunc
}
return fn(e.Errors)
}
// ErrorOrNil returns an error interface if this Error represents
// a list of errors, or returns nil if the list of errors is empty. This
// function is useful at the end of accumulation to make sure that the value
// returned represents the existence of errors.
func (e *Error) ErrorOrNil() error {
if e == nil {
return nil
}
if len(e.Errors) == 0 {
return nil
}
return e
}
func (e *Error) GoString() string {
return fmt.Sprintf("*%#v", *e)
}
// WrappedErrors returns the list of errors that this Error is wrapping. It is
// an implementation of the errwrap.Wrapper interface so that multierror.Error
// can be used with that library.
//
// This method is not safe to be called concurrently. Unlike accessing the
// Errors field directly, this function also checks if the multierror is nil to
// prevent a null-pointer panic. It satisfies the errwrap.Wrapper interface.
func (e *Error) WrappedErrors() []error {
if e == nil {
return nil
}
return e.Errors
}
// Unwrap returns an error from Error (or nil if there are no errors).
// This error returned will further support Unwrap to get the next error,
// etc. The order will match the order of Errors in the multierror.Error
// at the time of calling.
//
// The resulting error supports errors.As/Is/Unwrap so you can continue
// to use the stdlib errors package to introspect further.
//
// This will perform a shallow copy of the errors slice. Any errors appended
// to this error after calling Unwrap will not be available until a new
// Unwrap is called on the multierror.Error.
func (e *Error) Unwrap() error {
// If we have no errors then we do nothing
if e == nil || len(e.Errors) == 0 {
return nil
}
// If we have exactly one error, we can just return that directly.
if len(e.Errors) == 1 {
return e.Errors[0]
}
// Shallow copy the slice
errs := make([]error, len(e.Errors))
copy(errs, e.Errors)
return chain(errs)
}
// chain implements the interfaces necessary for errors.Is/As/Unwrap to
// work in a deterministic way with multierror. A chain tracks a list of
// errors while accounting for the current represented error. This lets
// Is/As be meaningful.
//
// Unwrap returns the next error. In the cleanest form, Unwrap would return
// the wrapped error here but we can't do that if we want to properly
// get access to all the errors. Instead, users are recommended to use
// Is/As to get the correct error type out.
//
// Precondition: []error is non-empty (len > 0)
type chain []error
// Error implements the error interface
func (e chain) Error() string {
return e[0].Error()
}
// Unwrap implements errors.Unwrap by returning the next error in the
// chain or nil if there are no more errors.
func (e chain) Unwrap() error {
if len(e) == 1 {
return nil
}
return e[1:]
}
// As implements errors.As by attempting to map to the current value.
func (e chain) As(target interface{}) bool {
return errors.As(e[0], target)
}
// Is implements errors.Is by comparing the current value directly.
func (e chain) Is(target error) bool {
return errors.Is(e[0], target)
}
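In practice this means the standard errors helpers see every wrapped error, as in this sketch:

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/hashicorp/go-multierror"
)

func main() {
	err := multierror.Append(nil,
		fmt.Errorf("reading config: %w", os.ErrNotExist),
		errors.New("unrelated failure"),
	)

	// errors.Is walks the chain produced by Unwrap, so the sentinel is found
	// even though it sits alongside other errors.
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true

	var pathErr *os.PathError
	fmt.Println(errors.As(err, &pathErr)) // false: nothing here wraps *os.PathError
}
```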

37
vendor/github.com/hashicorp/go-multierror/prefix.go generated vendored Normal file
View File

@ -0,0 +1,37 @@
package multierror
import (
"fmt"
"github.com/hashicorp/errwrap"
)
// Prefix is a helper function that will prefix some text
// to the given error. If the error is a multierror.Error, then
// it will be prefixed to each wrapped error.
//
// This is useful to use when appending multiple multierrors
// together in order to give better scoping.
func Prefix(err error, prefix string) error {
if err == nil {
return nil
}
format := fmt.Sprintf("%s {{err}}", prefix)
switch err := err.(type) {
case *Error:
// Typed nils can reach here, so initialize if we are nil
if err == nil {
err = new(Error)
}
// Wrap each of the errors
for i, e := range err.Errors {
err.Errors[i] = errwrap.Wrapf(format, e)
}
return err
default:
return errwrap.Wrapf(format, err)
}
}
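A short sketch of scoping a batch of errors with Prefix before handing them upward:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	err := multierror.Append(nil,
		errors.New("missing name"),
		errors.New("missing port"),
	)

	// Every wrapped error gets the prefix, which keeps its origin visible
	// when several multierrors are later combined.
	prefixed := multierror.Prefix(err, "config:")
	fmt.Println(prefixed)
}
```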

16
vendor/github.com/hashicorp/go-multierror/sort.go generated vendored Normal file
View File

@ -0,0 +1,16 @@
package multierror
// Len implements sort.Interface function for length
func (err Error) Len() int {
return len(err.Errors)
}
// Swap implements sort.Interface function for swapping elements
func (err Error) Swap(i, j int) {
err.Errors[i], err.Errors[j] = err.Errors[j], err.Errors[i]
}
// Less implements sort.Interface function for determining order
func (err Error) Less(i, j int) bool {
return err.Errors[i].Error() < err.Errors[j].Error()
}
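Because Error implements sort.Interface, accumulated errors can be ordered by message before reporting; a sketch:

```go
package main

import (
	"errors"
	"fmt"
	"sort"

	"github.com/hashicorp/go-multierror"
)

func main() {
	err := multierror.Append(nil,
		errors.New("zeta failed"),
		errors.New("alpha failed"),
	)

	// *Error picks up the value-receiver Len/Swap/Less methods, so it can be
	// passed to sort.Sort directly; "alpha failed" is listed first afterwards.
	sort.Sort(err)
	fmt.Println(err)
}
```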

2
vendor/github.com/hashicorp/go-plugin/.gitignore generated vendored Normal file
View File

@ -0,0 +1,2 @@
.DS_Store
.idea

102
vendor/github.com/hashicorp/go-plugin/CHANGELOG.md generated vendored Normal file
View File

@ -0,0 +1,102 @@
## v1.6.0
CHANGES:
* plugin: Plugins written in other languages can optionally start to advertise whether they support gRPC broker multiplexing.
If the environment variable `PLUGIN_MULTIPLEX_GRPC` is set, it is safe to include a seventh field containing a boolean
value in the `|`-separated protocol negotiation line.
ENHANCEMENTS:
* Support muxing gRPC broker connections over a single listener [[GH-288](https://github.com/hashicorp/go-plugin/pull/288)]
* client: Configurable buffer size for reading plugin log lines [[GH-265](https://github.com/hashicorp/go-plugin/pull/265)]
* Use `buf` for proto generation [[GH-286](https://github.com/hashicorp/go-plugin/pull/286)]
* deps: bump golang.org/x/net to v0.17.0 [[GH-285](https://github.com/hashicorp/go-plugin/pull/285)]
* deps: bump golang.org/x/sys to v0.13.0 [[GH-285](https://github.com/hashicorp/go-plugin/pull/285)]
* deps: bump golang.org/x/text to v0.13.0 [[GH-285](https://github.com/hashicorp/go-plugin/pull/285)]
## v1.5.2
ENHANCEMENTS:
* client: New `UnixSocketConfig.TempDir` option allows setting the directory to use when creating plugin-specific Unix socket directories [[GH-282](https://github.com/hashicorp/go-plugin/pull/282)]
## v1.5.1
BUGS:
* server: `PLUGIN_UNIX_SOCKET_DIR` is consistently used for gRPC broker sockets as well as the initial socket [[GH-277](https://github.com/hashicorp/go-plugin/pull/277)]
ENHANCEMENTS:
* client: New `UnixSocketConfig` option in `ClientConfig` to support making the client's Unix sockets group-writable [[GH-277](https://github.com/hashicorp/go-plugin/pull/277)]
## v1.5.0
ENHANCEMENTS:
* client: New `runner.Runner` interface to support clients providing custom plugin command runner implementations [[GH-270](https://github.com/hashicorp/go-plugin/pull/270)]
* Accessible via new `ClientConfig` field `RunnerFunc`, which is mutually exclusive with `Cmd` and `Reattach`
* Reattaching support via `ReattachConfig` field `ReattachFunc`
* client: New `ClientConfig` field `SkipHostEnv` allows omitting the client process' own environment variables from the plugin command's environment [[GH-270](https://github.com/hashicorp/go-plugin/pull/270)]
* client: Add `ID()` method to `Client` for retrieving the pid or other unique ID of a running plugin [[GH-272](https://github.com/hashicorp/go-plugin/pull/272)]
* server: Support setting the directory to create Unix sockets in with the env var `PLUGIN_UNIX_SOCKET_DIR` [[GH-270](https://github.com/hashicorp/go-plugin/pull/270)]
* server: Support setting group write permission and a custom group name or gid owner with the env var `PLUGIN_UNIX_SOCKET_GROUP` [[GH-270](https://github.com/hashicorp/go-plugin/pull/270)]
## v1.4.11-rc1
ENHANCEMENTS:
* deps: bump protoreflect to v1.15.1 [[GH-264](https://github.com/hashicorp/go-plugin/pull/264)]
## v1.4.10
BUG FIXES:
* additional notes: ensure to close files [[GH-241](https://github.com/hashicorp/go-plugin/pull/241)]
ENHANCEMENTS:
* deps: Remove direct dependency on golang.org/x/net [[GH-240](https://github.com/hashicorp/go-plugin/pull/240)]
## v1.4.9
ENHANCEMENTS:
* client: Remove log warning introduced in 1.4.5 when SecureConfig is nil. [[GH-238](https://github.com/hashicorp/go-plugin/pull/238)]
## v1.4.8
BUG FIXES:
* Fix windows build: [[GH-227](https://github.com/hashicorp/go-plugin/pull/227)]
## v1.4.7
ENHANCEMENTS:
* More detailed error message on plugin start failure: [[GH-223](https://github.com/hashicorp/go-plugin/pull/223)]
## v1.4.6
BUG FIXES:
* server: Prevent gRPC broker goroutine leak when using `GRPCServer` type `GracefulStop()` or `Stop()` methods [[GH-220](https://github.com/hashicorp/go-plugin/pull/220)]
## v1.4.5
ENHANCEMENTS:
* client: log warning when SecureConfig is nil [[GH-207](https://github.com/hashicorp/go-plugin/pull/207)]
## v1.4.4
ENHANCEMENTS:
* client: increase level of plugin exit logs [[GH-195](https://github.com/hashicorp/go-plugin/pull/195)]
BUG FIXES:
* Bidirectional communication: fix bidirectional communication when AutoMTLS is enabled [[GH-193](https://github.com/hashicorp/go-plugin/pull/193)]
* RPC: Trim a spurious log message for plugins using RPC [[GH-186](https://github.com/hashicorp/go-plugin/pull/186)]

355
vendor/github.com/hashicorp/go-plugin/LICENSE generated vendored Normal file
View File

@ -0,0 +1,355 @@
Copyright (c) 2016 HashiCorp, Inc.
Mozilla Public License, version 2.0
1. Definitions
1.1. “Contributor”
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. “Contributor Version”
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributors Contribution.
1.3. “Contribution”
means Covered Software of a particular Contributor.
1.4. “Covered Software”
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. “Incompatible With Secondary Licenses”
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of version
1.1 or earlier of the License, but not also under the terms of a
Secondary License.
1.6. “Executable Form”
means any form of the work other than Source Code Form.
1.7. “Larger Work”
means a work that combines Covered Software with other material, in a separate
file or files, that is not Covered Software.
1.8. “License”
means this document.
1.9. “Licensable”
means having the right to grant, to the maximum extent possible, whether at the
time of the initial grant or subsequently, any and all of the rights conveyed by
this License.
1.10. “Modifications”
means any of the following:
a. any file in Source Code Form that results from an addition to, deletion
from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. “Patent Claims” of a Contributor
means any patent claim(s), including without limitation, method, process,
and apparatus claims, in any patent Licensable by such Contributor that
would be infringed, but for the grant of the License, by the making,
using, selling, offering for sale, having made, import, or transfer of
either its Contributions or its Contributor Version.
1.12. “Secondary License”
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. “Source Code Form”
means the form of the work preferred for making modifications.
1.14. “You” (or “Your”)
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, “control” means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or as
part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its Contributions
or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become
effective for each Contribution on the date the Contributor first distributes
such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this
License. No additional rights or licenses will be implied from the distribution
or licensing of Covered Software under this License. Notwithstanding Section
2.1(b) above, no patent license is granted by a Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third partys
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of its
Contributions.
This License does not grant any rights in the trademarks, service marks, or
logos of any Contributor (except as may be necessary to comply with the
notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this License
(see Section 10.2) or under the terms of a Secondary License (if permitted
under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions
are its original creation(s) or it has sufficient rights to grant the
rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable
copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under the
terms of this License. You must inform recipients that the Source Code Form
of the Covered Software is governed by the terms of this License, and how
they can obtain a copy of this License. You may not attempt to alter or
restrict the recipients rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this License,
or sublicense it under different terms, provided that the license for
the Executable Form does not attempt to limit or alter the recipients
rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for the
Covered Software. If the Larger Work is a combination of Covered Software
with a work governed by one or more Secondary Licenses, and the Covered
Software is not Incompatible With Secondary Licenses, this License permits
You to additionally distribute such Covered Software under the terms of
such Secondary License(s), so that the recipient of the Larger Work may, at
their option, further distribute the Covered Software under the terms of
either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including
copyright notices, patent notices, disclaimers of warranty, or limitations
of liability) contained within the Source Code Form of the Covered
Software, except that You may alter any license notices to the extent
required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on behalf
of any Contributor. You must make it absolutely clear that any such
warranty, support, indemnity, or liability obligation is offered by You
alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute, judicial
order, or regulation then You must: (a) comply with the terms of this License
to the maximum extent possible; and (b) describe the limitations and the code
they affect. Such description must be placed in a text file included with all
distributions of the Covered Software under this License. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
if such Contributor fails to notify You of the non-compliance by some
reasonable means prior to 60 days after You have come back into compliance.
Moreover, Your grants from a particular Contributor are reinstated on an
ongoing basis if such Contributor notifies You of the non-compliance by
some reasonable means, this is the first time You have received notice of
non-compliance with this License from such Contributor, and You become
compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions, counter-claims,
and cross-claims) alleging that a Contributor Version directly or
indirectly infringes any patent, then the rights granted to You by any and
all Contributors for the Covered Software under Section 2.1 of this License
shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an “as is” basis, without
warranty of any kind, either expressed, implied, or statutory, including,
without limitation, warranties that the Covered Software is free of defects,
merchantable, fit for a particular purpose or non-infringing. The entire
risk as to the quality and performance of the Covered Software is with You.
Should any Covered Software prove defective in any respect, You (not any
Contributor) assume the cost of any necessary servicing, repair, or
correction. This disclaimer of warranty constitutes an essential part of this
License. No use of any Covered Software is authorized under this License
except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from such
partys negligence to the extent applicable law prohibits such limitation.
Some jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of
a jurisdiction where the defendant maintains its principal place of business
and such litigation shall be governed by laws of that jurisdiction, without
reference to its conflict-of-law provisions. Nothing in this Section shall
prevent a partys ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter
hereof. If any provision of this License is held to be unenforceable, such
provision shall be reformed only to the extent necessary to make it
enforceable. Any law or regulation which provides that the language of a
contract shall be construed against the drafter shall not be used to construe
this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of
the License under which You originally received the Covered Software, or
under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a modified
version of this License if you rename the license and remove any
references to the name of the license steward (except to note that such
modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then
You may include the notice in a location (such as a LICENSE file in a relevant
directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is “Incompatible
With Secondary Licenses”, as defined by
the Mozilla Public License, v. 2.0.

165
vendor/github.com/hashicorp/go-plugin/README.md generated vendored Normal file
View File

@ -0,0 +1,165 @@
# Go Plugin System over RPC
`go-plugin` is a Go (golang) plugin system over RPC. It is the plugin system
that has been in use by HashiCorp tooling for over 4 years. While initially
created for [Packer](https://www.packer.io), it is additionally in use by
[Terraform](https://www.terraform.io), [Nomad](https://www.nomadproject.io),
[Vault](https://www.vaultproject.io),
[Boundary](https://www.boundaryproject.io),
and [Waypoint](https://www.waypointproject.io).
While the plugin system is over RPC, it is currently only designed to work
over a local [reliable] network. Plugins over a real network are not supported
and will lead to unexpected behavior.
This plugin system has been used on millions of machines across many different
projects and has proven to be battle hardened and ready for production use.
## Features
The HashiCorp plugin system supports a number of features:
**Plugins are Go interface implementations.** This makes writing and consuming
plugins feel very natural. To a plugin author: you just implement an
interface as if it were going to run in the same process. For a plugin user:
you just use and call functions on an interface as if it were in the same
process. This plugin system handles the communication in between.
**Cross-language support.** Plugins can be written (and consumed) by
almost every major language. This library supports serving plugins via
[gRPC](http://www.grpc.io). gRPC-based plugins enable plugins to be written
in any language.
**Complex arguments and return values are supported.** This library
provides APIs for handling complex arguments and return values such
as interfaces, `io.Reader/Writer`, etc. We do this by giving you a library
(`MuxBroker`) for creating new connections between the client/server to
serve additional interfaces or transfer raw data.
**Bidirectional communication.** Because the plugin system supports
complex arguments, the host process can send it interface implementations
and the plugin can call back into the host process.
**Built-in Logging.** Any plugins that use the `log` standard library
will have log data automatically sent to the host process. The host
process will mirror this output prefixed with the path to the plugin
binary. This makes debugging with plugins simple. If the host system
uses [hclog](https://github.com/hashicorp/go-hclog) then the log data
will be structured. If the plugin also uses hclog, logs from the plugin
will be sent to the host hclog and be structured.
**Protocol Versioning.** A very basic "protocol version" is supported that
can be incremented to invalidate any previous plugins. This is useful when
interface signatures are changing, protocol level changes are necessary,
etc. When a protocol version is incompatible, a human friendly error
message is shown to the end user.
**Stdout/Stderr Syncing.** While plugins are subprocesses, they can continue
to use stdout/stderr as usual and the output will get mirrored back to
the host process. The host process can control what `io.Writer` these
streams go to in order to prevent this from happening.
**TTY Preservation.** Plugin subprocesses are connected to the same
stdin file descriptor as the host process, allowing software that requires
a TTY to work. For example, a plugin can execute `ssh` and even though there
are multiple subprocesses and RPC happening, it will look and act perfectly
to the end user.
**Host upgrade while a plugin is running.** Plugins can be "reattached"
so that the host process can be upgraded while the plugin is still running.
This requires the host/plugin to know this is possible and daemonize
properly. `NewClient` takes a `ReattachConfig` to determine if and how to
reattach.
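For example (a hedged fragment, not from this repository: `handshake`, `pluginMap`, and `savedReattach` are placeholders, where `savedReattach` stands for a `*plugin.ReattachConfig` the host persisted, e.g. via `client.ReattachConfig()`, before upgrading itself):

```go
// Attach to an already-running plugin instead of launching a new one.
client := plugin.NewClient(&plugin.ClientConfig{
	HandshakeConfig: handshake,
	Plugins:         pluginMap,
	Reattach:        savedReattach, // no Cmd is set when reattaching
})
```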
**Cryptographically Secure Plugins.** Plugins can be verified with an expected
checksum and RPC communications can be configured to use TLS. The host process
must be properly secured to protect this configuration.
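As a hedged illustration (again with placeholder `handshake` and `pluginMap` values, and `expectedSHA256` standing for the known-good digest of the plugin binary), checksum pinning and TLS are both configured on the client:

```go
client := plugin.NewClient(&plugin.ClientConfig{
	HandshakeConfig: handshake,
	Plugins:         pluginMap,
	Cmd:             exec.Command("./plugin-binary"), // placeholder path
	SecureConfig: &plugin.SecureConfig{
		Checksum: expectedSHA256, // []byte SHA-256 digest of the binary
		Hash:     sha256.New(),
	},
	TLSConfig: &tls.Config{MinVersion: tls.VersionTLS12},
})
```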
## Architecture
The HashiCorp plugin system works by launching subprocesses and communicating
over RPC (using standard `net/rpc` or [gRPC](http://www.grpc.io)). A single
connection is made between any plugin and the host process. For net/rpc-based
plugins, we use a [connection multiplexing](https://github.com/hashicorp/yamux)
library to multiplex any other connections on top. For gRPC-based plugins,
the HTTP2 protocol handles multiplexing.
This architecture has a number of benefits:
* Plugins can't crash your host process: A panic in a plugin doesn't
panic the plugin user.
* Plugins are very easy to write: just write a Go application and `go build`.
Or use any other language to write a gRPC server with a tiny amount of
boilerplate to support go-plugin.
* Plugins are very easy to install: just put the binary in a location where
the host will find it (depends on the host but this library also provides
helpers), and the plugin host handles the rest.
* Plugins can be relatively secure: The plugin only has access to the
interfaces and args given to it, not to the entire memory space of the
process. Additionally, go-plugin can communicate with the plugin over
TLS.
## Usage
To use the plugin system, you must take the following steps. These are
high-level steps that must be done. Examples are available in the
`examples/` directory.
1. Choose the interface(s) you want to expose for plugins.
2. For each interface, implement an implementation of that interface
that communicates over a `net/rpc` connection or over a
[gRPC](http://www.grpc.io) connection or both. You'll have to implement
both a client and server implementation.
3. Create a `Plugin` implementation that knows how to create the RPC
client/server for a given plugin type.
4. Plugin authors call `plugin.Serve` to serve a plugin from the
`main` function.
5. Plugin users use `plugin.Client` to launch a subprocess and request
an interface implementation over RPC.
That's it! In practice, step 2 is the most tedious and time-consuming step.
Even so, it isn't very difficult and you can see examples in the `examples/`
directory as well as throughout our various open source projects.
For complete API documentation, see [GoDoc](https://godoc.org/github.com/hashicorp/go-plugin).
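To make the steps concrete, here is a hedged, minimal `net/rpc`-based sketch of the host side. The `Greeter*` names, the handshake values, and the `./greeter-plugin` binary path are invented for illustration and are not part of this library; the maintained, runnable versions live in the `examples/` directory:

```go
package main

import (
	"log"
	"net/rpc"
	"os/exec"

	plugin "github.com/hashicorp/go-plugin"
)

// Greeter is the hypothetical interface exposed as a plugin (step 1).
type Greeter interface {
	Greet() string
}

// GreeterRPC is the client half of the net/rpc implementation (step 2).
type GreeterRPC struct{ client *rpc.Client }

func (g *GreeterRPC) Greet() string {
	var resp string
	if err := g.client.Call("Plugin.Greet", new(interface{}), &resp); err != nil {
		panic(err)
	}
	return resp
}

// GreeterRPCServer is the server half, run inside the plugin process (step 2).
type GreeterRPCServer struct{ Impl Greeter }

func (s *GreeterRPCServer) Greet(args interface{}, resp *string) error {
	*resp = s.Impl.Greet()
	return nil
}

// GreeterPlugin glues the two halves together (step 3).
type GreeterPlugin struct{ Impl Greeter }

func (p *GreeterPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
	return &GreeterRPCServer{Impl: p.Impl}, nil
}

func (p *GreeterPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return &GreeterRPC{client: c}, nil
}

// handshake must match between host and plugin; the values are placeholders.
var handshake = plugin.HandshakeConfig{
	ProtocolVersion:  1,
	MagicCookieKey:   "GREETER_PLUGIN",
	MagicCookieValue: "hello",
}

// main is the host side (step 5). The plugin binary's own main would call
// plugin.Serve with the same handshake and plugin map (step 4).
func main() {
	client := plugin.NewClient(&plugin.ClientConfig{
		HandshakeConfig: handshake,
		Plugins:         map[string]plugin.Plugin{"greeter": &GreeterPlugin{}},
		Cmd:             exec.Command("./greeter-plugin"), // placeholder binary
	})
	defer client.Kill()

	rpcClient, err := client.Client()
	if err != nil {
		log.Fatal(err)
	}
	raw, err := rpcClient.Dispense("greeter")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(raw.(Greeter).Greet())
}
```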
## Roadmap
Our plugin system is constantly evolving. As we use the plugin system for
new projects or for new features in existing projects, we constantly find
improvements we can make.
At this point in time, the roadmap for the plugin system is:
**Semantic Versioning.** Plugins will be able to implement a semantic version.
This plugin system will give host processes a system for constraining
versions. This is in addition to the protocol versioning already present
which is more for larger underlying changes.
## What About Shared Libraries?
When we started using plugins (late 2012, early 2013), plugins over RPC
were the only option since Go didn't support dynamic library loading. Today,
Go supports the [plugin](https://golang.org/pkg/plugin/) standard library with
a number of limitations. Since 2012, our plugin system has stabilized
from tens of millions of users using it, and has many benefits we've come to
value greatly.
For example, we use this plugin system in
[Vault](https://www.vaultproject.io) where dynamic library loading is
not acceptable for security reasons. That is an extreme
example, but we believe our library system has more upsides than downsides
over dynamic library loading and since we've had it built and tested for years,
we'll continue to use it.
Shared libraries have one major advantage over our system: much higher
performance. In real-world scenarios across our various tools,
we've never required any more performance out of our plugin system and it
has seen very high throughput, so this isn't a concern for us at the moment.

vendor/github.com/hashicorp/go-plugin/buf.gen.yaml generated vendored Normal file

@ -0,0 +1,14 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0
version: v1
plugins:
- plugin: buf.build/protocolbuffers/go
out: .
opt:
- paths=source_relative
- plugin: buf.build/grpc/go:v1.3.0
out: .
opt:
- paths=source_relative
- require_unimplemented_servers=false

vendor/github.com/hashicorp/go-plugin/buf.yaml generated vendored Normal file

@ -0,0 +1,7 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0
version: v1
build:
excludes:
- examples/

vendor/github.com/hashicorp/go-plugin/client.go generated vendored Normal file
File diff suppressed because it is too large

vendor/github.com/hashicorp/go-plugin/constants.go generated vendored Normal file

@ -0,0 +1,16 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
const (
// EnvUnixSocketDir specifies the directory that _plugins_ should create unix
// sockets in. Does not affect client behavior.
EnvUnixSocketDir = "PLUGIN_UNIX_SOCKET_DIR"
// EnvUnixSocketGroup specifies the owning, writable group to set for Unix
// sockets created by _plugins_. Does not affect client behavior.
EnvUnixSocketGroup = "PLUGIN_UNIX_SOCKET_GROUP"
envMultiplexGRPC = "PLUGIN_MULTIPLEX_GRPC"
)

vendor/github.com/hashicorp/go-plugin/discover.go generated vendored Normal file

@ -0,0 +1,31 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"path/filepath"
)
// Discover discovers plugins that are in a given directory.
//
// The directory doesn't need to be absolute. For example, "." will work fine.
//
// This currently assumes any file matching the glob is a plugin.
// In the future this may be smarter about checking that a file is
// executable and so on.
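//
// A hedged usage example from a consumer's point of view (the glob and
// directory below are placeholders):
//
//	paths, err := plugin.Discover("example-*", "/opt/plugins")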
//
// TODO: test
func Discover(glob, dir string) ([]string, error) {
var err error
// Make the directory absolute if it isn't already
if !filepath.IsAbs(dir) {
dir, err = filepath.Abs(dir)
if err != nil {
return nil, err
}
}
return filepath.Glob(filepath.Join(dir, glob))
}

vendor/github.com/hashicorp/go-plugin/error.go generated vendored Normal file

@ -0,0 +1,27 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
// This is a type that wraps error types so that they can be messaged
// across RPC channels. Since "error" is an interface, we can't always
// gob-encode the underlying structure. This is a valid error interface
// implementer that we will push across.
type BasicError struct {
Message string
}
// NewBasicError is used to create a BasicError.
//
// err is allowed to be nil.
func NewBasicError(err error) *BasicError {
if err == nil {
return nil
}
return &BasicError{err.Error()}
}
func (e *BasicError) Error() string {
return e.Message
}

vendor/github.com/hashicorp/go-plugin/grpc_broker.go generated vendored Normal file

@ -0,0 +1,654 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"context"
"crypto/tls"
"errors"
"fmt"
"log"
"net"
"sync"
"sync/atomic"
"time"
"github.com/hashicorp/go-plugin/internal/grpcmux"
"github.com/hashicorp/go-plugin/internal/plugin"
"github.com/hashicorp/go-plugin/runner"
"github.com/oklog/run"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
)
// streamer interface is used in the broker to send/receive connection
// information.
type streamer interface {
Send(*plugin.ConnInfo) error
Recv() (*plugin.ConnInfo, error)
Close()
}
// sendErr is used to pass errors back during a send.
type sendErr struct {
i *plugin.ConnInfo
ch chan error
}
// gRPCBrokerServer is used by the plugin to start a stream and to send
// connection information to/from the plugin. Implements GRPCBrokerServer and
// streamer interfaces.
type gRPCBrokerServer struct {
plugin.UnimplementedGRPCBrokerServer
// send is used to send connection info to the gRPC stream.
send chan *sendErr
// recv is used to receive connection info from the gRPC stream.
recv chan *plugin.ConnInfo
// quit closes down the stream.
quit chan struct{}
// o is used to ensure we close the quit channel only once.
o sync.Once
}
func newGRPCBrokerServer() *gRPCBrokerServer {
return &gRPCBrokerServer{
send: make(chan *sendErr),
recv: make(chan *plugin.ConnInfo),
quit: make(chan struct{}),
}
}
// StartStream implements the GRPCBrokerServer interface and will block until
// the quit channel is closed or the context reports Done. The stream will pass
// connection information to/from the client.
func (s *gRPCBrokerServer) StartStream(stream plugin.GRPCBroker_StartStreamServer) error {
doneCh := stream.Context().Done()
defer s.Close()
// Process send stream
go func() {
for {
select {
case <-doneCh:
return
case <-s.quit:
return
case se := <-s.send:
err := stream.Send(se.i)
se.ch <- err
}
}
}()
// Process receive stream
for {
i, err := stream.Recv()
if err != nil {
return err
}
select {
case <-doneCh:
return nil
case <-s.quit:
return nil
case s.recv <- i:
}
}
return nil
}
// Send is used by the GRPCBroker to pass connection information into the stream
// to the client.
func (s *gRPCBrokerServer) Send(i *plugin.ConnInfo) error {
ch := make(chan error)
defer close(ch)
select {
case <-s.quit:
return errors.New("broker closed")
case s.send <- &sendErr{
i: i,
ch: ch,
}:
}
return <-ch
}
// Recv is used by the GRPCBroker to pass connection information that has been
// sent from the client from the stream to the broker.
func (s *gRPCBrokerServer) Recv() (*plugin.ConnInfo, error) {
select {
case <-s.quit:
return nil, errors.New("broker closed")
case i := <-s.recv:
return i, nil
}
}
// Close closes the quit channel, shutting down the stream.
func (s *gRPCBrokerServer) Close() {
s.o.Do(func() {
close(s.quit)
})
}
// gRPCBrokerClientImpl is used by the client to start a stream and to send
// connection information to/from the client. Implements GRPCBrokerClient and
// streamer interfaces.
type gRPCBrokerClientImpl struct {
// client is the underlying GRPC client used to make calls to the server.
client plugin.GRPCBrokerClient
// send is used to send connection info to the gRPC stream.
send chan *sendErr
// recv is used to receive connection info from the gRPC stream.
recv chan *plugin.ConnInfo
// quit closes down the stream.
quit chan struct{}
// o is used to ensure we close the quit channel only once.
o sync.Once
}
func newGRPCBrokerClient(conn *grpc.ClientConn) *gRPCBrokerClientImpl {
return &gRPCBrokerClientImpl{
client: plugin.NewGRPCBrokerClient(conn),
send: make(chan *sendErr),
recv: make(chan *plugin.ConnInfo),
quit: make(chan struct{}),
}
}
// StartStream implements the GRPCBrokerClient interface and will block until
// the quit channel is closed or the context reports Done. The stream will pass
// connection information to/from the plugin.
func (s *gRPCBrokerClientImpl) StartStream() error {
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
defer s.Close()
stream, err := s.client.StartStream(ctx)
if err != nil {
return err
}
doneCh := stream.Context().Done()
go func() {
for {
select {
case <-doneCh:
return
case <-s.quit:
return
case se := <-s.send:
err := stream.Send(se.i)
se.ch <- err
}
}
}()
for {
i, err := stream.Recv()
if err != nil {
return err
}
select {
case <-doneCh:
return nil
case <-s.quit:
return nil
case s.recv <- i:
}
}
return nil
}
// Send is used by the GRPCBroker to pass connection information into the stream
// to the plugin.
func (s *gRPCBrokerClientImpl) Send(i *plugin.ConnInfo) error {
ch := make(chan error)
defer close(ch)
select {
case <-s.quit:
return errors.New("broker closed")
case s.send <- &sendErr{
i: i,
ch: ch,
}:
}
return <-ch
}
// Recv is used by the GRPCBroker to pass connection information that has been
// sent from the plugin to the broker.
func (s *gRPCBrokerClientImpl) Recv() (*plugin.ConnInfo, error) {
select {
case <-s.quit:
return nil, errors.New("broker closed")
case i := <-s.recv:
return i, nil
}
}
// Close closes the quit channel, shutting down the stream.
func (s *gRPCBrokerClientImpl) Close() {
s.o.Do(func() {
close(s.quit)
})
}
// GRPCBroker is responsible for brokering connections by unique ID.
//
// It is used by plugins to create multiple gRPC connections and data
// streams between the plugin process and the host process.
//
// This allows a plugin to request a channel with a specific ID to connect to
// or accept a connection from, and the broker handles the details of
// holding these channels open while they're being negotiated.
//
// The Plugin interface has access to these for both Server and Client.
// The broker can be used by either (optionally) to reserve and connect to
// new streams. This is useful for complex args and return values,
// or anything else you might need a data stream for.
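//
// A rough usage sketch (illustrative only; the extra gRPC service and its
// registration call are hypothetical, not part of this package):
//
//	// One side reserves an ID and serves an additional gRPC server on it.
//	id := broker.NextId()
//	go broker.AcceptAndServe(id, func(opts []grpc.ServerOption) *grpc.Server {
//		s := grpc.NewServer(opts...)
//		// proto.RegisterExtraServer(s, &extraImpl{}) // hypothetical
//		return s
//	})
//	// ...the ID is then sent to the other side inside a normal RPC message...
//
//	// The other side dials the brokered connection by that same ID.
//	conn, err := broker.Dial(id)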
type GRPCBroker struct {
nextId uint32
streamer streamer
tls *tls.Config
doneCh chan struct{}
o sync.Once
clientStreams map[uint32]*gRPCBrokerPending
serverStreams map[uint32]*gRPCBrokerPending
unixSocketCfg UnixSocketConfig
addrTranslator runner.AddrTranslator
dialMutex sync.Mutex
muxer grpcmux.GRPCMuxer
sync.Mutex
}
type gRPCBrokerPending struct {
ch chan *plugin.ConnInfo
doneCh chan struct{}
once sync.Once
}
func newGRPCBroker(s streamer, tls *tls.Config, unixSocketCfg UnixSocketConfig, addrTranslator runner.AddrTranslator, muxer grpcmux.GRPCMuxer) *GRPCBroker {
return &GRPCBroker{
streamer: s,
tls: tls,
doneCh: make(chan struct{}),
clientStreams: make(map[uint32]*gRPCBrokerPending),
serverStreams: make(map[uint32]*gRPCBrokerPending),
muxer: muxer,
unixSocketCfg: unixSocketCfg,
addrTranslator: addrTranslator,
}
}
// Accept accepts a connection by ID.
//
// This should not be called multiple times with the same ID at one time.
func (b *GRPCBroker) Accept(id uint32) (net.Listener, error) {
if b.muxer.Enabled() {
p := b.getServerStream(id)
go func() {
err := b.listenForKnocks(id)
if err != nil {
log.Printf("[ERR]: error listening for knocks, id: %d, error: %s", id, err)
}
}()
ln, err := b.muxer.Listener(id, p.doneCh)
if err != nil {
return nil, err
}
ln = &rmListener{
Listener: ln,
close: func() error {
// We could have multiple listeners on the same ID, so use sync.Once
// for closing doneCh to ensure we don't get a panic.
p.once.Do(func() {
close(p.doneCh)
})
b.Lock()
defer b.Unlock()
// No longer need to listen for knocks once the listener is closed.
delete(b.serverStreams, id)
return nil
},
}
return ln, nil
}
listener, err := serverListener(b.unixSocketCfg)
if err != nil {
return nil, err
}
advertiseNet := listener.Addr().Network()
advertiseAddr := listener.Addr().String()
if b.addrTranslator != nil {
advertiseNet, advertiseAddr, err = b.addrTranslator.HostToPlugin(advertiseNet, advertiseAddr)
if err != nil {
return nil, err
}
}
err = b.streamer.Send(&plugin.ConnInfo{
ServiceId: id,
Network: advertiseNet,
Address: advertiseAddr,
})
if err != nil {
return nil, err
}
return listener, nil
}
// AcceptAndServe is used to accept a specific stream ID and immediately
// serve a gRPC server on that stream ID. This is used to easily serve
// complex arguments. Each AcceptAndServe call opens a new listener socket and
// sends the connection info down the stream to the dialer. Since a new
// connection is opened every call, these calls should be used sparingly.
// Multiple gRPC server implementations can be registered to a single
// AcceptAndServe call.
func (b *GRPCBroker) AcceptAndServe(id uint32, newGRPCServer func([]grpc.ServerOption) *grpc.Server) {
ln, err := b.Accept(id)
if err != nil {
log.Printf("[ERR] plugin: plugin acceptAndServe error: %s", err)
return
}
defer ln.Close()
var opts []grpc.ServerOption
if b.tls != nil {
opts = []grpc.ServerOption{grpc.Creds(credentials.NewTLS(b.tls))}
}
server := newGRPCServer(opts)
// Here we use a run group to close this goroutine if the server is shutdown
// or the broker is shutdown.
var g run.Group
{
// Serve on the listener, if shutting down call GracefulStop.
g.Add(func() error {
return server.Serve(ln)
}, func(err error) {
server.GracefulStop()
})
}
{
// block on the closeCh or the doneCh. If we are shutting down close the
// closeCh.
closeCh := make(chan struct{})
g.Add(func() error {
select {
case <-b.doneCh:
case <-closeCh:
}
return nil
}, func(err error) {
close(closeCh)
})
}
// Block until we are done
g.Run()
}
// Close closes the stream and all servers.
func (b *GRPCBroker) Close() error {
b.streamer.Close()
b.o.Do(func() {
close(b.doneCh)
})
return nil
}
func (b *GRPCBroker) listenForKnocks(id uint32) error {
p := b.getServerStream(id)
for {
select {
case msg := <-p.ch:
// Shouldn't be possible.
if msg.ServiceId != id {
return fmt.Errorf("knock received with wrong service ID; expected %d but got %d", id, msg.ServiceId)
}
// Also shouldn't be possible.
if msg.Knock == nil || !msg.Knock.Knock || msg.Knock.Ack {
return fmt.Errorf("knock received for service ID %d with incorrect values; knock=%+v", id, msg.Knock)
}
// Successful knock, open the door for the given ID.
var ackError string
err := b.muxer.AcceptKnock(id)
if err != nil {
ackError = err.Error()
}
// Send back an acknowledgement to allow the client to start dialling.
err = b.streamer.Send(&plugin.ConnInfo{
ServiceId: id,
Knock: &plugin.ConnInfo_Knock{
Knock: true,
Ack: true,
Error: ackError,
},
})
if err != nil {
return fmt.Errorf("error sending back knock acknowledgement: %w", err)
}
case <-p.doneCh:
return nil
}
}
}
func (b *GRPCBroker) knock(id uint32) error {
// Send a knock.
err := b.streamer.Send(&plugin.ConnInfo{
ServiceId: id,
Knock: &plugin.ConnInfo_Knock{
Knock: true,
},
})
if err != nil {
return err
}
// Wait for the ack.
p := b.getClientStream(id)
select {
case msg := <-p.ch:
if msg.ServiceId != id {
return fmt.Errorf("handshake failed for multiplexing on id %d; got response for %d", id, msg.ServiceId)
}
if msg.Knock == nil || !msg.Knock.Knock || !msg.Knock.Ack {
return fmt.Errorf("handshake failed for multiplexing on id %d; expected knock and ack, but got %+v", id, msg.Knock)
}
if msg.Knock.Error != "" {
return fmt.Errorf("failed to knock for id %d: %s", id, msg.Knock.Error)
}
case <-time.After(5 * time.Second):
return fmt.Errorf("timeout waiting for multiplexing knock handshake on id %d", id)
}
return nil
}
func (b *GRPCBroker) muxDial(id uint32) func(string, time.Duration) (net.Conn, error) {
return func(string, time.Duration) (net.Conn, error) {
b.dialMutex.Lock()
defer b.dialMutex.Unlock()
// Tell the other side the listener ID it should give the next stream to.
err := b.knock(id)
if err != nil {
return nil, fmt.Errorf("failed to knock before dialling client: %w", err)
}
conn, err := b.muxer.Dial()
if err != nil {
return nil, err
}
return conn, nil
}
}
// Dial opens a connection by ID.
func (b *GRPCBroker) Dial(id uint32) (conn *grpc.ClientConn, err error) {
if b.muxer.Enabled() {
return dialGRPCConn(b.tls, b.muxDial(id))
}
var c *plugin.ConnInfo
// Open the stream
p := b.getClientStream(id)
select {
case c = <-p.ch:
close(p.doneCh)
case <-time.After(5 * time.Second):
return nil, fmt.Errorf("timeout waiting for connection info")
}
network, address := c.Network, c.Address
if b.addrTranslator != nil {
network, address, err = b.addrTranslator.PluginToHost(network, address)
if err != nil {
return nil, err
}
}
var addr net.Addr
switch network {
case "tcp":
addr, err = net.ResolveTCPAddr("tcp", address)
case "unix":
addr, err = net.ResolveUnixAddr("unix", address)
default:
err = fmt.Errorf("Unknown address type: %s", c.Address)
}
if err != nil {
return nil, err
}
return dialGRPCConn(b.tls, netAddrDialer(addr))
}
// NextId returns a unique ID to use next.
//
// It is possible for very long-running plugin hosts to wrap this value,
// though it would require a very large amount of calls. In practice
// we've never seen it happen.
func (m *GRPCBroker) NextId() uint32 {
return atomic.AddUint32(&m.nextId, 1)
}
// Run starts the brokering and should be executed in a goroutine, since it
// blocks forever, or until the session closes.
//
// Uses of GRPCBroker never need to call this. It is called internally by
// the plugin host/client.
func (m *GRPCBroker) Run() {
for {
msg, err := m.streamer.Recv()
if err != nil {
// Once we receive an error, just exit
break
}
// Initialize the waiter
var p *gRPCBrokerPending
if msg.Knock != nil && msg.Knock.Knock && !msg.Knock.Ack {
p = m.getServerStream(msg.ServiceId)
// The server side doesn't close the channel immediately as it needs
// to continuously listen for knocks.
} else {
p = m.getClientStream(msg.ServiceId)
go m.timeoutWait(msg.ServiceId, p)
}
select {
case p.ch <- msg:
default:
}
}
}
// getClientStream returns the pending buffer used to receive new connection
// info and knock acks for the given stream ID.
func (m *GRPCBroker) getClientStream(id uint32) *gRPCBrokerPending {
m.Lock()
defer m.Unlock()
p, ok := m.clientStreams[id]
if ok {
return p
}
m.clientStreams[id] = &gRPCBrokerPending{
ch: make(chan *plugin.ConnInfo, 1),
doneCh: make(chan struct{}),
}
return m.clientStreams[id]
}
// getServerStream returns the pending buffer used to receive knocks for a
// multiplexed stream ID that this side is listening on. Not used unless
// multiplexing is enabled.
func (m *GRPCBroker) getServerStream(id uint32) *gRPCBrokerPending {
m.Lock()
defer m.Unlock()
p, ok := m.serverStreams[id]
if ok {
return p
}
m.serverStreams[id] = &gRPCBrokerPending{
ch: make(chan *plugin.ConnInfo, 1),
doneCh: make(chan struct{}),
}
return m.serverStreams[id]
}
func (m *GRPCBroker) timeoutWait(id uint32, p *gRPCBrokerPending) {
// Wait for the stream to either be picked up and connected, or
// for a timeout.
select {
case <-p.doneCh:
case <-time.After(5 * time.Second):
}
m.Lock()
defer m.Unlock()
// Delete the stream so no one else can grab it
delete(m.clientStreams, id)
}

vendor/github.com/hashicorp/go-plugin/grpc_client.go generated vendored Normal file

@ -0,0 +1,134 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"context"
"crypto/tls"
"fmt"
"math"
"net"
"time"
"github.com/hashicorp/go-plugin/internal/plugin"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/health/grpc_health_v1"
)
func dialGRPCConn(tls *tls.Config, dialer func(string, time.Duration) (net.Conn, error), dialOpts ...grpc.DialOption) (*grpc.ClientConn, error) {
// Build dialing options.
opts := make([]grpc.DialOption, 0)
// We use a custom dialer so that we can connect over unix domain sockets.
opts = append(opts, grpc.WithDialer(dialer))
// Fail right away
opts = append(opts, grpc.FailOnNonTempDialError(true))
// If we have no TLS configuration set, we need to explicitly tell grpc
// that we're connecting with an insecure connection.
if tls == nil {
opts = append(opts, grpc.WithInsecure())
} else {
opts = append(opts, grpc.WithTransportCredentials(
credentials.NewTLS(tls)))
}
opts = append(opts,
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(math.MaxInt32)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(math.MaxInt32)))
// Add our custom options if we have any
opts = append(opts, dialOpts...)
// Connect. Note the first parameter is unused because we use a custom
// dialer that has the state to see the address.
conn, err := grpc.Dial("unused", opts...)
if err != nil {
return nil, err
}
return conn, nil
}
// newGRPCClient creates a new GRPCClient. The Client argument is expected
// to be successfully started already with a lock held.
func newGRPCClient(doneCtx context.Context, c *Client) (*GRPCClient, error) {
conn, err := dialGRPCConn(c.config.TLSConfig, c.dialer, c.config.GRPCDialOptions...)
if err != nil {
return nil, err
}
muxer, err := c.getGRPCMuxer(c.address)
if err != nil {
return nil, err
}
// Start the broker.
brokerGRPCClient := newGRPCBrokerClient(conn)
broker := newGRPCBroker(brokerGRPCClient, c.config.TLSConfig, c.unixSocketCfg, c.runner, muxer)
go broker.Run()
go brokerGRPCClient.StartStream()
// Start the stdio client
stdioClient, err := newGRPCStdioClient(doneCtx, c.logger.Named("stdio"), conn)
if err != nil {
return nil, err
}
go stdioClient.Run(c.config.SyncStdout, c.config.SyncStderr)
cl := &GRPCClient{
Conn: conn,
Plugins: c.config.Plugins,
doneCtx: doneCtx,
broker: broker,
controller: plugin.NewGRPCControllerClient(conn),
}
return cl, nil
}
// GRPCClient connects to a GRPCServer over gRPC to dispense plugin types.
type GRPCClient struct {
Conn *grpc.ClientConn
Plugins map[string]Plugin
doneCtx context.Context
broker *GRPCBroker
controller plugin.GRPCControllerClient
}
// ClientProtocol impl.
func (c *GRPCClient) Close() error {
c.broker.Close()
c.controller.Shutdown(c.doneCtx, &plugin.Empty{})
return c.Conn.Close()
}
// ClientProtocol impl.
func (c *GRPCClient) Dispense(name string) (interface{}, error) {
raw, ok := c.Plugins[name]
if !ok {
return nil, fmt.Errorf("unknown plugin type: %s", name)
}
p, ok := raw.(GRPCPlugin)
if !ok {
return nil, fmt.Errorf("plugin %q doesn't support gRPC", name)
}
return p.GRPCClient(c.doneCtx, c.broker, c.Conn)
}
// ClientProtocol impl.
func (c *GRPCClient) Ping() error {
client := grpc_health_v1.NewHealthClient(c.Conn)
_, err := client.Check(context.Background(), &grpc_health_v1.HealthCheckRequest{
Service: GRPCServiceName,
})
return err
}


@ -0,0 +1,26 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"context"
"github.com/hashicorp/go-plugin/internal/plugin"
)
// GRPCControllerServer handles shutdown calls to terminate the server when the
// plugin client is closed.
type grpcControllerServer struct {
server *GRPCServer
}
// Shutdown stops the grpc server. It first will attempt a graceful stop, then a
// full stop on the server.
func (s *grpcControllerServer) Shutdown(ctx context.Context, _ *plugin.Empty) (*plugin.Empty, error) {
resp := &plugin.Empty{}
// TODO: figure out why GracefulStop doesn't work.
s.server.Stop()
return resp, nil
}

vendor/github.com/hashicorp/go-plugin/grpc_server.go generated vendored Normal file

@ -0,0 +1,167 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io"
"net"
hclog "github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin/internal/grpcmux"
"github.com/hashicorp/go-plugin/internal/plugin"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/health"
"google.golang.org/grpc/health/grpc_health_v1"
"google.golang.org/grpc/reflection"
)
// GRPCServiceName is the name of the service that the health check should
// return as passing.
const GRPCServiceName = "plugin"
// DefaultGRPCServer can be used with the "GRPCServer" field for Server
// as a default factory method to create a gRPC server with no extra options.
func DefaultGRPCServer(opts []grpc.ServerOption) *grpc.Server {
return grpc.NewServer(opts...)
}
// GRPCServer is a ServerType implementation that serves plugins over
// gRPC. This allows plugins to easily be written for other languages.
//
// The GRPCServer outputs a custom configuration as a base64-encoded
// JSON structure represented by the GRPCServerConfig config structure.
type GRPCServer struct {
// Plugins are the list of plugins to serve.
Plugins map[string]Plugin
// Server is the actual server that will accept connections. This
// will be used for plugin registration as well.
Server func([]grpc.ServerOption) *grpc.Server
// TLS should be the TLS configuration if available. If this is nil,
// the connection will not have transport security.
TLS *tls.Config
// DoneCh is the channel that is closed when this server has exited.
DoneCh chan struct{}
// Stdout/StderrLis are the readers for stdout/stderr that will be copied
// to the stdout/stderr connection that is output.
Stdout io.Reader
Stderr io.Reader
config GRPCServerConfig
server *grpc.Server
broker *GRPCBroker
stdioServer *grpcStdioServer
logger hclog.Logger
muxer *grpcmux.GRPCServerMuxer
}
// ServerProtocol impl.
func (s *GRPCServer) Init() error {
// Create our server
var opts []grpc.ServerOption
if s.TLS != nil {
opts = append(opts, grpc.Creds(credentials.NewTLS(s.TLS)))
}
s.server = s.Server(opts)
// Register the health service
healthCheck := health.NewServer()
healthCheck.SetServingStatus(
GRPCServiceName, grpc_health_v1.HealthCheckResponse_SERVING)
grpc_health_v1.RegisterHealthServer(s.server, healthCheck)
// Register the reflection service
reflection.Register(s.server)
// Register the broker service
brokerServer := newGRPCBrokerServer()
plugin.RegisterGRPCBrokerServer(s.server, brokerServer)
s.broker = newGRPCBroker(brokerServer, s.TLS, unixSocketConfigFromEnv(), nil, s.muxer)
go s.broker.Run()
// Register the controller
controllerServer := &grpcControllerServer{server: s}
plugin.RegisterGRPCControllerServer(s.server, controllerServer)
// Register the stdio service
s.stdioServer = newGRPCStdioServer(s.logger, s.Stdout, s.Stderr)
plugin.RegisterGRPCStdioServer(s.server, s.stdioServer)
// Register all our plugins onto the gRPC server.
for k, raw := range s.Plugins {
p, ok := raw.(GRPCPlugin)
if !ok {
return fmt.Errorf("%q is not a GRPC-compatible plugin", k)
}
if err := p.GRPCServer(s.broker, s.server); err != nil {
return fmt.Errorf("error registering %q: %s", k, err)
}
}
return nil
}
// Stop calls Stop on the underlying grpc.Server and Close on the underlying
// grpc.Broker if present.
func (s *GRPCServer) Stop() {
s.server.Stop()
if s.broker != nil {
s.broker.Close()
s.broker = nil
}
}
// GracefulStop calls GracefulStop on the underlying grpc.Server and Close on
// the underlying grpc.Broker if present.
func (s *GRPCServer) GracefulStop() {
s.server.GracefulStop()
if s.broker != nil {
s.broker.Close()
s.broker = nil
}
}
// Config is the GRPCServerConfig encoded as JSON then base64.
func (s *GRPCServer) Config() string {
// Create a buffer that will contain our final contents
var buf bytes.Buffer
// Wrap the base64 encoding with JSON encoding.
if err := json.NewEncoder(&buf).Encode(s.config); err != nil {
// We panic since this shouldn't happen under any scenario. We
// carefully control the structure being encoded here and it should
// always be successful.
panic(err)
}
return buf.String()
}
func (s *GRPCServer) Serve(lis net.Listener) {
defer close(s.DoneCh)
err := s.server.Serve(lis)
if err != nil {
s.logger.Error("grpc server", "error", err)
}
}
// GRPCServerConfig is the extra configuration passed along for consumers
// to facilitate using GRPC plugins.
type GRPCServerConfig struct {
StdoutAddr string `json:"stdout_addr"`
StderrAddr string `json:"stderr_addr"`
}

vendor/github.com/hashicorp/go-plugin/grpc_stdio.go generated vendored Normal file

@ -0,0 +1,210 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"bufio"
"bytes"
"context"
"io"
empty "github.com/golang/protobuf/ptypes/empty"
hclog "github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin/internal/plugin"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// grpcStdioBuffer is the buffer size we try to fill when sending a chunk of
// stdio data. This is currently 1 KB for no reason other than that seems like
// enough (stdio data isn't that common) and is fairly low.
const grpcStdioBuffer = 1 * 1024
// grpcStdioServer implements the Stdio service and streams stdout/stderr.
type grpcStdioServer struct {
stdoutCh <-chan []byte
stderrCh <-chan []byte
}
// newGRPCStdioServer creates a new grpcStdioServer and starts the stream
// copying for the given out and err readers.
//
// This must only be called ONCE per srcOut, srcErr.
func newGRPCStdioServer(log hclog.Logger, srcOut, srcErr io.Reader) *grpcStdioServer {
stdoutCh := make(chan []byte)
stderrCh := make(chan []byte)
// Begin copying the streams
go copyChan(log, stdoutCh, srcOut)
go copyChan(log, stderrCh, srcErr)
// Construct our server
return &grpcStdioServer{
stdoutCh: stdoutCh,
stderrCh: stderrCh,
}
}
// StreamStdio streams our stdout/err as the response.
func (s *grpcStdioServer) StreamStdio(
_ *empty.Empty,
srv plugin.GRPCStdio_StreamStdioServer,
) error {
// Share the same data value between runs. Sending this over the wire
// marshals it so we can reuse this.
var data plugin.StdioData
for {
// Read our data
select {
case data.Data = <-s.stdoutCh:
data.Channel = plugin.StdioData_STDOUT
case data.Data = <-s.stderrCh:
data.Channel = plugin.StdioData_STDERR
case <-srv.Context().Done():
return nil
}
// Not sure if this is possible, but if we somehow got here and
// we didn't populate any data at all, then just continue.
if len(data.Data) == 0 {
continue
}
// Send our data to the client.
if err := srv.Send(&data); err != nil {
return err
}
}
}
// grpcStdioClient wraps the stdio service as a client to copy
// the stdio data to output writers.
type grpcStdioClient struct {
log hclog.Logger
stdioClient plugin.GRPCStdio_StreamStdioClient
}
// newGRPCStdioClient creates a grpcStdioClient. This will perform the
// initial connection to the stdio service. If the stdio service is unavailable
// then this will be a no-op. This allows this to work without error for
// plugins that don't support this.
func newGRPCStdioClient(
ctx context.Context,
log hclog.Logger,
conn *grpc.ClientConn,
) (*grpcStdioClient, error) {
client := plugin.NewGRPCStdioClient(conn)
// Connect immediately to the endpoint
stdioClient, err := client.StreamStdio(ctx, &empty.Empty{})
// If we get an Unavailable or Unimplemented error, this means that the plugin isn't
// updated to link against the latest version of go-plugin that supports
// this. We fall back to the previous behavior of just not syncing anything.
if status.Code(err) == codes.Unavailable || status.Code(err) == codes.Unimplemented {
log.Warn("stdio service not available, stdout/stderr syncing unavailable")
stdioClient = nil
err = nil
}
if err != nil {
return nil, err
}
return &grpcStdioClient{
log: log,
stdioClient: stdioClient,
}, nil
}
// Run starts the loop that receives stdio data and writes it to the given
// writers. This blocks and should be run in a goroutine.
func (c *grpcStdioClient) Run(stdout, stderr io.Writer) {
// This will be nil if stdio is not supported by the plugin
if c.stdioClient == nil {
c.log.Warn("stdio service unavailable, run will do nothing")
return
}
for {
c.log.Trace("waiting for stdio data")
data, err := c.stdioClient.Recv()
if err != nil {
if err == io.EOF ||
status.Code(err) == codes.Unavailable ||
status.Code(err) == codes.Canceled ||
status.Code(err) == codes.Unimplemented ||
err == context.Canceled {
c.log.Debug("received EOF, stopping recv loop", "err", err)
return
}
c.log.Error("error receiving data", "err", err)
return
}
// Determine our output writer based on channel
var w io.Writer
switch data.Channel {
case plugin.StdioData_STDOUT:
w = stdout
case plugin.StdioData_STDERR:
w = stderr
default:
c.log.Warn("unknown channel, dropping", "channel", data.Channel)
continue
}
// Write! In the event of an error we just continue.
if c.log.IsTrace() {
c.log.Trace("received data", "channel", data.Channel.String(), "len", len(data.Data))
}
if _, err := io.Copy(w, bytes.NewReader(data.Data)); err != nil {
c.log.Error("failed to copy all bytes", "err", err)
}
}
}
// copyChan copies an io.Reader into a channel.
func copyChan(log hclog.Logger, dst chan<- []byte, src io.Reader) {
bufsrc := bufio.NewReader(src)
for {
// Make our data buffer. We allocate a new one per loop iteration
// so that we can send it over the channel.
var data [1024]byte
// Read the data, this will block until data is available
n, err := bufsrc.Read(data[:])
// We have to check if we have data BEFORE err != nil. The bufio
// docs guarantee n == 0 on EOF but it's better to be safe here.
if n > 0 {
// We have data! Send it on the channel. This will block if there
// is no reader on the other side. We expect that go-plugin will
// connect immediately to the stdio server to drain this so we want
// this block to happen for backpressure.
dst <- data[:n]
}
// If we hit EOF we're done copying
if err == io.EOF {
log.Debug("stdio EOF, exiting copy loop")
return
}
// Any other error we just exit the loop. We don't expect there to
// be errors since our use case for this is reading/writing from
// an in-process pipe (os.Pipe).
if err != nil {
log.Warn("error copying stdio data, stopping copy", "err", err)
return
}
}
}


@ -0,0 +1,16 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package cmdrunner
// addrTranslator implements stateless identity functions, as the host and plugin
// run in the same context wrt Unix and network addresses.
type addrTranslator struct{}
func (*addrTranslator) PluginToHost(pluginNet, pluginAddr string) (string, string, error) {
return pluginNet, pluginAddr, nil
}
func (*addrTranslator) HostToPlugin(hostNet, hostAddr string) (string, string, error) {
return hostNet, hostAddr, nil
}


@ -0,0 +1,63 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package cmdrunner
import (
"context"
"fmt"
"net"
"os"
"github.com/hashicorp/go-plugin/runner"
)
// ReattachFunc returns a function that allows reattaching to a plugin running
// as a plain process. The process may or may not be a child process.
func ReattachFunc(pid int, addr net.Addr) runner.ReattachFunc {
return func() (runner.AttachedRunner, error) {
p, err := os.FindProcess(pid)
if err != nil {
// On Unix systems, FindProcess never returns an error.
// On Windows, for non-existent pids it returns:
// os.SyscallError - 'OpenProcess: the parameter is incorrect'
return nil, ErrProcessNotFound
}
// Attempt to connect to the addr since on Unix systems FindProcess
// doesn't actually return an error if it can't find the process.
conn, err := net.Dial(addr.Network(), addr.String())
if err != nil {
p.Kill()
return nil, ErrProcessNotFound
}
conn.Close()
return &CmdAttachedRunner{
pid: pid,
process: p,
}, nil
}
}
// CmdAttachedRunner is mostly a subset of CmdRunner, except the Wait function
// does not assume the process is a child of the host process, and so uses a
// different implementation to wait on the process.
type CmdAttachedRunner struct {
pid int
process *os.Process
addrTranslator
}
func (c *CmdAttachedRunner) Wait(_ context.Context) error {
return pidWait(c.pid)
}
func (c *CmdAttachedRunner) Kill(_ context.Context) error {
return c.process.Kill()
}
func (c *CmdAttachedRunner) ID() string {
return fmt.Sprintf("%d", c.pid)
}


@ -0,0 +1,129 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package cmdrunner
import (
"context"
"errors"
"fmt"
"io"
"os"
"os/exec"
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin/runner"
)
var (
_ runner.Runner = (*CmdRunner)(nil)
// ErrProcessNotFound is returned when a client is instantiated to
// reattach to an existing process and it isn't found.
ErrProcessNotFound = errors.New("Reattachment process not found")
)
const unrecognizedRemotePluginMessage = `This usually means
the plugin was not compiled for this architecture,
the plugin is missing dynamic-link libraries necessary to run,
the plugin is not executable by this process due to file permissions, or
the plugin failed to negotiate the initial go-plugin protocol handshake
%s`
// CmdRunner implements the runner.Runner interface. It mostly just passes through
// to exec.Cmd methods.
type CmdRunner struct {
logger hclog.Logger
cmd *exec.Cmd
stdout io.ReadCloser
stderr io.ReadCloser
// Cmd info is persisted early, since the process information will be removed
// after Kill is called.
path string
pid int
addrTranslator
}
// NewCmdRunner returns an implementation of runner.Runner for running a plugin
// as a subprocess. It must be passed a cmd that hasn't yet been started.
func NewCmdRunner(logger hclog.Logger, cmd *exec.Cmd) (*CmdRunner, error) {
stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, err
}
stderr, err := cmd.StderrPipe()
if err != nil {
return nil, err
}
return &CmdRunner{
logger: logger,
cmd: cmd,
stdout: stdout,
stderr: stderr,
path: cmd.Path,
}, nil
}
func (c *CmdRunner) Start(_ context.Context) error {
c.logger.Debug("starting plugin", "path", c.cmd.Path, "args", c.cmd.Args)
err := c.cmd.Start()
if err != nil {
return err
}
c.pid = c.cmd.Process.Pid
c.logger.Debug("plugin started", "path", c.path, "pid", c.pid)
return nil
}
func (c *CmdRunner) Wait(_ context.Context) error {
return c.cmd.Wait()
}
func (c *CmdRunner) Kill(_ context.Context) error {
if c.cmd.Process != nil {
err := c.cmd.Process.Kill()
// Swallow ErrProcessDone, we support calling Kill multiple times.
if !errors.Is(err, os.ErrProcessDone) {
return err
}
return nil
}
return nil
}
func (c *CmdRunner) Stdout() io.ReadCloser {
return c.stdout
}
func (c *CmdRunner) Stderr() io.ReadCloser {
return c.stderr
}
func (c *CmdRunner) Name() string {
return c.path
}
func (c *CmdRunner) ID() string {
return fmt.Sprintf("%d", c.pid)
}
// peTypes is a list of Portable Executable (PE) machine types from https://learn.microsoft.com/en-us/windows/win32/debug/pe-format
// mapped to GOARCH types. It is not comprehensive, and only includes machine types that Go supports.
var peTypes = map[uint16]string{
0x14c: "386",
0x1c0: "arm",
0x6264: "loong64",
0x8664: "amd64",
0xaa64: "arm64",
}
func (c *CmdRunner) Diagnose(_ context.Context) string {
return fmt.Sprintf(unrecognizedRemotePluginMessage, additionalNotesAboutCommand(c.cmd.Path))
}


@ -0,0 +1,70 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
//go:build !windows
// +build !windows
package cmdrunner
import (
"debug/elf"
"debug/macho"
"debug/pe"
"fmt"
"os"
"os/user"
"runtime"
"strconv"
"syscall"
)
// additionalNotesAboutCommand tries to get additional information about a command that might help diagnose
// why it won't run correctly. It runs as a best effort only.
func additionalNotesAboutCommand(path string) string {
notes := ""
stat, err := os.Stat(path)
if err != nil {
return notes
}
notes += "\nAdditional notes about plugin:\n"
notes += fmt.Sprintf(" Path: %s\n", path)
notes += fmt.Sprintf(" Mode: %s\n", stat.Mode())
statT, ok := stat.Sys().(*syscall.Stat_t)
if ok {
currentUsername := "?"
if u, err := user.LookupId(strconv.FormatUint(uint64(os.Getuid()), 10)); err == nil {
currentUsername = u.Username
}
currentGroup := "?"
if g, err := user.LookupGroupId(strconv.FormatUint(uint64(os.Getgid()), 10)); err == nil {
currentGroup = g.Name
}
username := "?"
if u, err := user.LookupId(strconv.FormatUint(uint64(statT.Uid), 10)); err == nil {
username = u.Username
}
group := "?"
if g, err := user.LookupGroupId(strconv.FormatUint(uint64(statT.Gid), 10)); err == nil {
group = g.Name
}
notes += fmt.Sprintf(" Owner: %d [%s] (current: %d [%s])\n", statT.Uid, username, os.Getuid(), currentUsername)
notes += fmt.Sprintf(" Group: %d [%s] (current: %d [%s])\n", statT.Gid, group, os.Getgid(), currentGroup)
}
if elfFile, err := elf.Open(path); err == nil {
defer elfFile.Close()
notes += fmt.Sprintf(" ELF architecture: %s (current architecture: %s)\n", elfFile.Machine, runtime.GOARCH)
} else if machoFile, err := macho.Open(path); err == nil {
defer machoFile.Close()
notes += fmt.Sprintf(" MachO architecture: %s (current architecture: %s)\n", machoFile.Cpu, runtime.GOARCH)
} else if peFile, err := pe.Open(path); err == nil {
defer peFile.Close()
machine, ok := peTypes[peFile.Machine]
if !ok {
machine = "unknown"
}
notes += fmt.Sprintf(" PE architecture: %s (current architecture: %s)\n", machine, runtime.GOARCH)
}
return notes
}


@ -0,0 +1,46 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
//go:build windows
// +build windows
package cmdrunner
import (
"debug/elf"
"debug/macho"
"debug/pe"
"fmt"
"os"
"runtime"
)
// additionalNotesAboutCommand tries to get additional information about a command that might help diagnose
// why it won't run correctly. It runs as a best effort only.
func additionalNotesAboutCommand(path string) string {
notes := ""
stat, err := os.Stat(path)
if err != nil {
return notes
}
notes += "\nAdditional notes about plugin:\n"
notes += fmt.Sprintf(" Path: %s\n", path)
notes += fmt.Sprintf(" Mode: %s\n", stat.Mode())
if elfFile, err := elf.Open(path); err == nil {
defer elfFile.Close()
notes += fmt.Sprintf(" ELF architecture: %s (current architecture: %s)\n", elfFile.Machine, runtime.GOARCH)
} else if machoFile, err := macho.Open(path); err == nil {
defer machoFile.Close()
notes += fmt.Sprintf(" MachO architecture: %s (current architecture: %s)\n", machoFile.Cpu, runtime.GOARCH)
} else if peFile, err := pe.Open(path); err == nil {
defer peFile.Close()
machine, ok := peTypes[peFile.Machine]
if !ok {
machine = "unknown"
}
notes += fmt.Sprintf(" PE architecture: %s (current architecture: %s)\n", machine, runtime.GOARCH)
}
return notes
}


@ -0,0 +1,25 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package cmdrunner
import "time"
// pidAlive checks whether a pid is alive.
func pidAlive(pid int) bool {
return _pidAlive(pid)
}
// pidWait blocks for a process to exit.
func pidWait(pid int) error {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for range ticker.C {
if !pidAlive(pid) {
break
}
}
return nil
}


@ -0,0 +1,23 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
//go:build !windows
// +build !windows
package cmdrunner
import (
"os"
"syscall"
)
// _pidAlive tests whether a process is alive or not by sending it Signal 0,
// since Go otherwise has no way to test this.
func _pidAlive(pid int) bool {
proc, err := os.FindProcess(pid)
if err == nil {
err = proc.Signal(syscall.Signal(0))
}
return err == nil
}


@ -0,0 +1,33 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package cmdrunner
import (
"syscall"
)
const (
// Weird name but matches the MSDN docs
exit_STILL_ACTIVE = 259
processDesiredAccess = syscall.STANDARD_RIGHTS_READ |
syscall.PROCESS_QUERY_INFORMATION |
syscall.SYNCHRONIZE
)
// _pidAlive tests whether a process is alive or not
func _pidAlive(pid int) bool {
h, err := syscall.OpenProcess(processDesiredAccess, false, uint32(pid))
if err != nil {
return false
}
defer syscall.CloseHandle(h)
var ec uint32
if e := syscall.GetExitCodeProcess(h, &ec); e != nil {
return false
}
return ec == exit_STILL_ACTIVE
}


@ -0,0 +1,51 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package grpcmux
import (
"io"
"net"
"github.com/hashicorp/yamux"
)
var _ net.Listener = (*blockedClientListener)(nil)
// blockedClientListener accepts connections for a specific gRPC broker stream
// ID on the client (host) side of the connection.
type blockedClientListener struct {
session *yamux.Session
waitCh chan struct{}
doneCh <-chan struct{}
}
func newBlockedClientListener(session *yamux.Session, doneCh <-chan struct{}) *blockedClientListener {
return &blockedClientListener{
waitCh: make(chan struct{}, 1),
doneCh: doneCh,
session: session,
}
}
func (b *blockedClientListener) Accept() (net.Conn, error) {
select {
case <-b.waitCh:
return b.session.Accept()
case <-b.doneCh:
return nil, io.EOF
}
}
func (b *blockedClientListener) Addr() net.Addr {
return b.session.Addr()
}
func (b *blockedClientListener) Close() error {
// We don't close the session, the client muxer is responsible for that.
return nil
}
func (b *blockedClientListener) unblock() {
b.waitCh <- struct{}{}
}


@ -0,0 +1,49 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package grpcmux
import (
"io"
"net"
)
var _ net.Listener = (*blockedServerListener)(nil)
// blockedServerListener accepts connections for a specific gRPC broker stream
// ID on the server (plugin) side of the connection.
type blockedServerListener struct {
addr net.Addr
acceptCh chan acceptResult
doneCh <-chan struct{}
}
type acceptResult struct {
conn net.Conn
err error
}
func newBlockedServerListener(addr net.Addr, doneCh <-chan struct{}) *blockedServerListener {
return &blockedServerListener{
addr: addr,
acceptCh: make(chan acceptResult),
doneCh: doneCh,
}
}
func (b *blockedServerListener) Accept() (net.Conn, error) {
select {
case accept := <-b.acceptCh:
return accept.conn, accept.err
case <-b.doneCh:
return nil, io.EOF
}
}
func (b *blockedServerListener) Addr() net.Addr {
return b.addr
}
func (b *blockedServerListener) Close() error {
return nil
}


@ -0,0 +1,105 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package grpcmux
import (
"fmt"
"net"
"sync"
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/yamux"
)
var _ GRPCMuxer = (*GRPCClientMuxer)(nil)
// GRPCClientMuxer implements the client (host) side of the gRPC broker's
// GRPCMuxer interface for multiplexing multiple gRPC broker connections over
// a single net.Conn.
//
// The client dials the initial net.Conn eagerly, and creates a yamux.Session
// as the implementation for multiplexing any additional connections.
//
// Each net.Listener returned from Listener will block until the client receives
// a knock that matches its gRPC broker stream ID. There is no default listener
// on the client, as it is a client for the gRPC broker's control services. (See
// GRPCServerMuxer for more details).
type GRPCClientMuxer struct {
logger hclog.Logger
session *yamux.Session
acceptMutex sync.Mutex
acceptListeners map[uint32]*blockedClientListener
}
func NewGRPCClientMuxer(logger hclog.Logger, addr net.Addr) (*GRPCClientMuxer, error) {
// Eagerly establish the underlying connection as early as possible.
logger.Debug("making new client mux initial connection", "addr", addr)
conn, err := net.Dial(addr.Network(), addr.String())
if err != nil {
return nil, err
}
if tcpConn, ok := conn.(*net.TCPConn); ok {
// Make sure to set keep alive so that the connection doesn't die
_ = tcpConn.SetKeepAlive(true)
}
cfg := yamux.DefaultConfig()
cfg.Logger = logger.Named("yamux").StandardLogger(&hclog.StandardLoggerOptions{
InferLevels: true,
})
cfg.LogOutput = nil
sess, err := yamux.Client(conn, cfg)
if err != nil {
return nil, err
}
logger.Debug("client muxer connected", "addr", addr)
m := &GRPCClientMuxer{
logger: logger,
session: sess,
acceptListeners: make(map[uint32]*blockedClientListener),
}
return m, nil
}
func (m *GRPCClientMuxer) Enabled() bool {
return m != nil
}
func (m *GRPCClientMuxer) Listener(id uint32, doneCh <-chan struct{}) (net.Listener, error) {
ln := newBlockedClientListener(m.session, doneCh)
m.acceptMutex.Lock()
m.acceptListeners[id] = ln
m.acceptMutex.Unlock()
return ln, nil
}
func (m *GRPCClientMuxer) AcceptKnock(id uint32) error {
m.acceptMutex.Lock()
defer m.acceptMutex.Unlock()
ln, ok := m.acceptListeners[id]
if !ok {
return fmt.Errorf("no listener for id %d", id)
}
ln.unblock()
return nil
}
func (m *GRPCClientMuxer) Dial() (net.Conn, error) {
stream, err := m.session.Open()
if err != nil {
return nil, fmt.Errorf("error dialling new client stream: %w", err)
}
return stream, nil
}
func (m *GRPCClientMuxer) Close() error {
return m.session.Close()
}


@ -0,0 +1,41 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package grpcmux
import (
"net"
)
// GRPCMuxer enables multiple implementations of net.Listener to accept
// connections over a single "main" multiplexed net.Conn, and dial multiple
// client connections over the same multiplexed net.Conn.
//
// The first multiplexed connection is used to serve the gRPC broker's own
// control services: plugin.GRPCBroker, plugin.GRPCController, plugin.GRPCStdio.
//
// Clients must "knock" before dialling, to tell the server side that the
// next net.Conn should be accepted onto a specific stream ID. The knock is a
// bidirectional streaming message on the plugin.GRPCBroker service.
type GRPCMuxer interface {
// Enabled determines whether multiplexing should be used. It saves users
// of the interface from having to compare an interface with nil, which
// is a bit awkward to do correctly.
Enabled() bool
// Listener returns a multiplexed listener that will wait until AcceptKnock
// is called with a matching ID before its Accept function returns.
Listener(id uint32, doneCh <-chan struct{}) (net.Listener, error)
// AcceptKnock unblocks the listener with the matching ID, and returns an
// error if it hasn't been created yet.
AcceptKnock(id uint32) error
// Dial makes a new multiplexed client connection. To dial a specific ID,
// a knock must be sent first.
Dial() (net.Conn, error)
// Close closes connections and releases any resources associated with the
// muxer.
Close() error
}
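// A rough flow sketch (illustrative only, not from the upstream docs; the
// stream ID and doneCh are placeholders):
//
//	// Accepting side: create a listener for stream ID 7, then unblock it
//	// once a knock for ID 7 arrives on the broker stream.
//	ln, _ := muxer.Listener(7, doneCh)
//	_ = muxer.AcceptKnock(7)
//	conn, _ := ln.Accept() // receives the next multiplexed connection
//
//	// Dialling side: send a knock for ID 7 over the broker first, then:
//	clientConn, _ := muxer.Dial()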


@ -0,0 +1,190 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package grpcmux
import (
"errors"
"fmt"
"net"
"sync"
"time"
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/yamux"
)
var _ GRPCMuxer = (*GRPCServerMuxer)(nil)
var _ net.Listener = (*GRPCServerMuxer)(nil)
// GRPCServerMuxer implements the server (plugin) side of the gRPC broker's
// GRPCMuxer interface for multiplexing multiple gRPC broker connections over
// a single net.Conn.
//
// The server side needs a listener to serve the gRPC broker's control services,
// which includes the service we will receive knocks on. That means we always
// accept the first connection onto a "default" main listener, and if we accept
// any further connections without receiving a knock first, they are also given
// to the default listener.
//
// When creating additional multiplexed listeners for specific stream IDs, we
// can't control the order in which gRPC servers will call Accept() on each
// listener, but we do need to control which gRPC server accepts which connection.
// As such, each multiplexed listener blocks waiting on a channel. It will be
// unblocked when a knock is received for the matching stream ID.
type GRPCServerMuxer struct {
addr net.Addr
logger hclog.Logger
sessionErrCh chan error
sess *yamux.Session
knockCh chan uint32
acceptMutex sync.Mutex
acceptChannels map[uint32]chan acceptResult
}
func NewGRPCServerMuxer(logger hclog.Logger, ln net.Listener) *GRPCServerMuxer {
m := &GRPCServerMuxer{
addr: ln.Addr(),
logger: logger,
sessionErrCh: make(chan error),
knockCh: make(chan uint32, 1),
acceptChannels: make(map[uint32]chan acceptResult),
}
go m.acceptSession(ln)
return m
}
// acceptSession accepts the initial connection on the given listener and
// establishes the yamux session used for all further multiplexed connections.
func (m *GRPCServerMuxer) acceptSession(ln net.Listener) {
defer close(m.sessionErrCh)
m.logger.Debug("accepting initial connection", "addr", m.addr)
conn, err := ln.Accept()
if err != nil {
m.sessionErrCh <- err
return
}
m.logger.Debug("initial server connection accepted", "addr", m.addr)
cfg := yamux.DefaultConfig()
cfg.Logger = m.logger.Named("yamux").StandardLogger(&hclog.StandardLoggerOptions{
InferLevels: true,
})
cfg.LogOutput = nil
m.sess, err = yamux.Server(conn, cfg)
if err != nil {
m.sessionErrCh <- err
return
}
}
func (m *GRPCServerMuxer) session() (*yamux.Session, error) {
select {
case err := <-m.sessionErrCh:
if err != nil {
return nil, err
}
case <-time.After(5 * time.Second):
return nil, errors.New("timed out waiting for connection to be established")
}
// Should never happen.
if m.sess == nil {
return nil, errors.New("no connection established and no error received")
}
return m.sess, nil
}
// Accept accepts all incoming connections and routes them to the correct
// stream ID based on the most recent knock received.
func (m *GRPCServerMuxer) Accept() (net.Conn, error) {
session, err := m.session()
if err != nil {
return nil, fmt.Errorf("error establishing yamux session: %w", err)
}
for {
conn, acceptErr := session.Accept()
select {
case id := <-m.knockCh:
m.acceptMutex.Lock()
acceptCh, ok := m.acceptChannels[id]
m.acceptMutex.Unlock()
if !ok {
if conn != nil {
_ = conn.Close()
}
return nil, fmt.Errorf("received knock on ID %d that doesn't have a listener", id)
}
m.logger.Debug("sending conn to brokered listener", "id", id)
acceptCh <- acceptResult{
conn: conn,
err: acceptErr,
}
default:
m.logger.Debug("sending conn to default listener")
return conn, acceptErr
}
}
}
func (m *GRPCServerMuxer) Addr() net.Addr {
return m.addr
}
func (m *GRPCServerMuxer) Close() error {
session, err := m.session()
if err != nil {
return err
}
return session.Close()
}
func (m *GRPCServerMuxer) Enabled() bool {
return m != nil
}
func (m *GRPCServerMuxer) Listener(id uint32, doneCh <-chan struct{}) (net.Listener, error) {
sess, err := m.session()
if err != nil {
return nil, err
}
ln := newBlockedServerListener(sess.Addr(), doneCh)
m.acceptMutex.Lock()
m.acceptChannels[id] = ln.acceptCh
m.acceptMutex.Unlock()
return ln, nil
}
func (m *GRPCServerMuxer) Dial() (net.Conn, error) {
sess, err := m.session()
if err != nil {
return nil, err
}
stream, err := sess.OpenStream()
if err != nil {
return nil, fmt.Errorf("error dialling new server stream: %w", err)
}
return stream, nil
}
func (m *GRPCServerMuxer) AcceptKnock(id uint32) error {
m.knockCh <- id
return nil
}
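A short usage sketch helps tie the pieces of GRPCServerMuxer together. It is illustrative only and assumes it sits in the same grpcmux package; exampleServerMuxerUsage is a hypothetical helper, not part of the library, and error handling is kept minimal.

package grpcmux

import (
    "net"

    "github.com/hashicorp/go-hclog"
)

// exampleServerMuxerUsage shows how the server muxer is constructed and how a
// brokered listener is unblocked by a knock. It blocks until a client has
// connected over ln, knocked for stream ID 1, and dialled again.
func exampleServerMuxerUsage(logger hclog.Logger) (net.Conn, error) {
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        return nil, err
    }
    muxer := NewGRPCServerMuxer(logger, ln)

    // The muxer itself is the "default" net.Listener; a gRPC server would
    // normally drive it by calling Accept in a loop (e.g. grpc.Server.Serve).
    go func() {
        for {
            conn, err := muxer.Accept()
            if err != nil {
                return
            }
            _ = conn // handed to the default (control) gRPC server in real use
        }
    }()

    // Register a listener for stream ID 1, then unblock it with a knock; the
    // accept loop above routes the next multiplexed connection here.
    brokered, err := muxer.Listener(1, make(chan struct{}))
    if err != nil {
        return nil, err
    }
    if err := muxer.AcceptKnock(1); err != nil {
        return nil, err
    }
    return brokered.Accept()
}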

View File

@ -0,0 +1,264 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.31.0
// protoc (unknown)
// source: internal/plugin/grpc_broker.proto
package plugin
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type ConnInfo struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ServiceId uint32 `protobuf:"varint,1,opt,name=service_id,json=serviceId,proto3" json:"service_id,omitempty"`
Network string `protobuf:"bytes,2,opt,name=network,proto3" json:"network,omitempty"`
Address string `protobuf:"bytes,3,opt,name=address,proto3" json:"address,omitempty"`
Knock *ConnInfo_Knock `protobuf:"bytes,4,opt,name=knock,proto3" json:"knock,omitempty"`
}
func (x *ConnInfo) Reset() {
*x = ConnInfo{}
if protoimpl.UnsafeEnabled {
mi := &file_internal_plugin_grpc_broker_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ConnInfo) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ConnInfo) ProtoMessage() {}
func (x *ConnInfo) ProtoReflect() protoreflect.Message {
mi := &file_internal_plugin_grpc_broker_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ConnInfo.ProtoReflect.Descriptor instead.
func (*ConnInfo) Descriptor() ([]byte, []int) {
return file_internal_plugin_grpc_broker_proto_rawDescGZIP(), []int{0}
}
func (x *ConnInfo) GetServiceId() uint32 {
if x != nil {
return x.ServiceId
}
return 0
}
func (x *ConnInfo) GetNetwork() string {
if x != nil {
return x.Network
}
return ""
}
func (x *ConnInfo) GetAddress() string {
if x != nil {
return x.Address
}
return ""
}
func (x *ConnInfo) GetKnock() *ConnInfo_Knock {
if x != nil {
return x.Knock
}
return nil
}
type ConnInfo_Knock struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Knock bool `protobuf:"varint,1,opt,name=knock,proto3" json:"knock,omitempty"`
Ack bool `protobuf:"varint,2,opt,name=ack,proto3" json:"ack,omitempty"`
Error string `protobuf:"bytes,3,opt,name=error,proto3" json:"error,omitempty"`
}
func (x *ConnInfo_Knock) Reset() {
*x = ConnInfo_Knock{}
if protoimpl.UnsafeEnabled {
mi := &file_internal_plugin_grpc_broker_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ConnInfo_Knock) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ConnInfo_Knock) ProtoMessage() {}
func (x *ConnInfo_Knock) ProtoReflect() protoreflect.Message {
mi := &file_internal_plugin_grpc_broker_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ConnInfo_Knock.ProtoReflect.Descriptor instead.
func (*ConnInfo_Knock) Descriptor() ([]byte, []int) {
return file_internal_plugin_grpc_broker_proto_rawDescGZIP(), []int{0, 0}
}
func (x *ConnInfo_Knock) GetKnock() bool {
if x != nil {
return x.Knock
}
return false
}
func (x *ConnInfo_Knock) GetAck() bool {
if x != nil {
return x.Ack
}
return false
}
func (x *ConnInfo_Knock) GetError() string {
if x != nil {
return x.Error
}
return ""
}
var File_internal_plugin_grpc_broker_proto protoreflect.FileDescriptor
var file_internal_plugin_grpc_broker_proto_rawDesc = []byte{
0x0a, 0x21, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x62, 0x72, 0x6f, 0x6b, 0x65, 0x72, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x12, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x22, 0xd2, 0x01, 0x0a, 0x08,
0x43, 0x6f, 0x6e, 0x6e, 0x49, 0x6e, 0x66, 0x6f, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65, 0x72, 0x76,
0x69, 0x63, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x09, 0x73, 0x65,
0x72, 0x76, 0x69, 0x63, 0x65, 0x49, 0x64, 0x12, 0x18, 0x0a, 0x07, 0x6e, 0x65, 0x74, 0x77, 0x6f,
0x72, 0x6b, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72,
0x6b, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x03, 0x20, 0x01,
0x28, 0x09, 0x52, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x2c, 0x0a, 0x05, 0x6b,
0x6e, 0x6f, 0x63, 0x6b, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x16, 0x2e, 0x70, 0x6c, 0x75,
0x67, 0x69, 0x6e, 0x2e, 0x43, 0x6f, 0x6e, 0x6e, 0x49, 0x6e, 0x66, 0x6f, 0x2e, 0x4b, 0x6e, 0x6f,
0x63, 0x6b, 0x52, 0x05, 0x6b, 0x6e, 0x6f, 0x63, 0x6b, 0x1a, 0x45, 0x0a, 0x05, 0x4b, 0x6e, 0x6f,
0x63, 0x6b, 0x12, 0x14, 0x0a, 0x05, 0x6b, 0x6e, 0x6f, 0x63, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28,
0x08, 0x52, 0x05, 0x6b, 0x6e, 0x6f, 0x63, 0x6b, 0x12, 0x10, 0x0a, 0x03, 0x61, 0x63, 0x6b, 0x18,
0x02, 0x20, 0x01, 0x28, 0x08, 0x52, 0x03, 0x61, 0x63, 0x6b, 0x12, 0x14, 0x0a, 0x05, 0x65, 0x72,
0x72, 0x6f, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x65, 0x72, 0x72, 0x6f, 0x72,
0x32, 0x43, 0x0a, 0x0a, 0x47, 0x52, 0x50, 0x43, 0x42, 0x72, 0x6f, 0x6b, 0x65, 0x72, 0x12, 0x35,
0x0a, 0x0b, 0x53, 0x74, 0x61, 0x72, 0x74, 0x53, 0x74, 0x72, 0x65, 0x61, 0x6d, 0x12, 0x10, 0x2e,
0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e, 0x43, 0x6f, 0x6e, 0x6e, 0x49, 0x6e, 0x66, 0x6f, 0x1a,
0x10, 0x2e, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e, 0x43, 0x6f, 0x6e, 0x6e, 0x49, 0x6e, 0x66,
0x6f, 0x28, 0x01, 0x30, 0x01, 0x42, 0x0a, 0x5a, 0x08, 0x2e, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_internal_plugin_grpc_broker_proto_rawDescOnce sync.Once
file_internal_plugin_grpc_broker_proto_rawDescData = file_internal_plugin_grpc_broker_proto_rawDesc
)
func file_internal_plugin_grpc_broker_proto_rawDescGZIP() []byte {
file_internal_plugin_grpc_broker_proto_rawDescOnce.Do(func() {
file_internal_plugin_grpc_broker_proto_rawDescData = protoimpl.X.CompressGZIP(file_internal_plugin_grpc_broker_proto_rawDescData)
})
return file_internal_plugin_grpc_broker_proto_rawDescData
}
var file_internal_plugin_grpc_broker_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_internal_plugin_grpc_broker_proto_goTypes = []interface{}{
(*ConnInfo)(nil), // 0: plugin.ConnInfo
(*ConnInfo_Knock)(nil), // 1: plugin.ConnInfo.Knock
}
var file_internal_plugin_grpc_broker_proto_depIdxs = []int32{
1, // 0: plugin.ConnInfo.knock:type_name -> plugin.ConnInfo.Knock
0, // 1: plugin.GRPCBroker.StartStream:input_type -> plugin.ConnInfo
0, // 2: plugin.GRPCBroker.StartStream:output_type -> plugin.ConnInfo
2, // [2:3] is the sub-list for method output_type
1, // [1:2] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_internal_plugin_grpc_broker_proto_init() }
func file_internal_plugin_grpc_broker_proto_init() {
if File_internal_plugin_grpc_broker_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_internal_plugin_grpc_broker_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ConnInfo); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_internal_plugin_grpc_broker_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ConnInfo_Knock); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_internal_plugin_grpc_broker_proto_rawDesc,
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_internal_plugin_grpc_broker_proto_goTypes,
DependencyIndexes: file_internal_plugin_grpc_broker_proto_depIdxs,
MessageInfos: file_internal_plugin_grpc_broker_proto_msgTypes,
}.Build()
File_internal_plugin_grpc_broker_proto = out.File
file_internal_plugin_grpc_broker_proto_rawDesc = nil
file_internal_plugin_grpc_broker_proto_goTypes = nil
file_internal_plugin_grpc_broker_proto_depIdxs = nil
}

View File

@ -0,0 +1,22 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
syntax = "proto3";
package plugin;
option go_package = "./plugin";
message ConnInfo {
uint32 service_id = 1;
string network = 2;
string address = 3;
message Knock {
bool knock = 1;
bool ack = 2;
string error = 3;
}
Knock knock = 4;
}
service GRPCBroker {
rpc StartStream(stream ConnInfo) returns (stream ConnInfo);
}

View File

@ -0,0 +1,142 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc (unknown)
// source: internal/plugin/grpc_broker.proto
package plugin
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
GRPCBroker_StartStream_FullMethodName = "/plugin.GRPCBroker/StartStream"
)
// GRPCBrokerClient is the client API for GRPCBroker service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type GRPCBrokerClient interface {
StartStream(ctx context.Context, opts ...grpc.CallOption) (GRPCBroker_StartStreamClient, error)
}
type gRPCBrokerClient struct {
cc grpc.ClientConnInterface
}
func NewGRPCBrokerClient(cc grpc.ClientConnInterface) GRPCBrokerClient {
return &gRPCBrokerClient{cc}
}
func (c *gRPCBrokerClient) StartStream(ctx context.Context, opts ...grpc.CallOption) (GRPCBroker_StartStreamClient, error) {
stream, err := c.cc.NewStream(ctx, &GRPCBroker_ServiceDesc.Streams[0], GRPCBroker_StartStream_FullMethodName, opts...)
if err != nil {
return nil, err
}
x := &gRPCBrokerStartStreamClient{stream}
return x, nil
}
type GRPCBroker_StartStreamClient interface {
Send(*ConnInfo) error
Recv() (*ConnInfo, error)
grpc.ClientStream
}
type gRPCBrokerStartStreamClient struct {
grpc.ClientStream
}
func (x *gRPCBrokerStartStreamClient) Send(m *ConnInfo) error {
return x.ClientStream.SendMsg(m)
}
func (x *gRPCBrokerStartStreamClient) Recv() (*ConnInfo, error) {
m := new(ConnInfo)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// GRPCBrokerServer is the server API for GRPCBroker service.
// All implementations should embed UnimplementedGRPCBrokerServer
// for forward compatibility
type GRPCBrokerServer interface {
StartStream(GRPCBroker_StartStreamServer) error
}
// UnimplementedGRPCBrokerServer should be embedded to have forward compatible implementations.
type UnimplementedGRPCBrokerServer struct {
}
func (UnimplementedGRPCBrokerServer) StartStream(GRPCBroker_StartStreamServer) error {
return status.Errorf(codes.Unimplemented, "method StartStream not implemented")
}
// UnsafeGRPCBrokerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to GRPCBrokerServer will
// result in compilation errors.
type UnsafeGRPCBrokerServer interface {
mustEmbedUnimplementedGRPCBrokerServer()
}
func RegisterGRPCBrokerServer(s grpc.ServiceRegistrar, srv GRPCBrokerServer) {
s.RegisterService(&GRPCBroker_ServiceDesc, srv)
}
func _GRPCBroker_StartStream_Handler(srv interface{}, stream grpc.ServerStream) error {
return srv.(GRPCBrokerServer).StartStream(&gRPCBrokerStartStreamServer{stream})
}
type GRPCBroker_StartStreamServer interface {
Send(*ConnInfo) error
Recv() (*ConnInfo, error)
grpc.ServerStream
}
type gRPCBrokerStartStreamServer struct {
grpc.ServerStream
}
func (x *gRPCBrokerStartStreamServer) Send(m *ConnInfo) error {
return x.ServerStream.SendMsg(m)
}
func (x *gRPCBrokerStartStreamServer) Recv() (*ConnInfo, error) {
m := new(ConnInfo)
if err := x.ServerStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// GRPCBroker_ServiceDesc is the grpc.ServiceDesc for GRPCBroker service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var GRPCBroker_ServiceDesc = grpc.ServiceDesc{
ServiceName: "plugin.GRPCBroker",
HandlerType: (*GRPCBrokerServer)(nil),
Methods: []grpc.MethodDesc{},
Streams: []grpc.StreamDesc{
{
StreamName: "StartStream",
Handler: _GRPCBroker_StartStream_Handler,
ServerStreams: true,
ClientStreams: true,
},
},
Metadata: "internal/plugin/grpc_broker.proto",
}
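The generated client above is what actually carries knocks between processes. The following sketch is illustrative only: knockOverBroker is a hypothetical helper, not part of the generated code, and it assumes the server answers each knock on the same stream with an Ack, as the ConnInfo_Knock fields suggest.

package plugin

import (
    "context"
    "fmt"
)

// knockOverBroker sends a knock for the given stream ID over the
// bidirectional StartStream RPC and waits for the acknowledgement.
func knockOverBroker(ctx context.Context, client GRPCBrokerClient, id uint32) error {
    stream, err := client.StartStream(ctx)
    if err != nil {
        return err
    }
    defer stream.CloseSend()

    // Ask the server to route its next accepted connection to stream ID id.
    if err := stream.Send(&ConnInfo{
        ServiceId: id,
        Knock:     &ConnInfo_Knock{Knock: true},
    }); err != nil {
        return err
    }

    // The server replies on the same stream with Ack (or an error message).
    resp, err := stream.Recv()
    if err != nil {
        return err
    }
    if !resp.GetKnock().GetAck() {
        return fmt.Errorf("knock not acknowledged: %s", resp.GetKnock().GetError())
    }
    return nil
}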

View File

@ -0,0 +1,141 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.31.0
// protoc (unknown)
// source: internal/plugin/grpc_controller.proto
package plugin
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Empty struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *Empty) Reset() {
*x = Empty{}
if protoimpl.UnsafeEnabled {
mi := &file_internal_plugin_grpc_controller_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Empty) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Empty) ProtoMessage() {}
func (x *Empty) ProtoReflect() protoreflect.Message {
mi := &file_internal_plugin_grpc_controller_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Empty.ProtoReflect.Descriptor instead.
func (*Empty) Descriptor() ([]byte, []int) {
return file_internal_plugin_grpc_controller_proto_rawDescGZIP(), []int{0}
}
var File_internal_plugin_grpc_controller_proto protoreflect.FileDescriptor
var file_internal_plugin_grpc_controller_proto_rawDesc = []byte{
0x0a, 0x25, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x63, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65,
0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x22,
0x07, 0x0a, 0x05, 0x45, 0x6d, 0x70, 0x74, 0x79, 0x32, 0x3a, 0x0a, 0x0e, 0x47, 0x52, 0x50, 0x43,
0x43, 0x6f, 0x6e, 0x74, 0x72, 0x6f, 0x6c, 0x6c, 0x65, 0x72, 0x12, 0x28, 0x0a, 0x08, 0x53, 0x68,
0x75, 0x74, 0x64, 0x6f, 0x77, 0x6e, 0x12, 0x0d, 0x2e, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e,
0x45, 0x6d, 0x70, 0x74, 0x79, 0x1a, 0x0d, 0x2e, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e, 0x45,
0x6d, 0x70, 0x74, 0x79, 0x42, 0x0a, 0x5a, 0x08, 0x2e, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e,
0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_internal_plugin_grpc_controller_proto_rawDescOnce sync.Once
file_internal_plugin_grpc_controller_proto_rawDescData = file_internal_plugin_grpc_controller_proto_rawDesc
)
func file_internal_plugin_grpc_controller_proto_rawDescGZIP() []byte {
file_internal_plugin_grpc_controller_proto_rawDescOnce.Do(func() {
file_internal_plugin_grpc_controller_proto_rawDescData = protoimpl.X.CompressGZIP(file_internal_plugin_grpc_controller_proto_rawDescData)
})
return file_internal_plugin_grpc_controller_proto_rawDescData
}
var file_internal_plugin_grpc_controller_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_internal_plugin_grpc_controller_proto_goTypes = []interface{}{
(*Empty)(nil), // 0: plugin.Empty
}
var file_internal_plugin_grpc_controller_proto_depIdxs = []int32{
0, // 0: plugin.GRPCController.Shutdown:input_type -> plugin.Empty
0, // 1: plugin.GRPCController.Shutdown:output_type -> plugin.Empty
1, // [1:2] is the sub-list for method output_type
0, // [0:1] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_internal_plugin_grpc_controller_proto_init() }
func file_internal_plugin_grpc_controller_proto_init() {
if File_internal_plugin_grpc_controller_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_internal_plugin_grpc_controller_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Empty); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_internal_plugin_grpc_controller_proto_rawDesc,
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_internal_plugin_grpc_controller_proto_goTypes,
DependencyIndexes: file_internal_plugin_grpc_controller_proto_depIdxs,
MessageInfos: file_internal_plugin_grpc_controller_proto_msgTypes,
}.Build()
File_internal_plugin_grpc_controller_proto = out.File
file_internal_plugin_grpc_controller_proto_rawDesc = nil
file_internal_plugin_grpc_controller_proto_goTypes = nil
file_internal_plugin_grpc_controller_proto_depIdxs = nil
}

View File

@ -0,0 +1,14 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
syntax = "proto3";
package plugin;
option go_package = "./plugin";
message Empty {
}
// The GRPCController is responsible for telling the plugin server to shutdown.
service GRPCController {
rpc Shutdown(Empty) returns (Empty);
}

View File

@ -0,0 +1,110 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc (unknown)
// source: internal/plugin/grpc_controller.proto
package plugin
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
GRPCController_Shutdown_FullMethodName = "/plugin.GRPCController/Shutdown"
)
// GRPCControllerClient is the client API for GRPCController service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type GRPCControllerClient interface {
Shutdown(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error)
}
type gRPCControllerClient struct {
cc grpc.ClientConnInterface
}
func NewGRPCControllerClient(cc grpc.ClientConnInterface) GRPCControllerClient {
return &gRPCControllerClient{cc}
}
func (c *gRPCControllerClient) Shutdown(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*Empty, error) {
out := new(Empty)
err := c.cc.Invoke(ctx, GRPCController_Shutdown_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// GRPCControllerServer is the server API for GRPCController service.
// All implementations should embed UnimplementedGRPCControllerServer
// for forward compatibility
type GRPCControllerServer interface {
Shutdown(context.Context, *Empty) (*Empty, error)
}
// UnimplementedGRPCControllerServer should be embedded to have forward compatible implementations.
type UnimplementedGRPCControllerServer struct {
}
func (UnimplementedGRPCControllerServer) Shutdown(context.Context, *Empty) (*Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method Shutdown not implemented")
}
// UnsafeGRPCControllerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to GRPCControllerServer will
// result in compilation errors.
type UnsafeGRPCControllerServer interface {
mustEmbedUnimplementedGRPCControllerServer()
}
func RegisterGRPCControllerServer(s grpc.ServiceRegistrar, srv GRPCControllerServer) {
s.RegisterService(&GRPCController_ServiceDesc, srv)
}
func _GRPCController_Shutdown_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(Empty)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(GRPCControllerServer).Shutdown(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: GRPCController_Shutdown_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(GRPCControllerServer).Shutdown(ctx, req.(*Empty))
}
return interceptor(ctx, in, info, handler)
}
// GRPCController_ServiceDesc is the grpc.ServiceDesc for GRPCController service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var GRPCController_ServiceDesc = grpc.ServiceDesc{
ServiceName: "plugin.GRPCController",
HandlerType: (*GRPCControllerServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "Shutdown",
Handler: _GRPCController_Shutdown_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "internal/plugin/grpc_controller.proto",
}
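Calling the controller from the host side is a one-liner once a client connection exists. This is an illustrative sketch; requestPluginShutdown is a hypothetical helper rather than part of the generated code.

package plugin

import (
    "context"

    grpc "google.golang.org/grpc"
)

// requestPluginShutdown asks the plugin process to exit by invoking the
// Shutdown RPC with an empty request on an existing client connection.
func requestPluginShutdown(ctx context.Context, cc grpc.ClientConnInterface) error {
    client := NewGRPCControllerClient(cc)
    _, err := client.Shutdown(ctx, &Empty{})
    return err
}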

View File

@ -0,0 +1,225 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.31.0
// protoc (unknown)
// source: internal/plugin/grpc_stdio.proto
package plugin
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
emptypb "google.golang.org/protobuf/types/known/emptypb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type StdioData_Channel int32
const (
StdioData_INVALID StdioData_Channel = 0
StdioData_STDOUT StdioData_Channel = 1
StdioData_STDERR StdioData_Channel = 2
)
// Enum value maps for StdioData_Channel.
var (
StdioData_Channel_name = map[int32]string{
0: "INVALID",
1: "STDOUT",
2: "STDERR",
}
StdioData_Channel_value = map[string]int32{
"INVALID": 0,
"STDOUT": 1,
"STDERR": 2,
}
)
func (x StdioData_Channel) Enum() *StdioData_Channel {
p := new(StdioData_Channel)
*p = x
return p
}
func (x StdioData_Channel) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (StdioData_Channel) Descriptor() protoreflect.EnumDescriptor {
return file_internal_plugin_grpc_stdio_proto_enumTypes[0].Descriptor()
}
func (StdioData_Channel) Type() protoreflect.EnumType {
return &file_internal_plugin_grpc_stdio_proto_enumTypes[0]
}
func (x StdioData_Channel) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use StdioData_Channel.Descriptor instead.
func (StdioData_Channel) EnumDescriptor() ([]byte, []int) {
return file_internal_plugin_grpc_stdio_proto_rawDescGZIP(), []int{0, 0}
}
// StdioData is a single chunk of stdout or stderr data that is streamed
// from GRPCStdio.
type StdioData struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Channel StdioData_Channel `protobuf:"varint,1,opt,name=channel,proto3,enum=plugin.StdioData_Channel" json:"channel,omitempty"`
Data []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
}
func (x *StdioData) Reset() {
*x = StdioData{}
if protoimpl.UnsafeEnabled {
mi := &file_internal_plugin_grpc_stdio_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *StdioData) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StdioData) ProtoMessage() {}
func (x *StdioData) ProtoReflect() protoreflect.Message {
mi := &file_internal_plugin_grpc_stdio_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StdioData.ProtoReflect.Descriptor instead.
func (*StdioData) Descriptor() ([]byte, []int) {
return file_internal_plugin_grpc_stdio_proto_rawDescGZIP(), []int{0}
}
func (x *StdioData) GetChannel() StdioData_Channel {
if x != nil {
return x.Channel
}
return StdioData_INVALID
}
func (x *StdioData) GetData() []byte {
if x != nil {
return x.Data
}
return nil
}
var File_internal_plugin_grpc_stdio_proto protoreflect.FileDescriptor
var file_internal_plugin_grpc_stdio_proto_rawDesc = []byte{
0x0a, 0x20, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x75, 0x67, 0x69,
0x6e, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x5f, 0x73, 0x74, 0x64, 0x69, 0x6f, 0x2e, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x12, 0x06, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67,
0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74,
0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x84, 0x01, 0x0a, 0x09, 0x53, 0x74, 0x64, 0x69,
0x6f, 0x44, 0x61, 0x74, 0x61, 0x12, 0x33, 0x0a, 0x07, 0x63, 0x68, 0x61, 0x6e, 0x6e, 0x65, 0x6c,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x19, 0x2e, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e,
0x53, 0x74, 0x64, 0x69, 0x6f, 0x44, 0x61, 0x74, 0x61, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x6e, 0x65,
0x6c, 0x52, 0x07, 0x63, 0x68, 0x61, 0x6e, 0x6e, 0x65, 0x6c, 0x12, 0x12, 0x0a, 0x04, 0x64, 0x61,
0x74, 0x61, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x04, 0x64, 0x61, 0x74, 0x61, 0x22, 0x2e,
0x0a, 0x07, 0x43, 0x68, 0x61, 0x6e, 0x6e, 0x65, 0x6c, 0x12, 0x0b, 0x0a, 0x07, 0x49, 0x4e, 0x56,
0x41, 0x4c, 0x49, 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x53, 0x54, 0x44, 0x4f, 0x55, 0x54,
0x10, 0x01, 0x12, 0x0a, 0x0a, 0x06, 0x53, 0x54, 0x44, 0x45, 0x52, 0x52, 0x10, 0x02, 0x32, 0x47,
0x0a, 0x09, 0x47, 0x52, 0x50, 0x43, 0x53, 0x74, 0x64, 0x69, 0x6f, 0x12, 0x3a, 0x0a, 0x0b, 0x53,
0x74, 0x72, 0x65, 0x61, 0x6d, 0x53, 0x74, 0x64, 0x69, 0x6f, 0x12, 0x16, 0x2e, 0x67, 0x6f, 0x6f,
0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x45, 0x6d, 0x70,
0x74, 0x79, 0x1a, 0x11, 0x2e, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x2e, 0x53, 0x74, 0x64, 0x69,
0x6f, 0x44, 0x61, 0x74, 0x61, 0x30, 0x01, 0x42, 0x0a, 0x5a, 0x08, 0x2e, 0x2f, 0x70, 0x6c, 0x75,
0x67, 0x69, 0x6e, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_internal_plugin_grpc_stdio_proto_rawDescOnce sync.Once
file_internal_plugin_grpc_stdio_proto_rawDescData = file_internal_plugin_grpc_stdio_proto_rawDesc
)
func file_internal_plugin_grpc_stdio_proto_rawDescGZIP() []byte {
file_internal_plugin_grpc_stdio_proto_rawDescOnce.Do(func() {
file_internal_plugin_grpc_stdio_proto_rawDescData = protoimpl.X.CompressGZIP(file_internal_plugin_grpc_stdio_proto_rawDescData)
})
return file_internal_plugin_grpc_stdio_proto_rawDescData
}
var file_internal_plugin_grpc_stdio_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_internal_plugin_grpc_stdio_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_internal_plugin_grpc_stdio_proto_goTypes = []interface{}{
(StdioData_Channel)(0), // 0: plugin.StdioData.Channel
(*StdioData)(nil), // 1: plugin.StdioData
(*emptypb.Empty)(nil), // 2: google.protobuf.Empty
}
var file_internal_plugin_grpc_stdio_proto_depIdxs = []int32{
0, // 0: plugin.StdioData.channel:type_name -> plugin.StdioData.Channel
2, // 1: plugin.GRPCStdio.StreamStdio:input_type -> google.protobuf.Empty
1, // 2: plugin.GRPCStdio.StreamStdio:output_type -> plugin.StdioData
2, // [2:3] is the sub-list for method output_type
1, // [1:2] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_internal_plugin_grpc_stdio_proto_init() }
func file_internal_plugin_grpc_stdio_proto_init() {
if File_internal_plugin_grpc_stdio_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_internal_plugin_grpc_stdio_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*StdioData); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_internal_plugin_grpc_stdio_proto_rawDesc,
NumEnums: 1,
NumMessages: 1,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_internal_plugin_grpc_stdio_proto_goTypes,
DependencyIndexes: file_internal_plugin_grpc_stdio_proto_depIdxs,
EnumInfos: file_internal_plugin_grpc_stdio_proto_enumTypes,
MessageInfos: file_internal_plugin_grpc_stdio_proto_msgTypes,
}.Build()
File_internal_plugin_grpc_stdio_proto = out.File
file_internal_plugin_grpc_stdio_proto_rawDesc = nil
file_internal_plugin_grpc_stdio_proto_goTypes = nil
file_internal_plugin_grpc_stdio_proto_depIdxs = nil
}

View File

@ -0,0 +1,33 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
syntax = "proto3";
package plugin;
option go_package = "./plugin";
import "google/protobuf/empty.proto";
// GRPCStdio is a service that is automatically run by the plugin process
// to stream any stdout/err data so that it can be mirrored on the plugin
// host side.
service GRPCStdio {
// StreamStdio returns a stream that contains all the stdout/stderr.
// This RPC endpoint must only be called ONCE. Once stdio data is consumed
// it is not sent again.
//
// Callers should connect early to prevent blocking on the plugin process.
rpc StreamStdio(google.protobuf.Empty) returns (stream StdioData);
}
// StdioData is a single chunk of stdout or stderr data that is streamed
// from GRPCStdio.
message StdioData {
enum Channel {
INVALID = 0;
STDOUT = 1;
STDERR = 2;
}
Channel channel = 1;
bytes data = 2;
}

View File

@ -0,0 +1,148 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc (unknown)
// source: internal/plugin/grpc_stdio.proto
package plugin
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
emptypb "google.golang.org/protobuf/types/known/emptypb"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
GRPCStdio_StreamStdio_FullMethodName = "/plugin.GRPCStdio/StreamStdio"
)
// GRPCStdioClient is the client API for GRPCStdio service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type GRPCStdioClient interface {
// StreamStdio returns a stream that contains all the stdout/stderr.
// This RPC endpoint must only be called ONCE. Once stdio data is consumed
// it is not sent again.
//
// Callers should connect early to prevent blocking on the plugin process.
StreamStdio(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (GRPCStdio_StreamStdioClient, error)
}
type gRPCStdioClient struct {
cc grpc.ClientConnInterface
}
func NewGRPCStdioClient(cc grpc.ClientConnInterface) GRPCStdioClient {
return &gRPCStdioClient{cc}
}
func (c *gRPCStdioClient) StreamStdio(ctx context.Context, in *emptypb.Empty, opts ...grpc.CallOption) (GRPCStdio_StreamStdioClient, error) {
stream, err := c.cc.NewStream(ctx, &GRPCStdio_ServiceDesc.Streams[0], GRPCStdio_StreamStdio_FullMethodName, opts...)
if err != nil {
return nil, err
}
x := &gRPCStdioStreamStdioClient{stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
return x, nil
}
type GRPCStdio_StreamStdioClient interface {
Recv() (*StdioData, error)
grpc.ClientStream
}
type gRPCStdioStreamStdioClient struct {
grpc.ClientStream
}
func (x *gRPCStdioStreamStdioClient) Recv() (*StdioData, error) {
m := new(StdioData)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// GRPCStdioServer is the server API for GRPCStdio service.
// All implementations should embed UnimplementedGRPCStdioServer
// for forward compatibility
type GRPCStdioServer interface {
// StreamStdio returns a stream that contains all the stdout/stderr.
// This RPC endpoint must only be called ONCE. Once stdio data is consumed
// it is not sent again.
//
// Callers should connect early to prevent blocking on the plugin process.
StreamStdio(*emptypb.Empty, GRPCStdio_StreamStdioServer) error
}
// UnimplementedGRPCStdioServer should be embedded to have forward compatible implementations.
type UnimplementedGRPCStdioServer struct {
}
func (UnimplementedGRPCStdioServer) StreamStdio(*emptypb.Empty, GRPCStdio_StreamStdioServer) error {
return status.Errorf(codes.Unimplemented, "method StreamStdio not implemented")
}
// UnsafeGRPCStdioServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to GRPCStdioServer will
// result in compilation errors.
type UnsafeGRPCStdioServer interface {
mustEmbedUnimplementedGRPCStdioServer()
}
func RegisterGRPCStdioServer(s grpc.ServiceRegistrar, srv GRPCStdioServer) {
s.RegisterService(&GRPCStdio_ServiceDesc, srv)
}
func _GRPCStdio_StreamStdio_Handler(srv interface{}, stream grpc.ServerStream) error {
m := new(emptypb.Empty)
if err := stream.RecvMsg(m); err != nil {
return err
}
return srv.(GRPCStdioServer).StreamStdio(m, &gRPCStdioStreamStdioServer{stream})
}
type GRPCStdio_StreamStdioServer interface {
Send(*StdioData) error
grpc.ServerStream
}
type gRPCStdioStreamStdioServer struct {
grpc.ServerStream
}
func (x *gRPCStdioStreamStdioServer) Send(m *StdioData) error {
return x.ServerStream.SendMsg(m)
}
// GRPCStdio_ServiceDesc is the grpc.ServiceDesc for GRPCStdio service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var GRPCStdio_ServiceDesc = grpc.ServiceDesc{
ServiceName: "plugin.GRPCStdio",
HandlerType: (*GRPCStdioServer)(nil),
Methods: []grpc.MethodDesc{},
Streams: []grpc.StreamDesc{
{
StreamName: "StreamStdio",
Handler: _GRPCStdio_StreamStdio_Handler,
ServerStreams: true,
},
},
Metadata: "internal/plugin/grpc_stdio.proto",
}
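As the comments stress, StreamStdio should be called exactly once and early. The sketch below is illustrative only (mirrorStdio is a hypothetical helper) and shows one way a host could drain the stream into its own writers.

package plugin

import (
    "context"
    "io"

    emptypb "google.golang.org/protobuf/types/known/emptypb"
)

// mirrorStdio connects to StreamStdio once and copies each chunk to the
// matching writer on the host side until the stream ends.
func mirrorStdio(ctx context.Context, client GRPCStdioClient, stdout, stderr io.Writer) error {
    stream, err := client.StreamStdio(ctx, &emptypb.Empty{})
    if err != nil {
        return err
    }
    for {
        chunk, err := stream.Recv()
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
        switch chunk.GetChannel() {
        case StdioData_STDOUT:
            _, err = stdout.Write(chunk.GetData())
        case StdioData_STDERR:
            _, err = stderr.Write(chunk.GetData())
        }
        if err != nil {
            return err
        }
    }
}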

76
vendor/github.com/hashicorp/go-plugin/log_entry.go generated vendored Normal file
View File

@ -0,0 +1,76 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"encoding/json"
"time"
)
// logEntry is the JSON payload that gets sent to Stderr from the plugin to the host
type logEntry struct {
Message string `json:"@message"`
Level string `json:"@level"`
Timestamp time.Time `json:"timestamp"`
KVPairs []*logEntryKV `json:"kv_pairs"`
}
// logEntryKV is a key value pair within the Output payload
type logEntryKV struct {
Key string `json:"key"`
Value interface{} `json:"value"`
}
// flattenKVPairs is used to flatten KVPair slice into []interface{}
// for hclog consumption.
func flattenKVPairs(kvs []*logEntryKV) []interface{} {
var result []interface{}
for _, kv := range kvs {
result = append(result, kv.Key)
result = append(result, kv.Value)
}
return result
}
// parseJSON handles parsing JSON output
func parseJSON(input []byte) (*logEntry, error) {
var raw map[string]interface{}
entry := &logEntry{}
err := json.Unmarshal(input, &raw)
if err != nil {
return nil, err
}
// Parse hclog-specific objects
if v, ok := raw["@message"]; ok {
entry.Message = v.(string)
delete(raw, "@message")
}
if v, ok := raw["@level"]; ok {
entry.Level = v.(string)
delete(raw, "@level")
}
if v, ok := raw["@timestamp"]; ok {
t, err := time.Parse("2006-01-02T15:04:05.000000Z07:00", v.(string))
if err != nil {
return nil, err
}
entry.Timestamp = t
delete(raw, "@timestamp")
}
// Parse dynamic KV args from the hclog payload.
for k, v := range raw {
entry.KVPairs = append(entry.KVPairs, &logEntryKV{
Key: k,
Value: v,
})
}
return entry, nil
}
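To see how these pieces fit together, here is an illustrative sketch of decoding one stderr line and replaying it through the host logger. logJSONLine is a hypothetical helper in the same package, not part of the vendored file, and the level mapping is an assumption based on hclog's level names.

package plugin

import (
    "github.com/hashicorp/go-hclog"
)

// logJSONLine decodes a single JSON line read from the plugin's stderr and
// re-emits it through the host's hclog logger.
func logJSONLine(logger hclog.Logger, line []byte) {
    entry, err := parseJSON(line)
    if err != nil {
        // Not hclog-style JSON; surface it as plain output instead.
        logger.Debug("plugin output", "raw", string(line))
        return
    }
    // Re-attach the dynamic key/value pairs the plugin logged.
    args := flattenKVPairs(entry.KVPairs)
    switch hclog.LevelFromString(entry.Level) {
    case hclog.Error:
        logger.Error(entry.Message, args...)
    case hclog.Warn:
        logger.Warn(entry.Message, args...)
    case hclog.Info:
        logger.Info(entry.Message, args...)
    default:
        logger.Debug(entry.Message, args...)
    }
}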

76
vendor/github.com/hashicorp/go-plugin/mtls.go generated vendored Normal file
View File

@ -0,0 +1,76 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"bytes"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"math/big"
"time"
)
// generateCert generates a temporary certificate for plugin authentication. The
// certificate and private key are returned in PEM format.
func generateCert() (cert []byte, privateKey []byte, err error) {
key, err := ecdsa.GenerateKey(elliptic.P521(), rand.Reader)
if err != nil {
return nil, nil, err
}
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
sn, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, nil, err
}
host := "localhost"
template := &x509.Certificate{
Subject: pkix.Name{
CommonName: host,
Organization: []string{"HashiCorp"},
},
DNSNames: []string{host},
ExtKeyUsage: []x509.ExtKeyUsage{
x509.ExtKeyUsageClientAuth,
x509.ExtKeyUsageServerAuth,
},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageKeyAgreement | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
SerialNumber: sn,
NotBefore: time.Now().Add(-30 * time.Second),
NotAfter: time.Now().Add(262980 * time.Hour),
IsCA: true,
}
der, err := x509.CreateCertificate(rand.Reader, template, template, key.Public(), key)
if err != nil {
return nil, nil, err
}
var certOut bytes.Buffer
if err := pem.Encode(&certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
return nil, nil, err
}
keyBytes, err := x509.MarshalECPrivateKey(key)
if err != nil {
return nil, nil, err
}
var keyOut bytes.Buffer
if err := pem.Encode(&keyOut, &pem.Block{Type: "EC PRIVATE KEY", Bytes: keyBytes}); err != nil {
return nil, nil, err
}
cert = certOut.Bytes()
privateKey = keyOut.Bytes()
return cert, privateKey, nil
}
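The PEM pair is typically consumed by crypto/tls on both ends. The sketch below is illustrative only; exampleSelfSignedTLSConfig is a hypothetical helper, not part of the vendored file. It shows the self-signed certificate acting as both the local identity and the only trusted root, which is the general shape of the mutual-TLS setup this helper supports.

package plugin

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
)

// exampleSelfSignedTLSConfig turns the PEM pair from generateCert into a
// tls.Config that both presents the certificate and trusts only it.
func exampleSelfSignedTLSConfig() (*tls.Config, error) {
    certPEM, keyPEM, err := generateCert()
    if err != nil {
        return nil, err
    }
    cert, err := tls.X509KeyPair(certPEM, keyPEM)
    if err != nil {
        return nil, err
    }
    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM(certPEM) {
        return nil, fmt.Errorf("failed to add generated cert to pool")
    }
    return &tls.Config{
        Certificates: []tls.Certificate{cert},
        RootCAs:      pool,
        ClientCAs:    pool,
        MinVersion:   tls.VersionTLS12,
    }, nil
}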

207
vendor/github.com/hashicorp/go-plugin/mux_broker.go generated vendored Normal file
View File

@ -0,0 +1,207 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"encoding/binary"
"fmt"
"log"
"net"
"sync"
"sync/atomic"
"time"
"github.com/hashicorp/yamux"
)
// MuxBroker is responsible for brokering multiplexed connections by unique ID.
//
// It is used by plugins to multiplex multiple RPC connections and data
// streams on top of a single connection between the plugin process and the
// host process.
//
// This allows a plugin to request a channel with a specific ID to connect to
// or accept a connection from, and the broker handles the details of
// holding these channels open while they're being negotiated.
//
// The Plugin interface has access to these for both Server and Client.
// The broker can be used by either (optionally) to reserve and connect to
// new multiplexed streams. This is useful for complex args and return values,
// or anything else you might need a data stream for.
type MuxBroker struct {
nextId uint32
session *yamux.Session
streams map[uint32]*muxBrokerPending
sync.Mutex
}
type muxBrokerPending struct {
ch chan net.Conn
doneCh chan struct{}
}
func newMuxBroker(s *yamux.Session) *MuxBroker {
return &MuxBroker{
session: s,
streams: make(map[uint32]*muxBrokerPending),
}
}
// Accept accepts a connection by ID.
//
// This should not be called multiple times with the same ID at one time.
func (m *MuxBroker) Accept(id uint32) (net.Conn, error) {
var c net.Conn
p := m.getStream(id)
select {
case c = <-p.ch:
close(p.doneCh)
case <-time.After(5 * time.Second):
m.Lock()
defer m.Unlock()
delete(m.streams, id)
return nil, fmt.Errorf("timeout waiting for accept")
}
// Ack our connection
if err := binary.Write(c, binary.LittleEndian, id); err != nil {
c.Close()
return nil, err
}
return c, nil
}
// AcceptAndServe is used to accept a specific stream ID and immediately
// serve an RPC server on that stream ID. This is used to easily serve
// complex arguments.
//
// The served interface is always registered to the "Plugin" name.
func (m *MuxBroker) AcceptAndServe(id uint32, v interface{}) {
conn, err := m.Accept(id)
if err != nil {
log.Printf("[ERR] plugin: plugin acceptAndServe error: %s", err)
return
}
serve(conn, "Plugin", v)
}
// Close closes the connection and all sub-connections.
func (m *MuxBroker) Close() error {
return m.session.Close()
}
// Dial opens a connection by ID.
func (m *MuxBroker) Dial(id uint32) (net.Conn, error) {
// Open the stream
stream, err := m.session.OpenStream()
if err != nil {
return nil, err
}
// Write the stream ID onto the wire.
if err := binary.Write(stream, binary.LittleEndian, id); err != nil {
stream.Close()
return nil, err
}
// Read the ack that we connected. Then we're off!
var ack uint32
if err := binary.Read(stream, binary.LittleEndian, &ack); err != nil {
stream.Close()
return nil, err
}
if ack != id {
stream.Close()
return nil, fmt.Errorf("bad ack: %d (expected %d)", ack, id)
}
return stream, nil
}
// NextId returns a unique ID to use next.
//
// It is possible for very long-running plugin hosts to wrap this value,
// though it would require a very large number of RPC calls. In practice
// we've never seen it happen.
func (m *MuxBroker) NextId() uint32 {
return atomic.AddUint32(&m.nextId, 1)
}
// Run starts the brokering and should be executed in a goroutine, since it
// blocks forever, or until the session closes.
//
// Uses of MuxBroker never need to call this. It is called internally by
// the plugin host/client.
func (m *MuxBroker) Run() {
for {
stream, err := m.session.AcceptStream()
if err != nil {
// Once we receive an error, just exit
break
}
// Read the stream ID from the stream
var id uint32
if err := binary.Read(stream, binary.LittleEndian, &id); err != nil {
stream.Close()
continue
}
// Initialize the waiter
p := m.getStream(id)
select {
case p.ch <- stream:
default:
}
// Wait for a timeout
go m.timeoutWait(id, p)
}
}
func (m *MuxBroker) getStream(id uint32) *muxBrokerPending {
m.Lock()
defer m.Unlock()
p, ok := m.streams[id]
if ok {
return p
}
m.streams[id] = &muxBrokerPending{
ch: make(chan net.Conn, 1),
doneCh: make(chan struct{}),
}
return m.streams[id]
}
func (m *MuxBroker) timeoutWait(id uint32, p *muxBrokerPending) {
// Wait for the stream to either be picked up and connected, or
// for a timeout.
timeout := false
select {
case <-p.doneCh:
case <-time.After(5 * time.Second):
timeout = true
}
m.Lock()
defer m.Unlock()
// Delete the stream so no one else can grab it
delete(m.streams, id)
// If we timed out, then check if we have a channel in the buffer,
// and if so, close it.
if timeout {
select {
case s := <-p.ch:
s.Close()
default:
}
}
}
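A brokered stream always involves the same three steps: reserve an ID, accept on one side, dial on the other. The sketch below is illustrative only; exampleBrokeredPair is a hypothetical helper and assumes the two MuxBrokers were created over opposite ends of the same connection with their Run loops already running.

package plugin

import "fmt"

// exampleBrokeredPair reserves an ID on one broker, accepts on it, and dials
// the same ID from the other broker. In real use the ID travels inside an
// RPC argument or return value rather than a shared variable.
func exampleBrokeredPair(hostSide, pluginSide *MuxBroker) error {
    id := hostSide.NextId()

    acceptErr := make(chan error, 1)
    go func() {
        conn, err := hostSide.Accept(id) // blocks until the peer dials, or 5s
        if err != nil {
            acceptErr <- err
            return
        }
        defer conn.Close()
        _, err = fmt.Fprintln(conn, "brokered stream established")
        acceptErr <- err
    }()

    conn, err := pluginSide.Dial(id)
    if err != nil {
        return err
    }
    defer conn.Close()

    return <-acceptErr
}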

61
vendor/github.com/hashicorp/go-plugin/plugin.go generated vendored Normal file
View File

@ -0,0 +1,61 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
// The plugin package exposes functions and helpers for communicating with
// plugins which are implemented as standalone binary applications.
//
// plugin.Client fully manages the lifecycle of executing the application,
// connecting to it, and returning the RPC client for dispensing plugins.
//
// plugin.Serve fully manages listeners to expose an RPC server from a binary
// that plugin.Client can connect to.
package plugin
import (
"context"
"errors"
"net/rpc"
"google.golang.org/grpc"
)
// Plugin is the interface that is implemented to serve/connect to an
// interface implementation.
type Plugin interface {
// Server should return the RPC server compatible struct to serve
// the methods that the Client calls over net/rpc.
Server(*MuxBroker) (interface{}, error)
// Client returns an interface implementation for the plugin you're
// serving that communicates to the server end of the plugin.
Client(*MuxBroker, *rpc.Client) (interface{}, error)
}
// GRPCPlugin is the interface that is implemented to serve/connect to
// a plugin over gRPC.
type GRPCPlugin interface {
// GRPCServer should register this plugin for serving with the
// given GRPCServer. Unlike Plugin.Server, this is only called once
// since gRPC plugins serve singletons.
GRPCServer(*GRPCBroker, *grpc.Server) error
// GRPCClient should return the interface implementation for the plugin
// you're serving via gRPC. The provided context will be canceled by
// go-plugin in the event of the plugin process exiting.
GRPCClient(context.Context, *GRPCBroker, *grpc.ClientConn) (interface{}, error)
}
// NetRPCUnsupportedPlugin implements Plugin but returns errors for the
// Server and Client functions. This will effectively disable support for
// net/rpc based plugins.
//
// This struct can be embedded in your struct.
type NetRPCUnsupportedPlugin struct{}
func (p NetRPCUnsupportedPlugin) Server(*MuxBroker) (interface{}, error) {
return nil, errors.New("net/rpc plugin protocol not supported")
}
func (p NetRPCUnsupportedPlugin) Client(*MuxBroker, *rpc.Client) (interface{}, error) {
return nil, errors.New("net/rpc plugin protocol not supported")
}
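A concrete Plugin implementation makes the Server/Client split clearer. The sketch below is illustrative only: CounterServer, CounterClient, and counterPlugin are hypothetical types invented for the example, though the "Plugin.Add" method name follows the "Plugin" registration name used by the RPC server later in this package.

package plugin

import "net/rpc"

// CounterServer is the implementation served inside the plugin process.
type CounterServer struct{ n int }

func (s *CounterServer) Add(delta int, total *int) error {
    s.n += delta
    *total = s.n
    return nil
}

// CounterClient wraps the rpc.Client handed to us by go-plugin on the host.
type CounterClient struct{ client *rpc.Client }

func (c *CounterClient) Add(delta int) (int, error) {
    var total int
    err := c.client.Call("Plugin.Add", delta, &total)
    return total, err
}

// counterPlugin ties both halves together for the plugin map.
type counterPlugin struct{}

func (counterPlugin) Server(*MuxBroker) (interface{}, error) {
    return &CounterServer{}, nil
}

func (counterPlugin) Client(_ *MuxBroker, c *rpc.Client) (interface{}, error) {
    return &CounterClient{client: c}, nil
}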

4
vendor/github.com/hashicorp/go-plugin/process.go generated vendored Normal file
View File

@ -0,0 +1,4 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin

48
vendor/github.com/hashicorp/go-plugin/protocol.go generated vendored Normal file
View File

@ -0,0 +1,48 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"io"
"net"
)
// Protocol is an enum representing the types of protocols.
type Protocol string
const (
ProtocolInvalid Protocol = ""
ProtocolNetRPC Protocol = "netrpc"
ProtocolGRPC Protocol = "grpc"
)
// ServerProtocol is an interface that must be implemented for new plugin
// protocols to be servers.
type ServerProtocol interface {
// Init is called once to configure and initialize the protocol, but
// not start listening. This is the point at which all validation should
// be done and errors returned.
Init() error
// Config is extra configuration to be outputted to stdout. This will
// be automatically base64 encoded to ensure it can be parsed properly.
// This can be an empty string if additional configuration is not needed.
Config() string
// Serve is called to serve connections on the given listener. This should
// continue until the listener is closed.
Serve(net.Listener)
}
// ClientProtocol is an interface that must be implemented for new plugin
// protocols to be clients.
type ClientProtocol interface {
io.Closer
// Dispense dispenses a new instance of the plugin with the given name.
Dispense(string) (interface{}, error)
// Ping checks that the client connection is still healthy.
Ping() error
}
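Both concrete clients in this package satisfy ClientProtocol, so callers can treat them uniformly once the protocol has been negotiated. The sketch below is illustrative only; exampleNegotiated is a hypothetical helper.

package plugin

import "fmt"

// exampleNegotiated branches on the negotiated Protocol but dispenses through
// the shared ClientProtocol interface either way.
func exampleNegotiated(p Protocol, c ClientProtocol, name string) (interface{}, error) {
    switch p {
    case ProtocolNetRPC, ProtocolGRPC:
        if err := c.Ping(); err != nil {
            return nil, fmt.Errorf("plugin connection unhealthy: %w", err)
        }
        return c.Dispense(name)
    default:
        return nil, fmt.Errorf("unsupported plugin protocol %q", p)
    }
}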

173
vendor/github.com/hashicorp/go-plugin/rpc_client.go generated vendored Normal file
View File

@ -0,0 +1,173 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"crypto/tls"
"fmt"
"io"
"net"
"net/rpc"
"github.com/hashicorp/yamux"
)
// RPCClient connects to an RPCServer over net/rpc to dispense plugin types.
type RPCClient struct {
broker *MuxBroker
control *rpc.Client
plugins map[string]Plugin
// These are the streams used for the various stdout/err overrides
stdout, stderr net.Conn
}
// newRPCClient creates a new RPCClient. The Client argument is expected
// to be successfully started already with a lock held.
func newRPCClient(c *Client) (*RPCClient, error) {
// Connect to the client
conn, err := net.Dial(c.address.Network(), c.address.String())
if err != nil {
return nil, err
}
if tcpConn, ok := conn.(*net.TCPConn); ok {
// Make sure to set keep alive so that the connection doesn't die
tcpConn.SetKeepAlive(true)
}
if c.config.TLSConfig != nil {
conn = tls.Client(conn, c.config.TLSConfig)
}
// Create the actual RPC client
result, err := NewRPCClient(conn, c.config.Plugins)
if err != nil {
conn.Close()
return nil, err
}
// Begin the stream syncing so that stdin, out, err work properly
err = result.SyncStreams(
c.config.SyncStdout,
c.config.SyncStderr)
if err != nil {
result.Close()
return nil, err
}
return result, nil
}
// NewRPCClient creates a client from an already-open connection-like value.
// Dial is typically used instead.
func NewRPCClient(conn io.ReadWriteCloser, plugins map[string]Plugin) (*RPCClient, error) {
// Create the yamux client so we can multiplex
mux, err := yamux.Client(conn, nil)
if err != nil {
conn.Close()
return nil, err
}
// Connect to the control stream.
control, err := mux.Open()
if err != nil {
mux.Close()
return nil, err
}
// Connect stdout, stderr streams
stdstream := make([]net.Conn, 2)
for i := range stdstream {
stdstream[i], err = mux.Open()
if err != nil {
mux.Close()
return nil, err
}
}
// Create the broker and start it up
broker := newMuxBroker(mux)
go broker.Run()
// Build the client using our broker and control channel.
return &RPCClient{
broker: broker,
control: rpc.NewClient(control),
plugins: plugins,
stdout: stdstream[0],
stderr: stdstream[1],
}, nil
}
// SyncStreams should be called to enable syncing of stdout,
// stderr with the plugin.
//
// This will return immediately and the syncing will continue to happen
// in the background. You do not need to launch this in a goroutine itself.
//
// This should never be called multiple times.
func (c *RPCClient) SyncStreams(stdout io.Writer, stderr io.Writer) error {
go copyStream("stdout", stdout, c.stdout)
go copyStream("stderr", stderr, c.stderr)
return nil
}
// Close closes the connection. The client is no longer usable after this
// is called.
func (c *RPCClient) Close() error {
// Call the control channel and ask it to gracefully exit. If this
// errors, we save the error so that we always return it, but we still
// want to try to close the other channels anyway.
var empty struct{}
returnErr := c.control.Call("Control.Quit", true, &empty)
// Close the other streams we have
if err := c.control.Close(); err != nil {
return err
}
if err := c.stdout.Close(); err != nil {
return err
}
if err := c.stderr.Close(); err != nil {
return err
}
if err := c.broker.Close(); err != nil {
return err
}
// Return back the error we got from Control.Quit. This is very important
// since we MUST return non-nil error if this fails so that Client.Kill
// will properly try a process.Kill.
return returnErr
}
func (c *RPCClient) Dispense(name string) (interface{}, error) {
p, ok := c.plugins[name]
if !ok {
return nil, fmt.Errorf("unknown plugin type: %s", name)
}
var id uint32
if err := c.control.Call(
"Dispenser.Dispense", name, &id); err != nil {
return nil, err
}
conn, err := c.broker.Dial(id)
if err != nil {
return nil, err
}
return p.Client(c.broker, rpc.NewClient(conn))
}
// Ping pings the connection to ensure it is still alive.
//
// The error from the RPC call is returned exactly if you want to inspect
// it for further error analysis. Any error returned from here would indicate
// that the connection to the plugin is not healthy.
func (c *RPCClient) Ping() error {
var empty struct{}
return c.control.Call("Control.Ping", true, &empty)
}
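Putting the client pieces together, a host that already holds a raw connection to the plugin process can build an RPCClient, mirror the plugin's output, and dispense an implementation by name. This is an illustrative sketch; exampleDispense is a hypothetical helper and error handling is kept minimal.

package plugin

import (
    "io"
    "os"
)

// exampleDispense builds an RPCClient over an existing connection, mirrors
// the plugin's stdout/stderr, and dispenses a plugin implementation by name.
func exampleDispense(conn io.ReadWriteCloser, plugins map[string]Plugin, name string) (interface{}, error) {
    client, err := NewRPCClient(conn, plugins)
    if err != nil {
        return nil, err
    }

    // SyncStreams returns immediately and copies in the background.
    if err := client.SyncStreams(os.Stdout, os.Stderr); err != nil {
        client.Close()
        return nil, err
    }

    impl, err := client.Dispense(name)
    if err != nil {
        client.Close()
        return nil, err
    }
    // The caller type-asserts impl to the interface its Plugin.Client returns.
    return impl, nil
}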

209
vendor/github.com/hashicorp/go-plugin/rpc_server.go generated vendored Normal file
View File

@ -0,0 +1,209 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"errors"
"fmt"
"io"
"log"
"net"
"net/rpc"
"sync"
"github.com/hashicorp/yamux"
)
// RPCServer listens for network connections and then dispenses interface
// implementations over net/rpc.
//
// After setting the fields below, they shouldn't be read again directly
// from the structure which may be reading/writing them concurrently.
type RPCServer struct {
Plugins map[string]Plugin
// Stdout, Stderr are what this server will use instead of the
// normal stdin/out/err. Due to the multi-process nature of our
// plugin system, we can't use the normal process values, so we
// make our own custom streams that we pipe across.
Stdout io.Reader
Stderr io.Reader
// DoneCh should be set to a non-nil channel that will be closed
// when the control requests the RPC server to end.
DoneCh chan<- struct{}
lock sync.Mutex
}
// ServerProtocol impl.
func (s *RPCServer) Init() error { return nil }
// ServerProtocol impl.
func (s *RPCServer) Config() string { return "" }
// ServerProtocol impl.
func (s *RPCServer) Serve(lis net.Listener) {
defer s.done()
for {
conn, err := lis.Accept()
if err != nil {
severity := "ERR"
if errors.Is(err, net.ErrClosed) {
severity = "DEBUG"
}
log.Printf("[%s] plugin: plugin server: %s", severity, err)
return
}
go s.ServeConn(conn)
}
}
// ServeConn runs a single connection.
//
// ServeConn blocks, serving the connection until the client hangs up.
func (s *RPCServer) ServeConn(conn io.ReadWriteCloser) {
// First create the yamux server to wrap this connection
mux, err := yamux.Server(conn, nil)
if err != nil {
conn.Close()
log.Printf("[ERR] plugin: error creating yamux server: %s", err)
return
}
// Accept the control connection
control, err := mux.Accept()
if err != nil {
mux.Close()
if err != io.EOF {
log.Printf("[ERR] plugin: error accepting control connection: %s", err)
}
return
}
// Connect the stdstreams (in, out, err)
stdstream := make([]net.Conn, 2)
for i := range stdstream {
stdstream[i], err = mux.Accept()
if err != nil {
mux.Close()
log.Printf("[ERR] plugin: accepting stream %d: %s", i, err)
return
}
}
// Copy std streams out to the proper place
go copyStream("stdout", stdstream[0], s.Stdout)
go copyStream("stderr", stdstream[1], s.Stderr)
// Create the broker and start it up
broker := newMuxBroker(mux)
go broker.Run()
// Use the control connection to build the dispenser and serve the
// connection.
server := rpc.NewServer()
server.RegisterName("Control", &controlServer{
server: s,
})
server.RegisterName("Dispenser", &dispenseServer{
broker: broker,
plugins: s.Plugins,
})
server.ServeConn(control)
}
// done is called internally by the control server to close doneCh, which
// the main process listens on in order to exit cleanly.
func (s *RPCServer) done() {
s.lock.Lock()
defer s.lock.Unlock()
if s.DoneCh != nil {
close(s.DoneCh)
s.DoneCh = nil
}
}
// controlServer is the RPC server the client uses to control the plugin lifecycle (ping and quit).
type controlServer struct {
server *RPCServer
}
// Ping can be called to verify that the connection (and likely the
// binary) to a plugin is still alive.
func (c *controlServer) Ping(
null bool, response *struct{},
) error {
*response = struct{}{}
return nil
}
func (c *controlServer) Quit(
null bool, response *struct{},
) error {
// End the server
c.server.done()
// Always report success
*response = struct{}{}
return nil
}
// dispenseServer dispenses various interface implementations to the client.
type dispenseServer struct {
broker *MuxBroker
plugins map[string]Plugin
}
func (d *dispenseServer) Dispense(
name string, response *uint32,
) error {
// Find the function to create this implementation
p, ok := d.plugins[name]
if !ok {
return fmt.Errorf("unknown plugin type: %s", name)
}
// Create the implementation first so we know if there is an error.
impl, err := p.Server(d.broker)
if err != nil {
// We turn the error into a plain errors.New error so that it serializes across RPC
return errors.New(err.Error())
}
// Reserve an ID for our implementation
id := d.broker.NextId()
*response = id
// Run the rest in a goroutine since it can only happen once this RPC
// call returns. We wait for a connection for the plugin implementation
// and serve it.
go func() {
conn, err := d.broker.Accept(id)
if err != nil {
log.Printf("[ERR] go-plugin: plugin dispense error: %s: %s", name, err)
return
}
serve(conn, "Plugin", impl)
}()
return nil
}
func serve(conn io.ReadWriteCloser, name string, v interface{}) {
server := rpc.NewServer()
if err := server.RegisterName(name, v); err != nil {
log.Printf("[ERR] go-plugin: plugin dispense error: %s", err)
return
}
server.ServeConn(conn)
}
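To make the dispense path above concrete, here is a hedged sketch of a net/rpc Plugin implementation as the dispenser expects it: Server() is what serve() registers under the "Plugin" service name, and Client() is what RPCClient.Dispense hands back to the host. The KV interface and all type names are hypothetical.

```go
package kvplugin // hypothetical package

import (
	"net/rpc"

	plugin "github.com/hashicorp/go-plugin"
)

// KV is the hypothetical interface shared by host and plugin.
type KV interface {
	Get(key string) (string, error)
}

// KVRPCServer is the value the dispenser serves over the brokered connection:
// it wraps the real implementation with net/rpc-friendly methods.
type KVRPCServer struct {
	Impl KV
}

func (s *KVRPCServer) Get(key string, resp *string) error {
	v, err := s.Impl.Get(key)
	*resp = v
	return err
}

// KVRPC is what the host receives back from RPCClient.Dispense.
type KVRPC struct {
	client *rpc.Client
}

func (c *KVRPC) Get(key string) (string, error) {
	var resp string
	// "Plugin" matches the service name used by serve() above.
	err := c.client.Call("Plugin.Get", key, &resp)
	return resp, err
}

// KVPlugin ties both halves together; this is the value placed in the
// plugin map on both the host and the plugin side.
type KVPlugin struct {
	Impl KV
}

func (p *KVPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
	return &KVRPCServer{Impl: p.Impl}, nil
}

func (p *KVPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return &KVRPC{client: c}, nil
}
```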

72
vendor/github.com/hashicorp/go-plugin/runner/runner.go generated vendored Normal file

@ -0,0 +1,72 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package runner
import (
"context"
"io"
)
// Runner defines the interface required by go-plugin to manage the lifecycle
// of a plugin and attempt to negotiate a connection with it. Note that this
// is orthogonal to the protocol and transport used, which is negotiated over stdout.
type Runner interface {
// Start should start the plugin and ensure any work required for servicing
// other interface methods is done. If the context is cancelled, it should
// only abort any attempts to _start_ the plugin. Waiting and shutdown are
// handled separately.
Start(ctx context.Context) error
// Diagnose makes a best-effort attempt to return any debug information that
// might help users understand why a plugin failed to start and negotiate a
// connection.
Diagnose(ctx context.Context) string
// Stdout is used to negotiate the go-plugin protocol.
Stdout() io.ReadCloser
// Stderr is used for forwarding plugin logs to the host process logger.
Stderr() io.ReadCloser
// Name is a human-friendly name for the plugin, such as the path to the
// executable. It does not have to be unique.
Name() string
AttachedRunner
}
// AttachedRunner defines a limited subset of Runner's interface to represent the
// reduced responsibility for plugin lifecycle when attaching to an already running
// plugin.
type AttachedRunner interface {
// Wait should wait until the plugin stops running, whether in response to
// an out of band signal or in response to calling Kill().
Wait(ctx context.Context) error
// Kill should stop the plugin and perform any cleanup required.
Kill(ctx context.Context) error
// ID is a unique identifier to represent the running plugin. e.g. pid or
// container ID.
ID() string
AddrTranslator
}
// AddrTranslator translates addresses between the execution context of the host
// process and the plugin. For example, if the plugin is in a container, the file
// path for a Unix socket may be different between the host and the container.
//
// It is only intended to be used by the host process.
type AddrTranslator interface {
// Called before connecting on any addresses received back from the plugin.
PluginToHost(pluginNet, pluginAddr string) (hostNet string, hostAddr string, err error)
// Called on any host process addresses before they are sent to the plugin.
HostToPlugin(hostNet, hostAddr string) (pluginNet string, pluginAddr string, err error)
}
// ReattachFunc can be passed to a client's reattach config to reattach to an
// already running plugin instead of starting it ourselves.
type ReattachFunc func() (AttachedRunner, error)
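A hedged sketch of an AddrTranslator implementation for the container case described in the comment above; the directory paths and the type name are assumptions made for illustration:

```go
package runnerexample // hypothetical package, not part of go-plugin

import "strings"

// containerAddrTranslator is an illustrative AddrTranslator for a plugin
// running in a container, where the Unix socket directory is mounted at a
// different path inside the container than on the host.
type containerAddrTranslator struct {
	hostSocketDir      string // e.g. /var/run/plugins on the host (assumed)
	containerSocketDir string // e.g. /plugins inside the container (assumed)
}

// PluginToHost rewrites addresses received from the plugin so the host can dial them.
func (t containerAddrTranslator) PluginToHost(pluginNet, pluginAddr string) (string, string, error) {
	if pluginNet == "unix" {
		return pluginNet, strings.Replace(pluginAddr, t.containerSocketDir, t.hostSocketDir, 1), nil
	}
	return pluginNet, pluginAddr, nil
}

// HostToPlugin rewrites host addresses before they are sent to the plugin.
func (t containerAddrTranslator) HostToPlugin(hostNet, hostAddr string) (string, string, error) {
	if hostNet == "unix" {
		return hostNet, strings.Replace(hostAddr, t.hostSocketDir, t.containerSocketDir, 1), nil
	}
	return hostNet, hostAddr, nil
}
```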

665
vendor/github.com/hashicorp/go-plugin/server.go generated vendored Normal file

@ -0,0 +1,665 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"context"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"errors"
"fmt"
"io"
"net"
"os"
"os/signal"
"os/user"
"runtime"
"sort"
"strconv"
"strings"
hclog "github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin/internal/grpcmux"
"google.golang.org/grpc"
)
// CoreProtocolVersion is the ProtocolVersion of the plugin system itself.
// We will increment this whenever we change any protocol behavior. This
// will invalidate any prior plugins but will at least allow us to iterate
// on the core in a safe way. We will do our best to do this very
// infrequently.
const CoreProtocolVersion = 1
// HandshakeConfig is the configuration used by client and servers to
// handshake before starting a plugin connection. This is embedded by
// both ServeConfig and ClientConfig.
//
// In practice, the plugin host creates a HandshakeConfig that is exported
// and plugins can then easily consume it.
type HandshakeConfig struct {
// ProtocolVersion is the version that clients must match on to
// agree they can communicate. This should match the ProtocolVersion
// set on ClientConfig when using a plugin.
// This field is not required if VersionedPlugins are being used in the
// Client or Server configurations.
ProtocolVersion uint
// MagicCookieKey and value are used as a very basic verification
// that a plugin is intended to be launched. This is not a security
// measure, just a UX feature. If the magic cookie doesn't match,
// we show human-friendly output.
MagicCookieKey string
MagicCookieValue string
}
// PluginSet is a set of plugins provided to be registered in the plugin
// server.
type PluginSet map[string]Plugin
// ServeConfig configures what sorts of plugins are served.
type ServeConfig struct {
// HandshakeConfig is the configuration that must match clients.
HandshakeConfig
// TLSProvider is a function that returns a configured tls.Config.
TLSProvider func() (*tls.Config, error)
// Plugins are the plugins that are served.
// The implied version of this PluginSet is the Handshake.ProtocolVersion.
Plugins PluginSet
// VersionedPlugins is a map of PluginSets for specific protocol versions.
// These can be used to negotiate a compatible version between client and
// server. If this is set, Handshake.ProtocolVersion is not required.
VersionedPlugins map[int]PluginSet
// GRPCServer should be non-nil to enable serving the plugins over
// gRPC. This is a function to create the server when needed with the
// given server options. The server options populated by go-plugin will
// be for TLS if set. You may modify the input slice.
//
// Note that the grpc.Server will automatically be registered with
// the gRPC health checking service. This is not optional since go-plugin
// relies on this to implement Ping().
GRPCServer func([]grpc.ServerOption) *grpc.Server
// Logger is used to pass a logger into the server. If none is provided the
// server will create a default logger.
Logger hclog.Logger
// Test, if non-nil, will put plugin serving into "test mode". This is
// meant to be used as part of `go test` within a plugin's codebase to
// launch the plugin in-process and output a ReattachConfig.
//
// This changes the behavior of the server in a number of ways to
// accommodate the expectation of running in-process:
//
// * The handshake cookie is not validated.
// * Stdout/stderr will receive plugin reads and writes
// * Connection information will not be sent to stdout
//
Test *ServeTestConfig
}
// ServeTestConfig configures plugin serving for test mode. See ServeConfig.Test.
type ServeTestConfig struct {
// Context, if set, will force the plugin serving to end when cancelled.
// This is only a test configuration because the non-test configuration
// expects to take over the process and therefore end on an interrupt or
// kill signal. For tests, we need to kill the plugin serving routinely
// and this provides a way to do so.
//
// If you want to wait for the plugin process to close before moving on,
// you can wait on CloseCh.
Context context.Context
// If this channel is non-nil, we will send the ReattachConfig via
// this channel. This can be encoded (via JSON recommended) to the
// plugin client to attach to this plugin.
ReattachConfigCh chan<- *ReattachConfig
// CloseCh, if non-nil, will be closed when serving exits. This can be
// used along with Context to determine when the server is fully shut down.
// If this is not set, you can still use Context on its own, but note there
// may be a period of time between canceling the context and the plugin
// server being shut down.
CloseCh chan<- struct{}
// SyncStdio, if true, will enable the client side "SyncStdout/Stderr"
// functionality to work. This defaults to false because the implementation
// of making this work within test environments is particularly messy
// and SyncStdio functionality is fairly rare, so we default to the simple
// scenario.
SyncStdio bool
}
func unixSocketConfigFromEnv() UnixSocketConfig {
return UnixSocketConfig{
Group: os.Getenv(EnvUnixSocketGroup),
socketDir: os.Getenv(EnvUnixSocketDir),
}
}
// protocolVersion determines the protocol version and plugin set to be used by
// the server. In the event that there is no suitable version, the last version
// in the config is returned leaving the client to report the incompatibility.
func protocolVersion(opts *ServeConfig) (int, Protocol, PluginSet) {
protoVersion := int(opts.ProtocolVersion)
pluginSet := opts.Plugins
protoType := ProtocolNetRPC
// Check if the client sent a list of acceptable versions
var clientVersions []int
if vs := os.Getenv("PLUGIN_PROTOCOL_VERSIONS"); vs != "" {
for _, s := range strings.Split(vs, ",") {
v, err := strconv.Atoi(s)
if err != nil {
fmt.Fprintf(os.Stderr, "server sent invalid plugin version %q", s)
continue
}
clientVersions = append(clientVersions, v)
}
}
// We want to iterate in reverse order, to ensure we match the newest
// compatible plugin version.
sort.Sort(sort.Reverse(sort.IntSlice(clientVersions)))
// set the old un-versioned fields as if they were versioned plugins
if opts.VersionedPlugins == nil {
opts.VersionedPlugins = make(map[int]PluginSet)
}
if pluginSet != nil {
opts.VersionedPlugins[protoVersion] = pluginSet
}
// Sort the versions to make sure we match the latest first
var versions []int
for v := range opts.VersionedPlugins {
versions = append(versions, v)
}
sort.Sort(sort.Reverse(sort.IntSlice(versions)))
// See if we have multiple versions of Plugins to choose from
for _, version := range versions {
// Record each version, since we guarantee that this returns valid
// values even if they are not a protocol match.
protoVersion = version
pluginSet = opts.VersionedPlugins[version]
// If we have a configured gRPC server we should select a protocol
if opts.GRPCServer != nil {
// All plugins in a set must use the same transport, so check the first
// for the protocol type
for _, p := range pluginSet {
switch p.(type) {
case GRPCPlugin:
protoType = ProtocolGRPC
default:
protoType = ProtocolNetRPC
}
break
}
}
for _, clientVersion := range clientVersions {
if clientVersion == protoVersion {
return protoVersion, protoType, pluginSet
}
}
}
// Return the lowest version as the fallback.
// Since we iterated over all the versions in reverse order above, these
// values are from the lowest version number plugins (which may be from
// a combination of the Handshake.ProtocolVersion and ServeConfig.Plugins
// fields). This allows serving the oldest version of our plugins to a
// legacy client that did not send a PLUGIN_PROTOCOL_VERSIONS list.
return protoVersion, protoType, pluginSet
}
// Serve serves the plugins given by ServeConfig.
//
// Serve doesn't return until the plugin is done being executed. Any
// fixable errors will be output to os.Stderr and the process will
// exit with a status code of 1. Serve will panic for unexpected
// conditions where a user's fix is unknown.
//
// This is the method that plugins should call in their main() functions.
func Serve(opts *ServeConfig) {
exitCode := -1
// We use this to trigger an `os.Exit` so that we can execute our other
// deferred functions. In test mode, we just output the err to stderr
// and return.
defer func() {
if opts.Test == nil && exitCode >= 0 {
os.Exit(exitCode)
}
if opts.Test != nil && opts.Test.CloseCh != nil {
close(opts.Test.CloseCh)
}
}()
if opts.Test == nil {
// Validate the handshake config
if opts.MagicCookieKey == "" || opts.MagicCookieValue == "" {
fmt.Fprintf(os.Stderr,
"Misconfigured ServeConfig given to serve this plugin: no magic cookie\n"+
"key or value was set. Please notify the plugin author and report\n"+
"this as a bug.\n")
exitCode = 1
return
}
// First check the cookie
if os.Getenv(opts.MagicCookieKey) != opts.MagicCookieValue {
fmt.Fprintf(os.Stderr,
"This binary is a plugin. These are not meant to be executed directly.\n"+
"Please execute the program that consumes these plugins, which will\n"+
"load any plugins automatically\n")
exitCode = 1
return
}
}
// negotiate the version and plugins
// start with default version in the handshake config
protoVersion, protoType, pluginSet := protocolVersion(opts)
logger := opts.Logger
if logger == nil {
// internal logger to os.Stderr
logger = hclog.New(&hclog.LoggerOptions{
Level: hclog.Trace,
Output: os.Stderr,
JSONFormat: true,
})
}
// Register a listener so we can accept a connection
listener, err := serverListener(unixSocketConfigFromEnv())
if err != nil {
logger.Error("plugin init error", "error", err)
return
}
// Close the listener on return. We wrap this in a func() on purpose
// because the "listener" reference may change to TLS.
defer func() {
listener.Close()
}()
var tlsConfig *tls.Config
if opts.TLSProvider != nil {
tlsConfig, err = opts.TLSProvider()
if err != nil {
logger.Error("plugin tls init", "error", err)
return
}
}
var serverCert string
clientCert := os.Getenv("PLUGIN_CLIENT_CERT")
// If the client is configured using AutoMTLS, the certificate will be here,
// and we need to generate our own in response.
if tlsConfig == nil && clientCert != "" {
logger.Info("configuring server automatic mTLS")
clientCertPool := x509.NewCertPool()
if !clientCertPool.AppendCertsFromPEM([]byte(clientCert)) {
logger.Error("client cert provided but failed to parse", "cert", clientCert)
}
certPEM, keyPEM, err := generateCert()
if err != nil {
logger.Error("failed to generate server certificate", "error", err)
panic(err)
}
cert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
logger.Error("failed to parse server certificate", "error", err)
panic(err)
}
tlsConfig = &tls.Config{
Certificates: []tls.Certificate{cert},
ClientAuth: tls.RequireAndVerifyClientCert,
ClientCAs: clientCertPool,
MinVersion: tls.VersionTLS12,
RootCAs: clientCertPool,
ServerName: "localhost",
}
// We send back the raw leaf cert data for the client rather than the
// PEM, since the protocol can't handle newlines.
serverCert = base64.RawStdEncoding.EncodeToString(cert.Certificate[0])
}
// Create the channel to tell us when we're done
doneCh := make(chan struct{})
// Create our new stdout, stderr files. These will override our built-in
// stdout/stderr so that it works across the stream boundary.
var stdout_r, stderr_r io.Reader
stdout_r, stdout_w, err := os.Pipe()
if err != nil {
fmt.Fprintf(os.Stderr, "Error preparing plugin: %s\n", err)
os.Exit(1)
}
stderr_r, stderr_w, err := os.Pipe()
if err != nil {
fmt.Fprintf(os.Stderr, "Error preparing plugin: %s\n", err)
os.Exit(1)
}
// If we're in test mode, we tee off the reader and write the data
// as-is to our normal Stdout and Stderr so that they continue working
// while stdio works. This is because in test mode, we assume we're running
// in `go test` or some equivalent and we want output to go to standard
// locations.
if opts.Test != nil {
// TODO(mitchellh): This isn't super ideal because a TeeReader
// only works if the reader side is actively read. If we never
// connect via a plugin client, the output still gets swallowed.
stdout_r = io.TeeReader(stdout_r, os.Stdout)
stderr_r = io.TeeReader(stderr_r, os.Stderr)
}
// Build the server type
var server ServerProtocol
switch protoType {
case ProtocolNetRPC:
// If we have a TLS configuration then we wrap the listener
// ourselves and do it at that level.
if tlsConfig != nil {
listener = tls.NewListener(listener, tlsConfig)
}
// Create the RPC server to dispense
server = &RPCServer{
Plugins: pluginSet,
Stdout: stdout_r,
Stderr: stderr_r,
DoneCh: doneCh,
}
case ProtocolGRPC:
var muxer *grpcmux.GRPCServerMuxer
if multiplex, _ := strconv.ParseBool(os.Getenv(envMultiplexGRPC)); multiplex {
muxer = grpcmux.NewGRPCServerMuxer(logger, listener)
listener = muxer
}
// Create the gRPC server
server = &GRPCServer{
Plugins: pluginSet,
Server: opts.GRPCServer,
TLS: tlsConfig,
Stdout: stdout_r,
Stderr: stderr_r,
DoneCh: doneCh,
logger: logger,
muxer: muxer,
}
default:
panic("unknown server protocol: " + protoType)
}
// Initialize the servers
if err := server.Init(); err != nil {
logger.Error("protocol init", "error", err)
return
}
logger.Debug("plugin address", "network", listener.Addr().Network(), "address", listener.Addr().String())
// Output the address and service name to stdout so that the client can
// bring it up. In test mode, we don't do this because clients will
// attach via a reattach config.
if opts.Test == nil {
const grpcBrokerMultiplexingSupported = true
protocolLine := fmt.Sprintf("%d|%d|%s|%s|%s|%s",
CoreProtocolVersion,
protoVersion,
listener.Addr().Network(),
listener.Addr().String(),
protoType,
serverCert)
// Old clients will error with new plugins if we blindly append the
// seventh segment for gRPC broker multiplexing support, because old
// client code uses strings.SplitN(line, "|", 6), which means a seventh
// segment will get appended to the sixth segment as "sixthpart|true".
//
// If the environment variable is set, we assume the client is new enough
// to handle a seventh segment, as it should now use
// strings.Split(line, "|") and always handle each segment individually.
if os.Getenv(envMultiplexGRPC) != "" {
protocolLine += fmt.Sprintf("|%v", grpcBrokerMultiplexingSupported)
}
fmt.Printf("%s\n", protocolLine)
os.Stdout.Sync()
} else if ch := opts.Test.ReattachConfigCh; ch != nil {
// Send back the reattach config that can be used. This isn't
// quite ready if they connect immediately but the client should
// retry a few times.
ch <- &ReattachConfig{
Protocol: protoType,
ProtocolVersion: protoVersion,
Addr: listener.Addr(),
Pid: os.Getpid(),
Test: true,
}
}
// Eat the interrupts. In test mode we disable this so that go test
// can be cancelled properly.
if opts.Test == nil {
ch := make(chan os.Signal, 1)
signal.Notify(ch, os.Interrupt)
go func() {
count := 0
for {
<-ch
count++
logger.Trace("plugin received interrupt signal, ignoring", "count", count)
}
}()
}
// Set our stdout, stderr to the stdio stream that clients can retrieve
// using ClientConfig.SyncStdout/err. We only do this for non-test mode
// or if the test mode explicitly requests it.
//
// In test mode, we use a multiwriter so that the data continues going
// to the normal stdout/stderr so output can show up in test logs. We
// also send to the stdio stream so that clients can continue working
// if they depend on that.
if opts.Test == nil || opts.Test.SyncStdio {
if opts.Test != nil {
// In test mode we need to maintain the original values so we can
// reset it.
defer func(out, err *os.File) {
os.Stdout = out
os.Stderr = err
}(os.Stdout, os.Stderr)
}
os.Stdout = stdout_w
os.Stderr = stderr_w
}
// Accept connections and wait for completion
go server.Serve(listener)
ctx := context.Background()
if opts.Test != nil && opts.Test.Context != nil {
ctx = opts.Test.Context
}
select {
case <-ctx.Done():
// Cancellation. We can stop the server by closing the listener.
// This isn't graceful at all but this is currently only used by
// tests and it's our only way to stop.
listener.Close()
// If this is a grpc server, then we also ask the server itself to
// end which will kill all connections. There isn't an easy way to do
// this for net/rpc currently but net/rpc is more and more unused.
if s, ok := server.(*GRPCServer); ok {
s.Stop()
}
// Wait for the server itself to shut down
<-doneCh
case <-doneCh:
// Note that given the documentation of Serve we should probably be
// setting exitCode = 0 and using os.Exit here. That's how it used to
// work before extracting this library. However, for years we've done
// this so we'll keep this functionality.
}
}
func serverListener(unixSocketCfg UnixSocketConfig) (net.Listener, error) {
if runtime.GOOS == "windows" {
return serverListener_tcp()
}
return serverListener_unix(unixSocketCfg)
}
func serverListener_tcp() (net.Listener, error) {
envMinPort := os.Getenv("PLUGIN_MIN_PORT")
envMaxPort := os.Getenv("PLUGIN_MAX_PORT")
var minPort, maxPort int64
var err error
switch {
case len(envMinPort) == 0:
minPort = 0
default:
minPort, err = strconv.ParseInt(envMinPort, 10, 32)
if err != nil {
return nil, fmt.Errorf("Couldn't get value from PLUGIN_MIN_PORT: %v", err)
}
}
switch {
case len(envMaxPort) == 0:
maxPort = 0
default:
maxPort, err = strconv.ParseInt(envMaxPort, 10, 32)
if err != nil {
return nil, fmt.Errorf("Couldn't get value from PLUGIN_MAX_PORT: %v", err)
}
}
if minPort > maxPort {
return nil, fmt.Errorf("PLUGIN_MIN_PORT value of %d is greater than PLUGIN_MAX_PORT value of %d", minPort, maxPort)
}
for port := minPort; port <= maxPort; port++ {
address := fmt.Sprintf("127.0.0.1:%d", port)
listener, err := net.Listen("tcp", address)
if err == nil {
return listener, nil
}
}
return nil, errors.New("Couldn't bind plugin TCP listener")
}
func serverListener_unix(unixSocketCfg UnixSocketConfig) (net.Listener, error) {
tf, err := os.CreateTemp(unixSocketCfg.socketDir, "plugin")
if err != nil {
return nil, err
}
path := tf.Name()
// Close the file and remove it because it must not exist before
// the domain socket is created.
if err := tf.Close(); err != nil {
return nil, err
}
if err := os.Remove(path); err != nil {
return nil, err
}
l, err := net.Listen("unix", path)
if err != nil {
return nil, err
}
// By default, unix sockets are only writable by the owner. Set up a custom
// group owner and group write permissions if configured.
if unixSocketCfg.Group != "" {
err = setGroupWritable(path, unixSocketCfg.Group, 0o660)
if err != nil {
return nil, err
}
}
// Wrap the listener in rmListener so that the Unix domain socket file
// is removed on close.
return newDeleteFileListener(l, path), nil
}
func setGroupWritable(path, groupString string, mode os.FileMode) error {
groupID, err := strconv.Atoi(groupString)
if err != nil {
group, err := user.LookupGroup(groupString)
if err != nil {
return fmt.Errorf("failed to find gid from %q: %w", groupString, err)
}
groupID, err = strconv.Atoi(group.Gid)
if err != nil {
return fmt.Errorf("failed to parse %q group's gid as an integer: %w", groupString, err)
}
}
err = os.Chown(path, -1, groupID)
if err != nil {
return err
}
err = os.Chmod(path, mode)
if err != nil {
return err
}
return nil
}
// rmListener is an implementation of net.Listener that forwards most
// calls to the listener but also calls an additional close function. We
// use this to clean up the unix domain socket on close, as well as clean
// up multiplexed listeners.
type rmListener struct {
net.Listener
close func() error
}
func newDeleteFileListener(ln net.Listener, path string) *rmListener {
return &rmListener{
Listener: ln,
close: func() error {
return os.Remove(path)
},
}
}
func (l *rmListener) Close() error {
// Close the listener itself
if err := l.Listener.Close(); err != nil {
return err
}
// Remove the file
return l.close()
}
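A minimal sketch of the plugin-side main() that drives Serve, using an illustrative handshake and a stand-in no-op plugin; a real plugin would register a working RPC implementation instead:

```go
package main

import (
	"net/rpc"

	plugin "github.com/hashicorp/go-plugin"
)

// handshake must match the host's ClientConfig; the key and value here are
// illustrative only, not values used by any particular host.
var handshake = plugin.HandshakeConfig{
	ProtocolVersion:  1,
	MagicCookieKey:   "EXAMPLE_PLUGIN",
	MagicCookieValue: "hello",
}

// noopPlugin is a stand-in plugin.Plugin so the sketch is self-contained;
// a real plugin would return an RPC-servable implementation from Server.
type noopPlugin struct{}

func (noopPlugin) Server(*plugin.MuxBroker) (interface{}, error) { return struct{}{}, nil }
func (noopPlugin) Client(*plugin.MuxBroker, *rpc.Client) (interface{}, error) {
	return struct{}{}, nil
}

func main() {
	// Serve blocks: it checks the magic cookie, negotiates a protocol
	// version, prints the handshake line on stdout, and then serves the
	// plugin set until the host asks it to quit.
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: handshake,
		Plugins: plugin.PluginSet{
			"noop": noopPlugin{},
		},
	})
}
```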

34
vendor/github.com/hashicorp/go-plugin/server_mux.go generated vendored Normal file

@ -0,0 +1,34 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"fmt"
"os"
)
// ServeMuxMap is the type that is used to configure ServeMux
type ServeMuxMap map[string]*ServeConfig
// ServeMux is like Serve, but serves multiple types of plugins determined
// by the argument given on the command-line.
//
// This command doesn't return until the plugin is done being executed. Any
// errors are logged or output to stderr.
func ServeMux(m ServeMuxMap) {
if len(os.Args) != 2 {
fmt.Fprintf(os.Stderr,
"Invoked improperly. This is an internal command that shouldn't\n"+
"be manually invoked.\n")
os.Exit(1)
}
opts, ok := m[os.Args[1]]
if !ok {
fmt.Fprintf(os.Stderr, "Unknown plugin: %s\n", os.Args[1])
os.Exit(1)
}
Serve(opts)
}
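A shape-only sketch of ServeMux usage, assuming two ServeConfig values built like the Serve example above; the names "kv" and "cache" and both config variables are hypothetical:

```go
package main

import plugin "github.com/hashicorp/go-plugin"

// kvConfig and cacheConfig stand in for fully populated *plugin.ServeConfig
// values (handshake, plugin set); only the ServeMuxMap shape matters here.
var (
	kvConfig    = &plugin.ServeConfig{ /* HandshakeConfig, Plugins, ... */ }
	cacheConfig = &plugin.ServeConfig{ /* HandshakeConfig, Plugins, ... */ }
)

func main() {
	// The first command-line argument selects which plugin type this
	// binary serves, e.g. `./multi-plugin kv`.
	plugin.ServeMux(plugin.ServeMuxMap{
		"kv":    kvConfig,
		"cache": cacheConfig,
	})
}
```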

21
vendor/github.com/hashicorp/go-plugin/stream.go generated vendored Normal file

@ -0,0 +1,21 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"io"
"log"
)
func copyStream(name string, dst io.Writer, src io.Reader) {
if src == nil {
panic(name + ": src is nil")
}
if dst == nil {
panic(name + ": dst is nil")
}
if _, err := io.Copy(dst, src); err != nil && err != io.EOF {
log.Printf("[ERR] plugin: stream copy '%s' error: %s", name, err)
}
}

185
vendor/github.com/hashicorp/go-plugin/testing.go generated vendored Normal file

@ -0,0 +1,185 @@
// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package plugin
import (
"bytes"
"context"
"io"
"net"
"net/rpc"
hclog "github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-plugin/internal/grpcmux"
"github.com/mitchellh/go-testing-interface"
"google.golang.org/grpc"
)
// TestOptions allows specifying options that can affect the behavior of the
// test functions
type TestOptions struct {
// ServerStdout causes the given value to be used in place of a blank buffer
// for RPCServer's Stdout.
ServerStdout io.ReadCloser
// ServerStderr causes the given value to be used in place of a blank buffer
// for RPCServer's Stderr.
ServerStderr io.ReadCloser
}
// The testing file contains test helpers that you can use outside of
// this package to make it easier to test plugins themselves.
// TestConn is a helper function for returning a client and server
// net.Conn connected to each other.
func TestConn(t testing.T) (net.Conn, net.Conn) {
// Listen to any local port. This listener will be closed
// after a single connection is established.
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("err: %s", err)
}
// Start a goroutine to accept our client connection
var serverConn net.Conn
doneCh := make(chan struct{})
go func() {
defer close(doneCh)
defer l.Close()
var err error
serverConn, err = l.Accept()
if err != nil {
t.Fatalf("err: %s", err)
}
}()
// Connect to the server
clientConn, err := net.Dial("tcp", l.Addr().String())
if err != nil {
t.Fatalf("err: %s", err)
}
// Wait for the server side to acknowledge it has connected
<-doneCh
return clientConn, serverConn
}
// TestRPCConn returns a rpc client and server connected to each other.
func TestRPCConn(t testing.T) (*rpc.Client, *rpc.Server) {
clientConn, serverConn := TestConn(t)
server := rpc.NewServer()
go server.ServeConn(serverConn)
client := rpc.NewClient(clientConn)
return client, server
}
// TestPluginRPCConn returns a plugin RPC client and server that are connected
// together and configured.
func TestPluginRPCConn(t testing.T, ps map[string]Plugin, opts *TestOptions) (*RPCClient, *RPCServer) {
// Create two net.Conns we can use to shuttle our control connection
clientConn, serverConn := TestConn(t)
// Start up the server
server := &RPCServer{Plugins: ps, Stdout: new(bytes.Buffer), Stderr: new(bytes.Buffer)}
if opts != nil {
if opts.ServerStdout != nil {
server.Stdout = opts.ServerStdout
}
if opts.ServerStderr != nil {
server.Stderr = opts.ServerStderr
}
}
go server.ServeConn(serverConn)
// Connect the client to the server
client, err := NewRPCClient(clientConn, ps)
if err != nil {
t.Fatalf("err: %s", err)
}
return client, server
}
// TestGRPCConn returns a gRPC client conn and grpc server that are connected
// together and configured. The register function is used to register services
// prior to the Serve call. This is used to test gRPC connections.
func TestGRPCConn(t testing.T, register func(*grpc.Server)) (*grpc.ClientConn, *grpc.Server) {
// Create a listener
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("err: %s", err)
}
server := grpc.NewServer()
register(server)
go server.Serve(l)
// Connect to the server
conn, err := grpc.Dial(
l.Addr().String(),
grpc.WithBlock(),
grpc.WithInsecure())
if err != nil {
t.Fatalf("err: %s", err)
}
// Connection successful, close the listener
l.Close()
return conn, server
}
// TestPluginGRPCConn returns a plugin gRPC client and server that are connected
// together and configured. This is used to test gRPC connections.
func TestPluginGRPCConn(t testing.T, multiplex bool, ps map[string]Plugin) (*GRPCClient, *GRPCServer) {
// Create a listener
ln, err := serverListener(UnixSocketConfig{})
if err != nil {
t.Fatal(err)
}
logger := hclog.New(&hclog.LoggerOptions{
Level: hclog.Debug,
})
// Start up the server
var muxer *grpcmux.GRPCServerMuxer
if multiplex {
muxer = grpcmux.NewGRPCServerMuxer(logger, ln)
ln = muxer
}
server := &GRPCServer{
Plugins: ps,
DoneCh: make(chan struct{}),
Server: DefaultGRPCServer,
Stdout: new(bytes.Buffer),
Stderr: new(bytes.Buffer),
logger: logger,
muxer: muxer,
}
if err := server.Init(); err != nil {
t.Fatalf("err: %s", err)
}
go server.Serve(ln)
client := &Client{
address: ln.Addr(),
protocol: ProtocolGRPC,
config: &ClientConfig{
Plugins: ps,
GRPCBrokerMultiplex: multiplex,
},
logger: logger,
}
grpcClient, err := newGRPCClient(context.Background(), client)
if err != nil {
t.Fatal(err)
}
return grpcClient, server
}
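A hedged sketch of how TestPluginRPCConn can be used from a plugin's own test suite; the echo plugin and all names are illustrative:

```go
package plugintest // hypothetical package

import (
	"net/rpc"
	"testing"

	plugin "github.com/hashicorp/go-plugin"
)

// EchoServer is a tiny net/rpc service used only to exercise the helpers.
type EchoServer struct{}

func (EchoServer) Echo(in string, out *string) error { *out = in; return nil }

type echoPlugin struct{}

func (echoPlugin) Server(*plugin.MuxBroker) (interface{}, error) { return EchoServer{}, nil }
func (echoPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return c, nil // hand back the raw rpc client for the test
}

func TestEchoOverRPC(t *testing.T) {
	// *testing.T satisfies the go-testing-interface type the helper expects.
	client, _ := plugin.TestPluginRPCConn(t, map[string]plugin.Plugin{
		"echo": echoPlugin{},
	}, nil)

	raw, err := client.Dispense("echo")
	if err != nil {
		t.Fatal(err)
	}

	var out string
	// "Plugin" is the service name used by the dispense server above.
	if err := raw.(*rpc.Client).Call("Plugin.Echo", "hello", &out); err != nil {
		t.Fatal(err)
	}
	if out != "hello" {
		t.Fatalf("unexpected echo: %q", out)
	}
}
```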

vendor/github.com/hashicorp/golang-lru/README.md generated vendored

@ -1,25 +1,7 @@
golang-lru
==========
This provides the `lru` package which implements a fixed-size
thread safe LRU cache. It is based on the cache in Groupcache.
Documentation
=============
Full docs are available on [Godoc](https://pkg.go.dev/github.com/hashicorp/golang-lru)
Example
=======
Using the LRU is very simple:
```go
l, _ := New(128)
for i := 0; i < 256; i++ {
l.Add(i, nil)
}
if l.Len() != 128 {
panic(fmt.Sprintf("bad len: %v", l.Len()))
}
```
Please upgrade to github.com/hashicorp/golang-lru/v2 for all new code as v1 will
not be updated anymore. The v2 version supports generics and is faster; old code
can specify a specific tag, e.g. github.com/hashicorp/golang-lru/v1.0.2 for
backwards compatibility.

23
vendor/github.com/hashicorp/yamux/.gitignore generated vendored Normal file

@ -0,0 +1,23 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test

362
vendor/github.com/hashicorp/yamux/LICENSE generated vendored Normal file

@ -0,0 +1,362 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

86
vendor/github.com/hashicorp/yamux/README.md generated vendored Normal file

@ -0,0 +1,86 @@
# Yamux
Yamux (Yet another Multiplexer) is a multiplexing library for Golang.
It relies on an underlying connection to provide reliability
and ordering, such as TCP or Unix domain sockets, and provides
stream-oriented multiplexing. It is inspired by SPDY but is not
interoperable with it.
Yamux features include:
* Bi-directional streams
* Streams can be opened by either client or server
* Useful for NAT traversal
* Server-side push support
* Flow control
* Avoid starvation
* Back-pressure to prevent overwhelming a receiver
* Keep Alives
* Enables persistent connections over a load balancer
* Efficient
* Enables thousands of logical streams with low overhead
## Documentation
For complete documentation, see the associated [Godoc](http://godoc.org/github.com/hashicorp/yamux).
## Specification
The full specification for Yamux is provided in the `spec.md` file.
It can be used as a guide to implementors of interoperable libraries.
## Usage
Using Yamux is remarkably simple:
```go
func client() {
// Get a TCP connection
conn, err := net.Dial(...)
if err != nil {
panic(err)
}
// Setup client side of yamux
session, err := yamux.Client(conn, nil)
if err != nil {
panic(err)
}
// Open a new stream
stream, err := session.Open()
if err != nil {
panic(err)
}
// Stream implements net.Conn
stream.Write([]byte("ping"))
}
func server() {
// Accept a TCP connection
conn, err := listener.Accept()
if err != nil {
panic(err)
}
// Setup server side of yamux
session, err := yamux.Server(conn, nil)
if err != nil {
panic(err)
}
// Accept a stream
stream, err := session.Accept()
if err != nil {
panic(err)
}
// Listen for a message
buf := make([]byte, 4)
stream.Read(buf)
}
```

60
vendor/github.com/hashicorp/yamux/addr.go generated vendored Normal file

@ -0,0 +1,60 @@
package yamux
import (
"fmt"
"net"
)
// hasAddr is used to get the address from the underlying connection
type hasAddr interface {
LocalAddr() net.Addr
RemoteAddr() net.Addr
}
// yamuxAddr is used when we cannot get the underlying address
type yamuxAddr struct {
Addr string
}
func (*yamuxAddr) Network() string {
return "yamux"
}
func (y *yamuxAddr) String() string {
return fmt.Sprintf("yamux:%s", y.Addr)
}
// Addr is used to get the address of the listener.
func (s *Session) Addr() net.Addr {
return s.LocalAddr()
}
// LocalAddr is used to get the local address of the
// underlying connection.
func (s *Session) LocalAddr() net.Addr {
addr, ok := s.conn.(hasAddr)
if !ok {
return &yamuxAddr{"local"}
}
return addr.LocalAddr()
}
// RemoteAddr is used to get the address of remote end
// of the underlying connection
func (s *Session) RemoteAddr() net.Addr {
addr, ok := s.conn.(hasAddr)
if !ok {
return &yamuxAddr{"remote"}
}
return addr.RemoteAddr()
}
// LocalAddr returns the local address
func (s *Stream) LocalAddr() net.Addr {
return s.session.LocalAddr()
}
// RemoteAddr returns the remote address
func (s *Stream) RemoteAddr() net.Addr {
return s.session.RemoteAddr()
}

182
vendor/github.com/hashicorp/yamux/const.go generated vendored Normal file

@ -0,0 +1,182 @@
package yamux
import (
"encoding/binary"
"fmt"
)
// NetError implements net.Error
type NetError struct {
err error
timeout bool
temporary bool
}
func (e *NetError) Error() string {
return e.err.Error()
}
func (e *NetError) Timeout() bool {
return e.timeout
}
func (e *NetError) Temporary() bool {
return e.temporary
}
var (
// ErrInvalidVersion means we received a frame with an
// invalid version
ErrInvalidVersion = fmt.Errorf("invalid protocol version")
// ErrInvalidMsgType means we received a frame with an
// invalid message type
ErrInvalidMsgType = fmt.Errorf("invalid msg type")
// ErrSessionShutdown is used if there is a shutdown during
// an operation
ErrSessionShutdown = fmt.Errorf("session shutdown")
// ErrStreamsExhausted is returned if we have no more
// stream ids to issue
ErrStreamsExhausted = fmt.Errorf("streams exhausted")
// ErrDuplicateStream is used if a duplicate stream is
// opened inbound
ErrDuplicateStream = fmt.Errorf("duplicate stream initiated")
// ErrRecvWindowExceeded indicates the receive window was exceeded
ErrRecvWindowExceeded = fmt.Errorf("recv window exceeded")
// ErrTimeout is used when we reach an IO deadline
ErrTimeout = &NetError{
err: fmt.Errorf("i/o deadline reached"),
// Error should meet the net.Error interface for timeouts for compatibility
// with standard library expectations, such as http servers.
timeout: true,
}
// ErrStreamClosed is returned when using a closed stream
ErrStreamClosed = fmt.Errorf("stream closed")
// ErrUnexpectedFlag is set when we get an unexpected flag
ErrUnexpectedFlag = fmt.Errorf("unexpected flag")
// ErrRemoteGoAway is used when we get a go away from the other side
ErrRemoteGoAway = fmt.Errorf("remote end is not accepting connections")
// ErrConnectionReset is sent if a stream is reset. This can happen
// if the backlog is exceeded, or if there was a remote GoAway.
ErrConnectionReset = fmt.Errorf("connection reset")
// ErrConnectionWriteTimeout indicates that we hit the "safety valve"
// timeout writing to the underlying stream connection.
ErrConnectionWriteTimeout = fmt.Errorf("connection write timeout")
// ErrKeepAliveTimeout is sent if a missed keepalive caused the stream close
ErrKeepAliveTimeout = fmt.Errorf("keepalive timeout")
)
const (
// protoVersion is the only version we support
protoVersion uint8 = 0
)
const (
// Data is used for data frames. They are followed
// by length bytes worth of payload.
typeData uint8 = iota
// WindowUpdate is used to change the window of
// a given stream. The length indicates the delta
// update to the window.
typeWindowUpdate
// Ping is sent as a keep-alive or to measure
// the RTT. The StreamID and Length value are echoed
// back in the response.
typePing
// GoAway is sent to terminate a session. The StreamID
// should be 0 and the length is an error code.
typeGoAway
)
const (
// SYN is sent to signal a new stream. May
// be sent with a data payload
flagSYN uint16 = 1 << iota
// ACK is sent to acknowledge a new stream. May
// be sent with a data payload
flagACK
// FIN is sent to half-close the given stream.
// May be sent with a data payload.
flagFIN
// RST is used to hard close a given stream.
flagRST
)
const (
// initialStreamWindow is the initial stream window size
initialStreamWindow uint32 = 256 * 1024
)
const (
// goAwayNormal is sent on a normal termination
goAwayNormal uint32 = iota
// goAwayProtoErr sent on a protocol error
goAwayProtoErr
// goAwayInternalErr sent on an internal error
goAwayInternalErr
)
const (
sizeOfVersion = 1
sizeOfType = 1
sizeOfFlags = 2
sizeOfStreamID = 4
sizeOfLength = 4
headerSize = sizeOfVersion + sizeOfType + sizeOfFlags +
sizeOfStreamID + sizeOfLength
)
type header []byte
func (h header) Version() uint8 {
return h[0]
}
func (h header) MsgType() uint8 {
return h[1]
}
func (h header) Flags() uint16 {
return binary.BigEndian.Uint16(h[2:4])
}
func (h header) StreamID() uint32 {
return binary.BigEndian.Uint32(h[4:8])
}
func (h header) Length() uint32 {
return binary.BigEndian.Uint32(h[8:12])
}
func (h header) String() string {
return fmt.Sprintf("Vsn:%d Type:%d Flags:%d StreamID:%d Length:%d",
h.Version(), h.MsgType(), h.Flags(), h.StreamID(), h.Length())
}
func (h header) encode(msgType uint8, flags uint16, streamID uint32, length uint32) {
h[0] = protoVersion
h[1] = msgType
binary.BigEndian.PutUint16(h[2:4], flags)
binary.BigEndian.PutUint32(h[4:8], streamID)
binary.BigEndian.PutUint32(h[8:12], length)
}
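An in-package sketch (something that would live in a yamux test file, since header and the constants above are unexported) showing the 12-byte header being encoded and read back through the accessors; the values are arbitrary:

```go
// exampleHeaderRoundTrip encodes a ping header and decodes its fields
// straight from the fixed 12-byte layout.
func exampleHeaderRoundTrip() string {
	hdr := header(make([]byte, headerSize))
	hdr.encode(typePing, flagSYN, 0, 42)

	// Returns "Vsn:0 Type:2 Flags:1 StreamID:0 Length:42"
	return hdr.String()
}
```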

114
vendor/github.com/hashicorp/yamux/mux.go generated vendored Normal file

@ -0,0 +1,114 @@
package yamux
import (
"fmt"
"io"
"log"
"os"
"time"
)
// Config is used to tune the Yamux session
type Config struct {
// AcceptBacklog is used to limit how many streams may be
// waiting for an accept.
AcceptBacklog int
// EnableKeepAlive is used to do periodic keep-alive
// messages using a ping.
EnableKeepAlive bool
// KeepAliveInterval is how often to perform the keep alive
KeepAliveInterval time.Duration
// ConnectionWriteTimeout is meant to be a "safety valve" timeout after
// which we will suspect a problem with the underlying connection and
// close it. This is only applied to writes, where there's generally
// an expectation that things will move along quickly.
ConnectionWriteTimeout time.Duration
// MaxStreamWindowSize is used to control the maximum
// window size that we allow for a stream.
MaxStreamWindowSize uint32
// StreamOpenTimeout is the maximum amount of time that a stream will
// be allowed to remain in pending state while waiting for an ack from the peer.
// Once the timeout is reached the session will be gracefully closed.
// A zero value disables the StreamOpenTimeout allowing unbounded
// blocking on OpenStream calls.
StreamOpenTimeout time.Duration
// StreamCloseTimeout is the maximum time that a stream will be allowed to
// be in a half-closed state when `Close` is called before forcibly
// closing the connection. Forcibly closed connections will empty the
// receive buffer, drop any future packets received for that stream,
// and send a RST to the remote side.
StreamCloseTimeout time.Duration
// LogOutput is used to control the log destination. Either Logger or
// LogOutput can be set, not both.
LogOutput io.Writer
// Logger is used to pass in the logger to be used. Either Logger or
// LogOutput can be set, not both.
Logger *log.Logger
}
// DefaultConfig is used to return a default configuration
func DefaultConfig() *Config {
return &Config{
AcceptBacklog: 256,
EnableKeepAlive: true,
KeepAliveInterval: 30 * time.Second,
ConnectionWriteTimeout: 10 * time.Second,
MaxStreamWindowSize: initialStreamWindow,
StreamCloseTimeout: 5 * time.Minute,
StreamOpenTimeout: 75 * time.Second,
LogOutput: os.Stderr,
}
}
// VerifyConfig is used to verify the sanity of configuration
func VerifyConfig(config *Config) error {
if config.AcceptBacklog <= 0 {
return fmt.Errorf("backlog must be positive")
}
if config.KeepAliveInterval == 0 {
return fmt.Errorf("keep-alive interval must be positive")
}
if config.MaxStreamWindowSize < initialStreamWindow {
return fmt.Errorf("MaxStreamWindowSize must be larger than %d", initialStreamWindow)
}
if config.LogOutput != nil && config.Logger != nil {
return fmt.Errorf("both Logger and LogOutput may not be set, select one")
} else if config.LogOutput == nil && config.Logger == nil {
return fmt.Errorf("one of Logger or LogOutput must be set, select one")
}
return nil
}
// Server is used to initialize a new server-side connection.
// There must be at most one server-side connection. If a nil config is
// provided, the DefaultConfig will be used.
func Server(conn io.ReadWriteCloser, config *Config) (*Session, error) {
if config == nil {
config = DefaultConfig()
}
if err := VerifyConfig(config); err != nil {
return nil, err
}
return newSession(config, conn, false), nil
}
// Client is used to initialize a new client-side connection.
// There must be at most one client-side connection.
func Client(conn io.ReadWriteCloser, config *Config) (*Session, error) {
if config == nil {
config = DefaultConfig()
}
if err := VerifyConfig(config); err != nil {
return nil, err
}
return newSession(config, conn, true), nil
}
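
As a quick orientation (illustrative only, not part of the vendored file), here is a minimal, self-contained sketch of wiring Client and Server over an in-memory net.Pipe; it assumes the import path github.com/hashicorp/yamux used by this vendor tree:

package main

import (
	"fmt"
	"log"
	"net"

	"github.com/hashicorp/yamux"
)

func main() {
	// net.Pipe gives us an in-memory, reliable, ordered connection pair.
	clientConn, serverConn := net.Pipe()

	// Server side: accept one stream and send a greeting on it.
	go func() {
		sess, err := yamux.Server(serverConn, nil) // nil -> DefaultConfig()
		if err != nil {
			log.Fatal(err)
		}
		stream, err := sess.AcceptStream()
		if err != nil {
			log.Fatal(err)
		}
		defer stream.Close()
		stream.Write([]byte("hello from server")) // error ignored for brevity
	}()

	// Client side: open one stream and read the greeting.
	sess, err := yamux.Client(clientConn, nil)
	if err != nil {
		log.Fatal(err)
	}
	stream, err := sess.OpenStream()
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 32)
	n, err := stream.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("client read: %q\n", buf[:n])
}

Instead of passing nil, a caller can start from DefaultConfig() and adjust fields such as KeepAliveInterval before handing the config to Client or Server.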

732
vendor/github.com/hashicorp/yamux/session.go generated vendored Normal file

@ -0,0 +1,732 @@
package yamux
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"log"
"math"
"net"
"strings"
"sync"
"sync/atomic"
"time"
)
// Session is used to wrap a reliable ordered connection and to
// multiplex it into multiple streams.
type Session struct {
// remoteGoAway indicates the remote side does
// not want further connections. Must be first for alignment.
remoteGoAway int32
// localGoAway indicates that we should stop
// accepting further connections. Must be first for alignment.
localGoAway int32
// nextStreamID is the next stream ID we should
// use. This depends on whether we are a client or a server.
nextStreamID uint32
// config holds our configuration
config *Config
// logger is used for our logs
logger *log.Logger
// conn is the underlying connection
conn io.ReadWriteCloser
// bufRead is a buffered reader
bufRead *bufio.Reader
// pings is used to track inflight pings
pings map[uint32]chan struct{}
pingID uint32
pingLock sync.Mutex
// streams maps a stream id to a stream, and inflight has an entry
// for any outgoing stream that has not yet been established. Both are
// protected by streamLock.
streams map[uint32]*Stream
inflight map[uint32]struct{}
streamLock sync.Mutex
// synCh acts like a semaphore. It is sized to the AcceptBacklog which
// is assumed to be symmetric between the client and server. This allows
// the client to avoid exceeding the backlog and instead block on the open.
synCh chan struct{}
// acceptCh is used to pass ready streams to the client
acceptCh chan *Stream
// sendCh is used to mark a stream as ready to send,
// or to send a header out directly.
sendCh chan *sendReady
// recvDoneCh is closed when recv() exits to avoid a race
// between stream registration and stream shutdown
recvDoneCh chan struct{}
sendDoneCh chan struct{}
// shutdown is used to safely close a session
shutdown bool
shutdownErr error
shutdownCh chan struct{}
shutdownLock sync.Mutex
shutdownErrLock sync.Mutex
}
// sendReady is used to either mark a stream as ready
// or to directly send a header
type sendReady struct {
Hdr []byte
mu sync.Mutex // Protects Body from unsafe reads.
Body []byte
Err chan error
}
// newSession is used to construct a new session
func newSession(config *Config, conn io.ReadWriteCloser, client bool) *Session {
logger := config.Logger
if logger == nil {
logger = log.New(config.LogOutput, "", log.LstdFlags)
}
s := &Session{
config: config,
logger: logger,
conn: conn,
bufRead: bufio.NewReader(conn),
pings: make(map[uint32]chan struct{}),
streams: make(map[uint32]*Stream),
inflight: make(map[uint32]struct{}),
synCh: make(chan struct{}, config.AcceptBacklog),
acceptCh: make(chan *Stream, config.AcceptBacklog),
sendCh: make(chan *sendReady, 64),
recvDoneCh: make(chan struct{}),
sendDoneCh: make(chan struct{}),
shutdownCh: make(chan struct{}),
}
if client {
s.nextStreamID = 1
} else {
s.nextStreamID = 2
}
go s.recv()
go s.send()
if config.EnableKeepAlive {
go s.keepalive()
}
return s
}
// IsClosed does a safe check to see if we have shutdown
func (s *Session) IsClosed() bool {
select {
case <-s.shutdownCh:
return true
default:
return false
}
}
// CloseChan returns a read-only channel which is closed as
// soon as the session is closed.
func (s *Session) CloseChan() <-chan struct{} {
return s.shutdownCh
}
// NumStreams returns the number of currently open streams
func (s *Session) NumStreams() int {
s.streamLock.Lock()
num := len(s.streams)
s.streamLock.Unlock()
return num
}
// Open is used to create a new stream as a net.Conn
func (s *Session) Open() (net.Conn, error) {
conn, err := s.OpenStream()
if err != nil {
return nil, err
}
return conn, nil
}
// OpenStream is used to create a new stream
func (s *Session) OpenStream() (*Stream, error) {
if s.IsClosed() {
return nil, ErrSessionShutdown
}
if atomic.LoadInt32(&s.remoteGoAway) == 1 {
return nil, ErrRemoteGoAway
}
// Block if we have too many inflight SYNs
select {
case s.synCh <- struct{}{}:
case <-s.shutdownCh:
return nil, ErrSessionShutdown
}
GET_ID:
// Get an ID, and check for stream exhaustion
id := atomic.LoadUint32(&s.nextStreamID)
if id >= math.MaxUint32-1 {
return nil, ErrStreamsExhausted
}
if !atomic.CompareAndSwapUint32(&s.nextStreamID, id, id+2) {
goto GET_ID
}
// Register the stream
stream := newStream(s, id, streamInit)
s.streamLock.Lock()
s.streams[id] = stream
s.inflight[id] = struct{}{}
s.streamLock.Unlock()
if s.config.StreamOpenTimeout > 0 {
go s.setOpenTimeout(stream)
}
// Send the window update to create
if err := stream.sendWindowUpdate(); err != nil {
select {
case <-s.synCh:
default:
s.logger.Printf("[ERR] yamux: aborted stream open without inflight syn semaphore")
}
return nil, err
}
return stream, nil
}
// setOpenTimeout implements a timeout for streams that are opened but not established.
// If the StreamOpenTimeout is exceeded we assume the peer is unable to ACK,
// and close the session.
// The number of running timers is bounded by the capacity of the synCh.
func (s *Session) setOpenTimeout(stream *Stream) {
timer := time.NewTimer(s.config.StreamOpenTimeout)
defer timer.Stop()
select {
case <-stream.establishCh:
return
case <-s.shutdownCh:
return
case <-timer.C:
// Timeout reached while waiting for ACK.
// Close the session to force connection re-establishment.
s.logger.Printf("[ERR] yamux: aborted stream open (destination=%s): %v", s.RemoteAddr().String(), ErrTimeout.err)
s.Close()
}
}
// Accept is used to block until the next available stream
// is ready to be accepted.
func (s *Session) Accept() (net.Conn, error) {
conn, err := s.AcceptStream()
if err != nil {
return nil, err
}
return conn, err
}
// AcceptStream is used to block until the next available stream
// is ready to be accepted.
func (s *Session) AcceptStream() (*Stream, error) {
select {
case stream := <-s.acceptCh:
if err := stream.sendWindowUpdate(); err != nil {
return nil, err
}
return stream, nil
case <-s.shutdownCh:
return nil, s.shutdownErr
}
}
// Close is used to close the session and all streams.
// Attempts to send a GoAway before closing the connection.
func (s *Session) Close() error {
s.shutdownLock.Lock()
defer s.shutdownLock.Unlock()
if s.shutdown {
return nil
}
s.shutdown = true
s.shutdownErrLock.Lock()
if s.shutdownErr == nil {
s.shutdownErr = ErrSessionShutdown
}
s.shutdownErrLock.Unlock()
close(s.shutdownCh)
s.conn.Close()
<-s.recvDoneCh
s.streamLock.Lock()
defer s.streamLock.Unlock()
for _, stream := range s.streams {
stream.forceClose()
}
<-s.sendDoneCh
return nil
}
// exitErr is used to handle an error that is causing the
// session to terminate.
func (s *Session) exitErr(err error) {
s.shutdownErrLock.Lock()
if s.shutdownErr == nil {
s.shutdownErr = err
}
s.shutdownErrLock.Unlock()
s.Close()
}
// GoAway can be used to prevent accepting further
// connections. It does not close the underlying conn.
func (s *Session) GoAway() error {
return s.waitForSend(s.goAway(goAwayNormal), nil)
}
// goAway is used to send a goAway message
func (s *Session) goAway(reason uint32) header {
atomic.SwapInt32(&s.localGoAway, 1)
hdr := header(make([]byte, headerSize))
hdr.encode(typeGoAway, 0, 0, reason)
return hdr
}
// Ping is used to measure the RTT response time
func (s *Session) Ping() (time.Duration, error) {
// Get a channel for the ping
ch := make(chan struct{})
// Get a new ping id, mark as pending
s.pingLock.Lock()
id := s.pingID
s.pingID++
s.pings[id] = ch
s.pingLock.Unlock()
// Send the ping request
hdr := header(make([]byte, headerSize))
hdr.encode(typePing, flagSYN, 0, id)
if err := s.waitForSend(hdr, nil); err != nil {
return 0, err
}
// Wait for a response
start := time.Now()
select {
case <-ch:
case <-time.After(s.config.ConnectionWriteTimeout):
s.pingLock.Lock()
delete(s.pings, id) // Ignore it if a response comes later.
s.pingLock.Unlock()
return 0, ErrTimeout
case <-s.shutdownCh:
return 0, ErrSessionShutdown
}
// Compute the RTT
return time.Since(start), nil
}
// keepalive is a long running goroutine that periodically does
// a ping to keep the connection alive.
func (s *Session) keepalive() {
for {
select {
case <-time.After(s.config.KeepAliveInterval):
_, err := s.Ping()
if err != nil {
if err != ErrSessionShutdown {
s.logger.Printf("[ERR] yamux: keepalive failed: %v", err)
s.exitErr(ErrKeepAliveTimeout)
}
return
}
case <-s.shutdownCh:
return
}
}
}
// waitForSend waits to send a header, checking for a potential shutdown
func (s *Session) waitForSend(hdr header, body []byte) error {
errCh := make(chan error, 1)
return s.waitForSendErr(hdr, body, errCh)
}
// waitForSendErr waits to send a header with optional data, checking for a
// potential shutdown. Since there's the expectation that sends can happen
// in a timely manner, we enforce the connection write timeout here.
func (s *Session) waitForSendErr(hdr header, body []byte, errCh chan error) error {
t := timerPool.Get()
timer := t.(*time.Timer)
timer.Reset(s.config.ConnectionWriteTimeout)
defer func() {
timer.Stop()
select {
case <-timer.C:
default:
}
timerPool.Put(t)
}()
ready := &sendReady{Hdr: hdr, Body: body, Err: errCh}
select {
case s.sendCh <- ready:
case <-s.shutdownCh:
return ErrSessionShutdown
case <-timer.C:
return ErrConnectionWriteTimeout
}
bodyCopy := func() {
if body == nil {
return // A nil body is ignored.
}
// In the event of session shutdown or connection write timeout,
// we need to prevent `send` from reading the body buffer after
// returning from this function since the caller may re-use the
// underlying array.
ready.mu.Lock()
defer ready.mu.Unlock()
if ready.Body == nil {
return // Body was already copied in `send`.
}
newBody := make([]byte, len(body))
copy(newBody, body)
ready.Body = newBody
}
select {
case err := <-errCh:
return err
case <-s.shutdownCh:
bodyCopy()
return ErrSessionShutdown
case <-timer.C:
bodyCopy()
return ErrConnectionWriteTimeout
}
}
// sendNoWait does a send without waiting. Since there's the expectation that
// the send happens right here, we enforce the connection write timeout if we
// can't queue the header to be sent.
func (s *Session) sendNoWait(hdr header) error {
t := timerPool.Get()
timer := t.(*time.Timer)
timer.Reset(s.config.ConnectionWriteTimeout)
defer func() {
timer.Stop()
select {
case <-timer.C:
default:
}
timerPool.Put(t)
}()
select {
case s.sendCh <- &sendReady{Hdr: hdr}:
return nil
case <-s.shutdownCh:
return ErrSessionShutdown
case <-timer.C:
return ErrConnectionWriteTimeout
}
}
// send is a long running goroutine that sends data
func (s *Session) send() {
if err := s.sendLoop(); err != nil {
s.exitErr(err)
}
}
func (s *Session) sendLoop() error {
defer close(s.sendDoneCh)
var bodyBuf bytes.Buffer
for {
bodyBuf.Reset()
select {
case ready := <-s.sendCh:
// Send a header if ready
if ready.Hdr != nil {
_, err := s.conn.Write(ready.Hdr)
if err != nil {
s.logger.Printf("[ERR] yamux: Failed to write header: %v", err)
asyncSendErr(ready.Err, err)
return err
}
}
ready.mu.Lock()
if ready.Body != nil {
// Copy the body into the buffer to avoid
// holding a mutex lock during the write.
_, err := bodyBuf.Write(ready.Body)
if err != nil {
ready.Body = nil
ready.mu.Unlock()
s.logger.Printf("[ERR] yamux: Failed to copy body into buffer: %v", err)
asyncSendErr(ready.Err, err)
return err
}
ready.Body = nil
}
ready.mu.Unlock()
if bodyBuf.Len() > 0 {
// Send data from a body if given
_, err := s.conn.Write(bodyBuf.Bytes())
if err != nil {
s.logger.Printf("[ERR] yamux: Failed to write body: %v", err)
asyncSendErr(ready.Err, err)
return err
}
}
// No error, successful send
asyncSendErr(ready.Err, nil)
case <-s.shutdownCh:
return nil
}
}
}
// recv is a long running goroutine that accepts new data
func (s *Session) recv() {
if err := s.recvLoop(); err != nil {
s.exitErr(err)
}
}
// Ensure that the index of the handler (typeData/typeWindowUpdate/etc) matches the message type
var (
handlers = []func(*Session, header) error{
typeData: (*Session).handleStreamMessage,
typeWindowUpdate: (*Session).handleStreamMessage,
typePing: (*Session).handlePing,
typeGoAway: (*Session).handleGoAway,
}
)
// recvLoop continues to receive data until a fatal error is encountered
func (s *Session) recvLoop() error {
defer close(s.recvDoneCh)
hdr := header(make([]byte, headerSize))
for {
// Read the header
if _, err := io.ReadFull(s.bufRead, hdr); err != nil {
if err != io.EOF && !strings.Contains(err.Error(), "closed") && !strings.Contains(err.Error(), "reset by peer") {
s.logger.Printf("[ERR] yamux: Failed to read header: %v", err)
}
return err
}
// Verify the version
if hdr.Version() != protoVersion {
s.logger.Printf("[ERR] yamux: Invalid protocol version: %d", hdr.Version())
return ErrInvalidVersion
}
mt := hdr.MsgType()
if mt < typeData || mt > typeGoAway {
return ErrInvalidMsgType
}
if err := handlers[mt](s, hdr); err != nil {
return err
}
}
}
// handleStreamMessage handles either a data or window update frame
func (s *Session) handleStreamMessage(hdr header) error {
// Check for a new stream creation
id := hdr.StreamID()
flags := hdr.Flags()
if flags&flagSYN == flagSYN {
if err := s.incomingStream(id); err != nil {
return err
}
}
// Get the stream
s.streamLock.Lock()
stream := s.streams[id]
s.streamLock.Unlock()
// If we do not have a stream, likely we sent a RST
if stream == nil {
// Drain any data on the wire
if hdr.MsgType() == typeData && hdr.Length() > 0 {
s.logger.Printf("[WARN] yamux: Discarding data for stream: %d", id)
if _, err := io.CopyN(ioutil.Discard, s.bufRead, int64(hdr.Length())); err != nil {
s.logger.Printf("[ERR] yamux: Failed to discard data: %v", err)
return nil
}
} else {
s.logger.Printf("[WARN] yamux: frame for missing stream: %v", hdr)
}
return nil
}
// Check if this is a window update
if hdr.MsgType() == typeWindowUpdate {
if err := stream.incrSendWindow(hdr, flags); err != nil {
if sendErr := s.sendNoWait(s.goAway(goAwayProtoErr)); sendErr != nil {
s.logger.Printf("[WARN] yamux: failed to send go away: %v", sendErr)
}
return err
}
return nil
}
// Read the new data
if err := stream.readData(hdr, flags, s.bufRead); err != nil {
if sendErr := s.sendNoWait(s.goAway(goAwayProtoErr)); sendErr != nil {
s.logger.Printf("[WARN] yamux: failed to send go away: %v", sendErr)
}
return err
}
return nil
}
// handlePing is invoked for a typePing frame
func (s *Session) handlePing(hdr header) error {
flags := hdr.Flags()
pingID := hdr.Length()
// Check if this is a query, respond back in a separate context so we
// don't interfere with the receiving thread blocking for the write.
if flags&flagSYN == flagSYN {
go func() {
hdr := header(make([]byte, headerSize))
hdr.encode(typePing, flagACK, 0, pingID)
if err := s.sendNoWait(hdr); err != nil {
s.logger.Printf("[WARN] yamux: failed to send ping reply: %v", err)
}
}()
return nil
}
// Handle a response
s.pingLock.Lock()
ch := s.pings[pingID]
if ch != nil {
delete(s.pings, pingID)
close(ch)
}
s.pingLock.Unlock()
return nil
}
// handleGoAway is invoked for a typeGoAway frame
func (s *Session) handleGoAway(hdr header) error {
code := hdr.Length()
switch code {
case goAwayNormal:
atomic.SwapInt32(&s.remoteGoAway, 1)
case goAwayProtoErr:
s.logger.Printf("[ERR] yamux: received protocol error go away")
return fmt.Errorf("yamux protocol error")
case goAwayInternalErr:
s.logger.Printf("[ERR] yamux: received internal error go away")
return fmt.Errorf("remote yamux internal error")
default:
s.logger.Printf("[ERR] yamux: received unexpected go away")
return fmt.Errorf("unexpected go away received")
}
return nil
}
// incomingStream is used to create a new incoming stream
func (s *Session) incomingStream(id uint32) error {
// Reject immediately if we are doing a go away
if atomic.LoadInt32(&s.localGoAway) == 1 {
hdr := header(make([]byte, headerSize))
hdr.encode(typeWindowUpdate, flagRST, id, 0)
return s.sendNoWait(hdr)
}
// Allocate a new stream
stream := newStream(s, id, streamSYNReceived)
s.streamLock.Lock()
defer s.streamLock.Unlock()
// Check if stream already exists
if _, ok := s.streams[id]; ok {
s.logger.Printf("[ERR] yamux: duplicate stream declared")
if sendErr := s.sendNoWait(s.goAway(goAwayProtoErr)); sendErr != nil {
s.logger.Printf("[WARN] yamux: failed to send go away: %v", sendErr)
}
return ErrDuplicateStream
}
// Register the stream
s.streams[id] = stream
// Check if we've exceeded the backlog
select {
case s.acceptCh <- stream:
return nil
default:
// Backlog exceeded! RST the stream
s.logger.Printf("[WARN] yamux: backlog exceeded, forcing connection reset")
delete(s.streams, id)
hdr := header(make([]byte, headerSize))
hdr.encode(typeWindowUpdate, flagRST, id, 0)
return s.sendNoWait(hdr)
}
}
// closeStream is used to close a stream once both sides have
// issued a close. If there was an in-flight SYN and the stream
// was not yet established, then this will give the credit back.
func (s *Session) closeStream(id uint32) {
s.streamLock.Lock()
if _, ok := s.inflight[id]; ok {
select {
case <-s.synCh:
default:
s.logger.Printf("[ERR] yamux: SYN tracking out of sync")
}
}
delete(s.streams, id)
s.streamLock.Unlock()
}
// establishStream is used to mark a stream that was in the
// SYN Sent state as established.
func (s *Session) establishStream(id uint32) {
s.streamLock.Lock()
if _, ok := s.inflight[id]; ok {
delete(s.inflight, id)
} else {
s.logger.Printf("[ERR] yamux: established stream without inflight SYN (no tracking entry)")
}
select {
case <-s.synCh:
default:
s.logger.Printf("[ERR] yamux: established stream without inflight SYN (didn't have semaphore)")
}
s.streamLock.Unlock()
}
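
To make the Session surface above concrete, here is a small illustrative helper (not part of the vendored code; the package and function names are arbitrary) that periodically measures RTT with Ping and eventually stops accepting new streams with GoAway:

package sessiondemo

import (
	"log"
	"time"

	"github.com/hashicorp/yamux"
)

// monitorSession pings the peer periodically, logging the RTT, and once the
// session has been up for maxAge it sends a GoAway so no further streams are
// accepted while existing ones continue to run.
func monitorSession(sess *yamux.Session, maxAge time.Duration) {
	deadline := time.Now().Add(maxAge)
	for !sess.IsClosed() {
		rtt, err := sess.Ping()
		if err != nil {
			log.Printf("yamux ping failed: %v", err)
			return
		}
		log.Printf("yamux rtt=%s open streams=%d", rtt, sess.NumStreams())
		if time.Now().After(deadline) {
			if err := sess.GoAway(); err != nil {
				log.Printf("yamux go away failed: %v", err)
			}
			return
		}
		time.Sleep(10 * time.Second)
	}
}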

140
vendor/github.com/hashicorp/yamux/spec.md generated vendored Normal file

@ -0,0 +1,140 @@
# Specification
We use this document to detail the internal specification of Yamux.
It serves both as a guide for implementing Yamux and as a reference for
building alternative interoperable libraries.
# Framing
Yamux uses a streaming connection underneath, but imposes a message
framing so that it can be shared between many logical streams. Each
frame contains a header like:
* Version (8 bits)
* Type (8 bits)
* Flags (16 bits)
* StreamID (32 bits)
* Length (32 bits)
This means that each header has a 12 byte overhead.
All fields are encoded in network order (big endian).
Each field is described below:
## Version Field
The version field is used for future backward compatibility. At the
current time, the field is always set to 0, to indicate the initial
version.
## Type Field
The type field is used to indicate the frame message type. The following
message types are supported:
* 0x0 Data - Used to transmit data. May transmit zero length payloads
depending on the flags.
* 0x1 Window Update - Used to update the sender's receive window size.
This is used to implement per-stream flow control.
* 0x2 Ping - Used to measure RTT. It can also be used to heart-beat
and do keep-alives over TCP.
* 0x3 Go Away - Used to close a session.
## Flag Field
The flags field is used to provide additional information related
to the message type. The following flags are supported:
* 0x1 SYN - Signals the start of a new stream. May be sent with a data or
window update message. Also sent with a ping to indicate outbound.
* 0x2 ACK - Acknowledges the start of a new stream. May be sent with a data
or window update message. Also sent with a ping to indicate response.
* 0x4 FIN - Performs a half-close of a stream. May be sent with a data
message or window update.
* 0x8 RST - Reset a stream immediately. May be sent with a data or
window update message.
## StreamID Field
The StreamID field is used to identify the logical stream the frame
is addressing. The client side should use odd IDs, and the server even IDs.
This prevents any collisions. Additionally, the 0 ID is reserved to represent
the session.
Both Ping and Go Away messages should always use the 0 StreamID.
## Length Field
The meaning of the length field depends on the message type:
* Data - provides the length of bytes following the header
* Window update - provides a delta update to the window size
* Ping - Contains an opaque value, echoed back
* Go Away - Contains an error code
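As an illustrative example (not part of the original specification): the first Data
frame of a new client stream carrying the 5-byte payload "hello" would have the
header bytes 00 00 00 01 00 00 00 01 00 00 00 05, that is version 0, type 0x0 (Data),
flags 0x0001 (SYN), StreamID 1 and Length 5, followed immediately by the 5 payload bytes.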
# Message Flow
There is no explicit connection setup, as Yamux relies on an underlying
transport to be provided. However, there is a distinction between client
and server side of the connection.
## Opening a stream
To open a stream, an initial data or window update frame is sent
with a new StreamID. The SYN flag should be set to signal a new stream.
The receiver must then reply with either a data or window update frame
with the StreamID along with the ACK flag to accept the stream or with
the RST flag to reject the stream.
Because we are relying on the reliable stream underneath, a connection
can begin sending data once the SYN flag is sent. The corresponding
ACK does not need to be received. This is particularly well suited
for an RPC system where a client wants to open a stream and immediately
fire a request without waiting for the RTT of the ACK.
This does introduce the possibility of a connection being rejected
after data has been sent already. This is a slight semantic difference
from TCP, where the connection cannot be refused after it is opened.
Clients should be prepared to handle this by checking for an error
that indicates a RST was received.
## Closing a stream
To close a stream, either side sends a data or window update frame
along with the FIN flag. This does a half-close indicating the sender
will send no further data.
Once both sides have performed the half-close, the stream is closed.
Alternatively, if an error occurs, the RST flag can be used to
hard close a stream immediately.
## Flow Control
Yamux initially starts each stream with a 256KB window size.
There is no window size for the session.
To prevent the streams from stalling, window update frames should be
sent regularly. Yamux can be configured to provide a larger limit for
window sizes. Both sides assume the initial 256KB window, but can
immediately send a window update as part of the SYN/ACK indicating a
larger window.
Both sides should track the number of bytes sent in Data frames
only, as only those bytes count against the window size.
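As a worked example (mirroring the sendWindowUpdate logic in stream.go in this same
package, so illustrative rather than normative): with the default 262144-byte window,
if a receiver has accepted 162144 bytes and the application has already read all of
them out of the buffer, the remaining window is 100000 bytes and the re-creditable
delta is 262144 - 0 - 100000 = 162144 bytes. Since that is at least half of the
maximum window (131072), a window update of 162144 is sent and the advertised window
returns to 262144.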
## Session termination
When a session is being terminated, the Go Away message should
be sent. The Length should be set to one of the following to
provide an error code:
* 0x0 Normal termination
* 0x1 Protocol error
* 0x2 Internal error

544
vendor/github.com/hashicorp/yamux/stream.go generated vendored Normal file

@ -0,0 +1,544 @@
package yamux
import (
"bytes"
"errors"
"io"
"sync"
"sync/atomic"
"time"
)
type streamState int
const (
streamInit streamState = iota
streamSYNSent
streamSYNReceived
streamEstablished
streamLocalClose
streamRemoteClose
streamClosed
streamReset
)
// Stream is used to represent a logical stream
// within a session.
type Stream struct {
recvWindow uint32
sendWindow uint32
id uint32
session *Session
state streamState
stateLock sync.Mutex
recvBuf *bytes.Buffer
recvLock sync.Mutex
controlHdr header
controlErr chan error
controlHdrLock sync.Mutex
sendHdr header
sendErr chan error
sendLock sync.Mutex
recvNotifyCh chan struct{}
sendNotifyCh chan struct{}
readDeadline atomic.Value // time.Time
writeDeadline atomic.Value // time.Time
// establishCh is notified if the stream is established or being closed.
establishCh chan struct{}
// closeTimer is set with stateLock held to honor the StreamCloseTimeout
// setting on Session.
closeTimer *time.Timer
}
// newStream is used to construct a new stream within
// a given session for an ID
func newStream(session *Session, id uint32, state streamState) *Stream {
s := &Stream{
id: id,
session: session,
state: state,
controlHdr: header(make([]byte, headerSize)),
controlErr: make(chan error, 1),
sendHdr: header(make([]byte, headerSize)),
sendErr: make(chan error, 1),
recvWindow: initialStreamWindow,
sendWindow: initialStreamWindow,
recvNotifyCh: make(chan struct{}, 1),
sendNotifyCh: make(chan struct{}, 1),
establishCh: make(chan struct{}, 1),
}
s.readDeadline.Store(time.Time{})
s.writeDeadline.Store(time.Time{})
return s
}
// Session returns the associated stream session
func (s *Stream) Session() *Session {
return s.session
}
// StreamID returns the ID of this stream
func (s *Stream) StreamID() uint32 {
return s.id
}
// Read is used to read from the stream
func (s *Stream) Read(b []byte) (n int, err error) {
defer asyncNotify(s.recvNotifyCh)
START:
s.stateLock.Lock()
switch s.state {
case streamLocalClose:
fallthrough
case streamRemoteClose:
fallthrough
case streamClosed:
s.recvLock.Lock()
if s.recvBuf == nil || s.recvBuf.Len() == 0 {
s.recvLock.Unlock()
s.stateLock.Unlock()
return 0, io.EOF
}
s.recvLock.Unlock()
case streamReset:
s.stateLock.Unlock()
return 0, ErrConnectionReset
}
s.stateLock.Unlock()
// If there is no data available, block
s.recvLock.Lock()
if s.recvBuf == nil || s.recvBuf.Len() == 0 {
s.recvLock.Unlock()
goto WAIT
}
// Read any bytes
n, _ = s.recvBuf.Read(b)
s.recvLock.Unlock()
// Send a window update potentially
err = s.sendWindowUpdate()
if err == ErrSessionShutdown {
err = nil
}
return n, err
WAIT:
var timeout <-chan time.Time
var timer *time.Timer
readDeadline := s.readDeadline.Load().(time.Time)
if !readDeadline.IsZero() {
delay := time.Until(readDeadline)
timer = time.NewTimer(delay)
timeout = timer.C
}
select {
case <-s.recvNotifyCh:
if timer != nil {
timer.Stop()
}
goto START
case <-timeout:
return 0, ErrTimeout
}
}
// Write is used to write to the stream
func (s *Stream) Write(b []byte) (n int, err error) {
s.sendLock.Lock()
defer s.sendLock.Unlock()
total := 0
for total < len(b) {
n, err := s.write(b[total:])
total += n
if err != nil {
return total, err
}
}
return total, nil
}
// write is used to write to the stream; it may return after
// a short write.
func (s *Stream) write(b []byte) (n int, err error) {
var flags uint16
var max uint32
var body []byte
START:
s.stateLock.Lock()
switch s.state {
case streamLocalClose:
fallthrough
case streamClosed:
s.stateLock.Unlock()
return 0, ErrStreamClosed
case streamReset:
s.stateLock.Unlock()
return 0, ErrConnectionReset
}
s.stateLock.Unlock()
// If there is no data available, block
window := atomic.LoadUint32(&s.sendWindow)
if window == 0 {
goto WAIT
}
// Determine the flags if any
flags = s.sendFlags()
// Send up to our send window
max = min(window, uint32(len(b)))
body = b[:max]
// Send the header
s.sendHdr.encode(typeData, flags, s.id, max)
if err = s.session.waitForSendErr(s.sendHdr, body, s.sendErr); err != nil {
if errors.Is(err, ErrSessionShutdown) || errors.Is(err, ErrConnectionWriteTimeout) {
// Message left in ready queue, header re-use is unsafe.
s.sendHdr = header(make([]byte, headerSize))
}
return 0, err
}
// Reduce our send window
atomic.AddUint32(&s.sendWindow, ^uint32(max-1))
// Unlock
return int(max), err
WAIT:
var timeout <-chan time.Time
writeDeadline := s.writeDeadline.Load().(time.Time)
if !writeDeadline.IsZero() {
delay := time.Until(writeDeadline)
timeout = time.After(delay)
}
select {
case <-s.sendNotifyCh:
goto START
case <-timeout:
return 0, ErrTimeout
}
return 0, nil
}
// sendFlags determines any flags that are appropriate
// based on the current stream state
func (s *Stream) sendFlags() uint16 {
s.stateLock.Lock()
defer s.stateLock.Unlock()
var flags uint16
switch s.state {
case streamInit:
flags |= flagSYN
s.state = streamSYNSent
case streamSYNReceived:
flags |= flagACK
s.state = streamEstablished
}
return flags
}
// sendWindowUpdate potentially sends a window update enabling
// further writes to take place. Must be invoked with the lock.
func (s *Stream) sendWindowUpdate() error {
s.controlHdrLock.Lock()
defer s.controlHdrLock.Unlock()
// Determine the delta update
max := s.session.config.MaxStreamWindowSize
var bufLen uint32
s.recvLock.Lock()
if s.recvBuf != nil {
bufLen = uint32(s.recvBuf.Len())
}
delta := (max - bufLen) - s.recvWindow
// Determine the flags if any
flags := s.sendFlags()
// Check if we can omit the update
if delta < (max/2) && flags == 0 {
s.recvLock.Unlock()
return nil
}
// Update our window
s.recvWindow += delta
s.recvLock.Unlock()
// Send the header
s.controlHdr.encode(typeWindowUpdate, flags, s.id, delta)
if err := s.session.waitForSendErr(s.controlHdr, nil, s.controlErr); err != nil {
if errors.Is(err, ErrSessionShutdown) || errors.Is(err, ErrConnectionWriteTimeout) {
// Message left in ready queue, header re-use is unsafe.
s.controlHdr = header(make([]byte, headerSize))
}
return err
}
return nil
}
// sendClose is used to send a FIN
func (s *Stream) sendClose() error {
s.controlHdrLock.Lock()
defer s.controlHdrLock.Unlock()
flags := s.sendFlags()
flags |= flagFIN
s.controlHdr.encode(typeWindowUpdate, flags, s.id, 0)
if err := s.session.waitForSendErr(s.controlHdr, nil, s.controlErr); err != nil {
if errors.Is(err, ErrSessionShutdown) || errors.Is(err, ErrConnectionWriteTimeout) {
// Message left in ready queue, header re-use is unsafe.
s.controlHdr = header(make([]byte, headerSize))
}
return err
}
return nil
}
// Close is used to close the stream
func (s *Stream) Close() error {
closeStream := false
s.stateLock.Lock()
switch s.state {
// Opened means we need to signal a close
case streamSYNSent:
fallthrough
case streamSYNReceived:
fallthrough
case streamEstablished:
s.state = streamLocalClose
goto SEND_CLOSE
case streamLocalClose:
case streamRemoteClose:
s.state = streamClosed
closeStream = true
goto SEND_CLOSE
case streamClosed:
case streamReset:
default:
panic("unhandled state")
}
s.stateLock.Unlock()
return nil
SEND_CLOSE:
// This shouldn't happen (the more realistic scenario to cancel the
// timer is via processFlags) but just in case this ever happens, we
// cancel the timer to prevent dangling timers.
if s.closeTimer != nil {
s.closeTimer.Stop()
s.closeTimer = nil
}
// If we have a StreamCloseTimeout set we start the timeout timer.
// We do this only if we're not already closing the stream since that
// means this was a graceful close.
//
// This prevents memory leaks if one side (this side) closes and the
// remote side poorly behaves and never responds with a FIN to complete
// the close. After the specified timeout, we clean our resources up no
// matter what.
if !closeStream && s.session.config.StreamCloseTimeout > 0 {
s.closeTimer = time.AfterFunc(
s.session.config.StreamCloseTimeout, s.closeTimeout)
}
s.stateLock.Unlock()
s.sendClose()
s.notifyWaiting()
if closeStream {
s.session.closeStream(s.id)
}
return nil
}
// closeTimeout is called after StreamCloseTimeout during a close to
// close this stream.
func (s *Stream) closeTimeout() {
// Close our side forcibly
s.forceClose()
// Free the stream from the session map
s.session.closeStream(s.id)
// Send a RST so the remote side closes too.
s.sendLock.Lock()
defer s.sendLock.Unlock()
hdr := header(make([]byte, headerSize))
hdr.encode(typeWindowUpdate, flagRST, s.id, 0)
s.session.sendNoWait(hdr)
}
// forceClose is used for when the session is exiting
func (s *Stream) forceClose() {
s.stateLock.Lock()
s.state = streamClosed
s.stateLock.Unlock()
s.notifyWaiting()
}
// processFlags is used to update the state of the stream
// based on set flags, if any. Lock must be held
func (s *Stream) processFlags(flags uint16) error {
s.stateLock.Lock()
defer s.stateLock.Unlock()
// Close the stream without holding the state lock
closeStream := false
defer func() {
if closeStream {
if s.closeTimer != nil {
// Stop our close timeout timer since we gracefully closed
s.closeTimer.Stop()
}
s.session.closeStream(s.id)
}
}()
if flags&flagACK == flagACK {
if s.state == streamSYNSent {
s.state = streamEstablished
}
asyncNotify(s.establishCh)
s.session.establishStream(s.id)
}
if flags&flagFIN == flagFIN {
switch s.state {
case streamSYNSent:
fallthrough
case streamSYNReceived:
fallthrough
case streamEstablished:
s.state = streamRemoteClose
s.notifyWaiting()
case streamLocalClose:
s.state = streamClosed
closeStream = true
s.notifyWaiting()
default:
s.session.logger.Printf("[ERR] yamux: unexpected FIN flag in state %d", s.state)
return ErrUnexpectedFlag
}
}
if flags&flagRST == flagRST {
s.state = streamReset
closeStream = true
s.notifyWaiting()
}
return nil
}
// notifyWaiting notifies all the waiting channels
func (s *Stream) notifyWaiting() {
asyncNotify(s.recvNotifyCh)
asyncNotify(s.sendNotifyCh)
asyncNotify(s.establishCh)
}
// incrSendWindow updates the size of our send window
func (s *Stream) incrSendWindow(hdr header, flags uint16) error {
if err := s.processFlags(flags); err != nil {
return err
}
// Increase window, unblock a sender
atomic.AddUint32(&s.sendWindow, hdr.Length())
asyncNotify(s.sendNotifyCh)
return nil
}
// readData is used to handle a data frame
func (s *Stream) readData(hdr header, flags uint16, conn io.Reader) error {
if err := s.processFlags(flags); err != nil {
return err
}
// Check that our recv window is not exceeded
length := hdr.Length()
if length == 0 {
return nil
}
// Wrap in a limited reader
conn = &io.LimitedReader{R: conn, N: int64(length)}
// Copy into buffer
s.recvLock.Lock()
if length > s.recvWindow {
s.session.logger.Printf("[ERR] yamux: receive window exceeded (stream: %d, remain: %d, recv: %d)", s.id, s.recvWindow, length)
s.recvLock.Unlock()
return ErrRecvWindowExceeded
}
if s.recvBuf == nil {
// Allocate the receive buffer just-in-time to fit the full data frame.
// This way we can read in the whole packet without further allocations.
s.recvBuf = bytes.NewBuffer(make([]byte, 0, length))
}
copiedLength, err := io.Copy(s.recvBuf, conn)
if err != nil {
s.session.logger.Printf("[ERR] yamux: Failed to read stream data: %v", err)
s.recvLock.Unlock()
return err
}
// Decrement the receive window
s.recvWindow -= uint32(copiedLength)
s.recvLock.Unlock()
// Unblock any readers
asyncNotify(s.recvNotifyCh)
return nil
}
// SetDeadline sets the read and write deadlines
func (s *Stream) SetDeadline(t time.Time) error {
if err := s.SetReadDeadline(t); err != nil {
return err
}
if err := s.SetWriteDeadline(t); err != nil {
return err
}
return nil
}
// SetReadDeadline sets the deadline for blocked and future Read calls.
func (s *Stream) SetReadDeadline(t time.Time) error {
s.readDeadline.Store(t)
asyncNotify(s.recvNotifyCh)
return nil
}
// SetWriteDeadline sets the deadline for blocked and future Write calls
func (s *Stream) SetWriteDeadline(t time.Time) error {
s.writeDeadline.Store(t)
asyncNotify(s.sendNotifyCh)
return nil
}
// Shrink is used to compact the amount of buffers utilized
// This is useful when using Yamux in a connection pool to reduce
// the idle memory utilization.
func (s *Stream) Shrink() {
s.recvLock.Lock()
if s.recvBuf != nil && s.recvBuf.Len() == 0 {
s.recvBuf = nil
}
s.recvLock.Unlock()
}
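
For completeness, a small illustrative helper (not part of the vendored file; the package and function names are arbitrary) showing a Stream being used like a net.Conn with a read deadline:

package streamdemo

import (
	"time"

	"github.com/hashicorp/yamux"
)

// readWithTimeout reads up to len(buf) bytes from the stream, giving up with
// a timeout error if nothing arrives within the supplied duration, then clears
// the deadline again so later reads are unaffected.
func readWithTimeout(stream *yamux.Stream, buf []byte, timeout time.Duration) (int, error) {
	if err := stream.SetReadDeadline(time.Now().Add(timeout)); err != nil {
		return 0, err
	}
	defer stream.SetReadDeadline(time.Time{})
	return stream.Read(buf)
}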

43
vendor/github.com/hashicorp/yamux/util.go generated vendored Normal file

@ -0,0 +1,43 @@
package yamux
import (
"sync"
"time"
)
var (
timerPool = &sync.Pool{
New: func() interface{} {
timer := time.NewTimer(time.Hour * 1e6)
timer.Stop()
return timer
},
}
)
// asyncSendErr is used to try an async send of an error
func asyncSendErr(ch chan error, err error) {
if ch == nil {
return
}
select {
case ch <- err:
default:
}
}
// asyncNotify is used to signal a waiting goroutine
func asyncNotify(ch chan struct{}) {
select {
case ch <- struct{}{}:
default:
}
}
// min computes the minimum of two values
func min(a, b uint32) uint32 {
if a < b {
return a
}
return b
}
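
Finally, a standalone sketch (illustrative only; it copies the pooled-timer pattern used by waitForSendErr and sendNoWait above rather than reusing the unexported timerPool) of how a stopped timer can be borrowed from a sync.Pool, reset for one wait, drained and returned:

package timerpooldemo

import (
	"sync"
	"time"
)

// pool holds stopped timers so hot paths do not allocate a new timer per call.
var pool = &sync.Pool{
	New: func() interface{} {
		timer := time.NewTimer(time.Hour * 1e6) // far in the future
		timer.Stop()
		return timer
	},
}

// waitOrTimeout waits for done to be closed, or gives up after d, reusing a
// pooled timer in the same way the session code above does.
func waitOrTimeout(done <-chan struct{}, d time.Duration) bool {
	t := pool.Get().(*time.Timer)
	t.Reset(d)
	defer func() {
		// Stop the timer and drain any pending tick so the next borrower
		// does not observe a stale expiry.
		t.Stop()
		select {
		case <-t.C:
		default:
		}
		pool.Put(t)
	}()
	select {
	case <-done:
		return true
	case <-t.C:
		return false
	}
}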