Mirror of https://github.com/fluxcd/flagger.git
Revendor with go mod
24
vendor/github.com/armon/go-metrics/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,24 @@
|
||||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
|
||||
*.o
|
||||
*.a
|
||||
*.so
|
||||
|
||||
# Folders
|
||||
_obj
|
||||
_test
|
||||
|
||||
# Architecture specific extensions/prefixes
|
||||
*.[568vq]
|
||||
[568vq].out
|
||||
|
||||
*.cgo1.go
|
||||
*.cgo2.c
|
||||
_cgo_defun.c
|
||||
_cgo_gotypes.go
|
||||
_cgo_export.*
|
||||
|
||||
_testmain.go
|
||||
|
||||
*.exe
|
||||
|
||||
/metrics.out
|
||||
91
vendor/github.com/armon/go-metrics/README.md
generated
vendored
Normal file
@@ -0,0 +1,91 @@
|
||||
go-metrics
|
||||
==========
|
||||
|
||||
This library provides a `metrics` package which can be used to instrument code,
|
||||
expose application metrics, and profile runtime performance in a flexible manner.
|
||||
|
||||
Current API: [GoDoc](https://godoc.org/github.com/armon/go-metrics)
|
||||
|
||||
Sinks
|
||||
-----
|
||||
|
||||
The `metrics` package makes use of a `MetricSink` interface to support delivery
|
||||
to any type of backend. Currently the following sinks are provided:
|
||||
|
||||
* StatsiteSink : Sinks to a [statsite](https://github.com/armon/statsite/) instance (TCP)
|
||||
* StatsdSink: Sinks to a [StatsD](https://github.com/etsy/statsd/) / statsite instance (UDP)
|
||||
* PrometheusSink: Sinks to a [Prometheus](http://prometheus.io/) metrics endpoint (exposed via HTTP for scrapes)
|
||||
* InmemSink : Provides in-memory aggregation, can be used to export stats
|
||||
* FanoutSink : Sinks to multiple sinks. Enables writing to multiple statsite instances for example.
|
||||
* BlackholeSink : Sinks to nowhere
|
||||
|
||||
In addition to the sinks, the `InmemSignal` can be used to catch a signal,
|
||||
and dump a formatted output of recent metrics. For example, when a process gets
|
||||
a SIGUSR1, it can dump to stderr recent performance metrics for debugging.
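
As a brief illustration of the `FanoutSink` mentioned in the list above, here is a minimal sketch (not part of the upstream README; the statsite/statsd addresses are placeholders):

```go
// Fan out every metric to both a statsite and a statsd instance.
statsite, _ := metrics.NewStatsiteSink("statsite-a:8125")
statsd, _ := metrics.NewStatsdSink("statsd-b:8125")
fanout := metrics.FanoutSink{statsite, statsd}
metrics.NewGlobal(metrics.DefaultConfig("service-name"), fanout)
```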
|
||||
|
||||
Labels
|
||||
------
|
||||
|
||||
Most metrics have an equivalent variant ending in `WithLabels`; these methods
allow pushing metrics with labels and using features of the underlying sinks
(for example, translation into Prometheus labels).
|
||||
|
||||
Since some of these labels may greatly increase the cardinality of metrics, the
library allows filtering labels using a blacklist/whitelist system
that is global to all metrics.
|
||||
|
||||
* If `Config.AllowedLabels` is not nil, then only the labels specified in this value will be sent to the underlying sink; otherwise, all labels are sent by default.
* If `Config.BlockedLabels` is not nil, any label specified in this value will not be sent to the underlying sinks.
|
||||
|
||||
By default, both `Config.AllowedLabels` and `Config.BlockedLabels` are nil, meaning that
no labels are filtered at all, but a user can globally block some high-cardinality
labels at the application level.
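
To make the filtering concrete, here is a minimal sketch (not part of the upstream README) that blocks one hypothetical high-cardinality label globally; the `request_id` label name is illustrative only, and the exact filtering behavior may vary between versions:

```go
package main

import (
	"time"

	metrics "github.com/armon/go-metrics"
)

func main() {
	// Assumed: DefaultConfig exposes the AllowedLabels/BlockedLabels fields
	// described above.
	cfg := metrics.DefaultConfig("service-name")
	cfg.BlockedLabels = []string{"request_id"} // hypothetical high-cardinality label

	inm := metrics.NewInmemSink(10*time.Second, time.Minute)
	metrics.NewGlobal(cfg, inm)

	// The request_id label is stripped by the global filter before it
	// reaches the sink; the counter itself is still recorded.
	metrics.IncrCounterWithLabels([]string{"requests"}, 1,
		[]metrics.Label{{Name: "request_id", Value: "abc123"}})
}
```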
|
||||
|
||||
Examples
|
||||
--------
|
||||
|
||||
Here is an example of using the package:
|
||||
|
||||
```go
|
||||
func SlowMethod() {
|
||||
// Profiling the runtime of a method
|
||||
defer metrics.MeasureSince([]string{"SlowMethod"}, time.Now())
|
||||
}
|
||||
|
||||
// Configure a statsite sink as the global metrics sink
|
||||
sink, _ := metrics.NewStatsiteSink("statsite:8125")
|
||||
metrics.NewGlobal(metrics.DefaultConfig("service-name"), sink)
|
||||
|
||||
// Emit a Key/Value pair
|
||||
metrics.EmitKey([]string{"questions", "meaning of life"}, 42)
|
||||
```
|
||||
|
||||
Here is an example of setting up a signal handler:
|
||||
|
||||
```go
|
||||
// Setup the inmem sink and signal handler
|
||||
inm := metrics.NewInmemSink(10*time.Second, time.Minute)
|
||||
sig := metrics.DefaultInmemSignal(inm)
|
||||
metrics.NewGlobal(metrics.DefaultConfig("service-name"), inm)
|
||||
|
||||
// Run some code
|
||||
inm.SetGauge([]string{"foo"}, 42)
|
||||
inm.EmitKey([]string{"bar"}, 30)
|
||||
|
||||
inm.IncrCounter([]string{"baz"}, 42)
|
||||
inm.IncrCounter([]string{"baz"}, 1)
|
||||
inm.IncrCounter([]string{"baz"}, 80)
|
||||
|
||||
inm.AddSample([]string{"method", "wow"}, 42)
|
||||
inm.AddSample([]string{"method", "wow"}, 100)
|
||||
inm.AddSample([]string{"method", "wow"}, 22)
|
||||
|
||||
....
|
||||
```
|
||||
|
||||
When a signal comes in, output like the following will be dumped to stderr:
|
||||
|
||||
[2014-01-28 14:57:33.04 -0800 PST][G] 'foo': 42.000
|
||||
[2014-01-28 14:57:33.04 -0800 PST][P] 'bar': 30.000
|
||||
[2014-01-28 14:57:33.04 -0800 PST][C] 'baz': Count: 3 Min: 1.000 Mean: 41.000 Max: 80.000 Stddev: 39.509
|
||||
[2014-01-28 14:57:33.04 -0800 PST][S] 'method.wow': Count: 3 Min: 22.000 Mean: 54.667 Max: 100.000 Stddev: 40.513
|
||||
21
vendor/github.com/avast/retry-go/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
# Binaries for programs and plugins
|
||||
*.exe
|
||||
*.dll
|
||||
*.so
|
||||
*.dylib
|
||||
|
||||
# Test binary, build with `go test -c`
|
||||
*.test
|
||||
|
||||
# Output of the go coverage tool, specifically when used with LiteIDE
|
||||
*.out
|
||||
|
||||
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
|
||||
.glide/
|
||||
|
||||
# dep
|
||||
vendor/
|
||||
Gopkg.lock
|
||||
|
||||
# cover
|
||||
coverage.txt
|
||||
37
vendor/github.com/avast/retry-go/.godocdown.tmpl
generated
vendored
Normal file
@@ -0,0 +1,37 @@
|
||||
# {{ .Name }}
|
||||
|
||||
[Release](https://github.com/avast/retry-go/releases/latest)
[License](LICENSE.md)
[Travis CI](https://travis-ci.org/avast/retry-go)
[AppVeyor](https://ci.appveyor.com/project/JaSei/retry-go)
[Go Report Card](https://goreportcard.com/report/github.com/avast/retry-go)
[GoDoc](http://godoc.org/github.com/avast/retry-go)
[codecov](https://codecov.io/github/avast/retry-go?branch=master)
[Sourcegraph](https://sourcegraph.com/github.com/avast/retry-go?badge)
|
||||
|
||||
{{ .EmitSynopsis }}
|
||||
|
||||
{{ .EmitUsage }}
|
||||
|
||||
## Contributing
|
||||
|
||||
Contributions are very much welcome.
|
||||
|
||||
### Makefile
|
||||
|
||||
The Makefile provides several handy rules, such as `generate` (regenerates README.md), `setup` (prepares the build/dev environment), `test`, `cover`, etc.
|
||||
|
||||
Try `make help` for more information.
|
||||
|
||||
### Before pull request
|
||||
|
||||
Please try to:
* run the tests (`make test`)
* run the linter (`make lint`)
* run `go fmt` if your IDE doesn't do it automatically (`make fmt`)
|
||||
|
||||
### README
|
||||
|
||||
README.md is generated from the [.godocdown.tmpl](.godocdown.tmpl) template and the code documentation via [godocdown](https://github.com/robertkrimen/godocdown).

Never edit README.md directly, because your changes will be lost.
|
||||
16
vendor/github.com/avast/retry-go/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.6
|
||||
- 1.7
|
||||
- 1.8
|
||||
- 1.9
|
||||
|
||||
install:
|
||||
- make setup
|
||||
|
||||
script:
|
||||
- make ci
|
||||
|
||||
after_success:
|
||||
- bash <(curl -s https://codecov.io/bash)
|
||||
3
vendor/github.com/avast/retry-go/Gopkg.toml
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
[[constraint]]
|
||||
name = "github.com/stretchr/testify"
|
||||
version = "1.1.4"
|
||||
66
vendor/github.com/avast/retry-go/Makefile
generated
vendored
Normal file
@@ -0,0 +1,66 @@
|
||||
SOURCE_FILES?=$$(go list ./... | grep -v /vendor/)
|
||||
TEST_PATTERN?=.
|
||||
TEST_OPTIONS?=
|
||||
DEP?=$$(which dep)
|
||||
VERSION?=$$(cat VERSION)
|
||||
|
||||
ifeq ($(OS),Windows_NT)
|
||||
DEP_VERS=dep-windows-amd64
|
||||
else
|
||||
DEP_VERS=dep-linux-amd64
|
||||
endif
|
||||
|
||||
setup: ## Install all the build and lint dependencies
|
||||
# fix of gopkg.in issue (https://github.com/niemeyer/gopkg/issues/50)
|
||||
git config --global http.https://gopkg.in.followRedirects true
|
||||
go get -u gopkg.in/alecthomas/gometalinter.v1
|
||||
go get -u github.com/pierrre/gotestcover
|
||||
go get -u golang.org/x/tools/cmd/cover
|
||||
go get -u github.com/robertkrimen/godocdown/godocdown
|
||||
gometalinter.v1 --install
|
||||
@if [ "$(DEP)" = "" ]; then\
|
||||
curl -L https://github.com/golang/dep/releases/download/v0.3.1/$(DEP_VERS) >| $$GOPATH/bin/dep;\
|
||||
chmod +x $$GOPATH/bin/dep;\
|
||||
fi
|
||||
dep ensure
|
||||
|
||||
generate: ## Generate README.md
|
||||
godocdown >| README.md
|
||||
|
||||
test: generate ## Run all the tests
|
||||
gotestcover $(TEST_OPTIONS) -covermode=atomic -coverprofile=coverage.txt $(SOURCE_FILES) -run $(TEST_PATTERN) -timeout=2m
|
||||
|
||||
cover: test ## Run all the tests and opens the coverage report
|
||||
go tool cover -html=coverage.txt
|
||||
|
||||
fmt: ## gofmt and goimports all go files
|
||||
find . -name '*.go' -not -wholename './vendor/*' | while read -r file; do gofmt -w -s "$$file"; goimports -w "$$file"; done
|
||||
|
||||
lint: ## Run all the linters
|
||||
gometalinter.v1 --vendor --disable-all \
|
||||
--enable=deadcode \
|
||||
--enable=ineffassign \
|
||||
--enable=gosimple \
|
||||
--enable=staticcheck \
|
||||
--enable=gofmt \
|
||||
--enable=goimports \
|
||||
--enable=dupl \
|
||||
--enable=misspell \
|
||||
--enable=errcheck \
|
||||
--enable=vet \
|
||||
--deadline=10m \
|
||||
./...
|
||||
|
||||
ci: test lint ## Run all the tests and code checks
|
||||
|
||||
build:
|
||||
go build
|
||||
|
||||
release: ## Release new version
|
||||
git tag | grep -q $(VERSION) && echo This version was released! Increase VERSION! || git tag $(VERSION) && git push origin $(VERSION) && git tag v$(VERSION) && git push origin v$(VERSION)
|
||||
|
||||
# Absolutely awesome: http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html
|
||||
help:
|
||||
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
|
||||
|
||||
.DEFAULT_GOAL := build
|
||||
252
vendor/github.com/avast/retry-go/README.md
generated
vendored
Normal file
@@ -0,0 +1,252 @@
|
||||
# retry
|
||||
|
||||
[Release](https://github.com/avast/retry-go/releases/latest)
[License](LICENSE.md)
[Travis CI](https://travis-ci.org/avast/retry-go)
[AppVeyor](https://ci.appveyor.com/project/JaSei/retry-go)
[Go Report Card](https://goreportcard.com/report/github.com/avast/retry-go)
[GoDoc](http://godoc.org/github.com/avast/retry-go)
[codecov](https://codecov.io/github/avast/retry-go?branch=master)
[Sourcegraph](https://sourcegraph.com/github.com/avast/retry-go?badge)
|
||||
|
||||
Simple library for retry mechanism
|
||||
|
||||
slightly inspired by
|
||||
[Try::Tiny::Retry](https://metacpan.org/pod/Try::Tiny::Retry)
|
||||
|
||||
|
||||
### SYNOPSIS
|
||||
|
||||
HTTP GET with retry:
|
||||
|
||||
url := "http://example.com"
|
||||
var body []byte
|
||||
|
||||
err := retry.Do(
|
||||
func() error {
|
||||
resp, err := http.Get(url)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
body, err = ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
)
|
||||
|
||||
fmt.Println(body)
|
||||
|
||||
[next examples](https://github.com/avast/retry-go/tree/master/examples)
|
||||
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [giantswarm/retry-go](https://github.com/giantswarm/retry-go) - slightly
|
||||
complicated interface.
|
||||
|
||||
* [sethgrid/pester](https://github.com/sethgrid/pester) - only http retry for
|
||||
http calls with retries and backoff
|
||||
|
||||
* [cenkalti/backoff](https://github.com/cenkalti/backoff) - Go port of the
|
||||
exponential backoff algorithm from Google's HTTP Client Library for Java. Really
|
||||
complicated interface.
|
||||
|
||||
* [rafaeljesus/retry-go](https://github.com/rafaeljesus/retry-go) - looks good,
fairly similar to this package, but has no 'simple' `Retry` method
|
||||
|
||||
* [matryer/try](https://github.com/matryer/try) - very popular package,
|
||||
nonintuitive interface (for me)
|
||||
|
||||
|
||||
### BREAKING CHANGES
|
||||
|
||||
1.0.2 -> 2.0.0
|
||||
|
||||
* the argument of `retry.Delay` is the final delay (it is no longer multiplied by
`retry.Units`)
|
||||
|
||||
* the `retry.Units` function was removed
|
||||
|
||||
* [more about this breaking change](https://github.com/avast/retry-go/issues/7)
|
||||
|
||||
0.3.0 -> 1.0.0
|
||||
|
||||
* the `retry.Retry` function was renamed to `retry.Do`
|
||||
|
||||
* the `retry.RetryCustom` (OnRetry) and `retry.RetryCustomWithOpts` functions are
now implemented via functions that produce Options (e.g. `retry.OnRetry`)
|
||||
|
||||
## Usage
|
||||
|
||||
#### func BackOffDelay
|
||||
|
||||
```go
|
||||
func BackOffDelay(n uint, config *config) time.Duration
|
||||
```
|
||||
BackOffDelay is a DelayType which increases delay between consecutive retries
|
||||
|
||||
#### func Do
|
||||
|
||||
```go
|
||||
func Do(retryableFunc RetryableFunc, opts ...Option) error
|
||||
```
|
||||
|
||||
#### func FixedDelay
|
||||
|
||||
```go
|
||||
func FixedDelay(_ uint, config *config) time.Duration
|
||||
```
|
||||
FixedDelay is a DelayType which keeps delay the same through all iterations
|
||||
|
||||
#### type DelayTypeFunc
|
||||
|
||||
```go
|
||||
type DelayTypeFunc func(n uint, config *config) time.Duration
|
||||
```
|
||||
|
||||
|
||||
#### type Error
|
||||
|
||||
```go
|
||||
type Error []error
|
||||
```
|
||||
|
||||
The Error type represents the list of errors collected across retry attempts
|
||||
|
||||
#### func (Error) Error
|
||||
|
||||
```go
|
||||
func (e Error) Error() string
|
||||
```
|
||||
The Error method returns a string representation of the Error. It is an
implementation of the error interface.
|
||||
|
||||
#### func (Error) WrappedErrors
|
||||
|
||||
```go
|
||||
func (e Error) WrappedErrors() []error
|
||||
```
|
||||
WrappedErrors returns the list of errors that this Error is wrapping. It is an
|
||||
implementation of the `errwrap.Wrapper` interface in package
|
||||
[errwrap](https://github.com/hashicorp/errwrap) so that `retry.Error` can be
|
||||
used with that library.
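
As a hedged sketch (not part of the generated docs) of inspecting the collected errors, assuming `Do` returns a `retry.Error` value when all attempts fail:

    err := retry.Do(
    	func() error {
    		return errors.New("boom") // always fails, so all attempts are used
    	},
    	retry.Attempts(3),
    )
    if retryErr, ok := err.(retry.Error); ok {
    	// One wrapped error per attempt, three in this sketch.
    	fmt.Println(len(retryErr.WrappedErrors()))
    }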
|
||||
|
||||
#### type OnRetryFunc
|
||||
|
||||
```go
|
||||
type OnRetryFunc func(n uint, err error)
|
||||
```
|
||||
|
||||
Function signature of an OnRetry callback; n is the attempt count
|
||||
|
||||
#### type Option
|
||||
|
||||
```go
|
||||
type Option func(*config)
|
||||
```
|
||||
|
||||
Option represents an option for retry.
|
||||
|
||||
#### func Attempts
|
||||
|
||||
```go
|
||||
func Attempts(attempts uint) Option
|
||||
```
|
||||
Attempts sets the retry count; the default is 10
|
||||
|
||||
#### func Delay
|
||||
|
||||
```go
|
||||
func Delay(delay time.Duration) Option
|
||||
```
|
||||
Delay sets the delay between retries; the default is 100ms
|
||||
|
||||
#### func DelayType
|
||||
|
||||
```go
|
||||
func DelayType(delayType DelayTypeFunc) Option
|
||||
```
|
||||
DelayType sets the type of delay between retries; the default is BackOff
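
For illustration, the options above can be combined in a single `Do` call; a sketch (not part of the generated docs), where `doSomething` is a placeholder:

    err := retry.Do(
    	doSomething, // placeholder func() error
    	retry.Attempts(5),
    	retry.Delay(200*time.Millisecond),
    	retry.DelayType(retry.FixedDelay),
    )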
|
||||
|
||||
#### func OnRetry
|
||||
|
||||
```go
|
||||
func OnRetry(onRetry OnRetryFunc) Option
|
||||
```
|
||||
OnRetry registers a callback that is called on each retry.

Example of logging each retry:
|
||||
|
||||
retry.Do(
|
||||
func() error {
|
||||
return errors.New("some error")
|
||||
},
|
||||
retry.OnRetry(func(n uint, err error) {
|
||||
log.Printf("#%d: %s\n", n, err)
|
||||
}),
|
||||
)
|
||||
|
||||
#### func RetryIf
|
||||
|
||||
```go
|
||||
func RetryIf(retryIf RetryIfFunc) Option
|
||||
```
|
||||
RetryIf controls whether a retry should be attempted after an error (assuming
|
||||
there are any retry attempts remaining)
|
||||
|
||||
Example of skipping the retry on a particular error:
|
||||
|
||||
retry.Do(
|
||||
func() error {
|
||||
return errors.New("special error")
|
||||
},
|
||||
retry.RetryIf(func(err error) bool {
|
||||
if err.Error() == "special error" {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
})
|
||||
)
|
||||
|
||||
#### type RetryIfFunc
|
||||
|
||||
```go
|
||||
type RetryIfFunc func(error) bool
|
||||
```
|
||||
|
||||
Function signature of a retry-if function
|
||||
|
||||
#### type RetryableFunc
|
||||
|
||||
```go
|
||||
type RetryableFunc func() error
|
||||
```
|
||||
|
||||
Function signature of a retryable function
|
||||
|
||||
## Contributing
|
||||
|
||||
Contributions are very much welcome.
|
||||
|
||||
### Makefile
|
||||
|
||||
The Makefile provides several handy rules, such as `generate` (regenerates README.md), `setup` (prepares the build/dev environment), `test`, `cover`, etc.
|
||||
|
||||
Try `make help` for more information.
|
||||
|
||||
### Before pull request
|
||||
|
||||
Please try to:
* run the tests (`make test`)
* run the linter (`make lint`)
* run `go fmt` if your IDE doesn't do it automatically (`make fmt`)
|
||||
|
||||
### README
|
||||
|
||||
README.md is generated from the [.godocdown.tmpl](.godocdown.tmpl) template and the code documentation via [godocdown](https://github.com/robertkrimen/godocdown).

Never edit README.md directly, because your changes will be lost.
|
||||
1
vendor/github.com/avast/retry-go/VERSION
generated
vendored
Normal file
@@ -0,0 +1 @@
|
||||
2.0.0
|
||||
19
vendor/github.com/avast/retry-go/appveyor.yml
generated
vendored
Normal file
@@ -0,0 +1,19 @@
|
||||
version: "{build}"
|
||||
|
||||
clone_folder: c:\Users\appveyor\go\src\github.com\avast\retry-go
|
||||
|
||||
#os: Windows Server 2012 R2
|
||||
platform: x64
|
||||
|
||||
install:
|
||||
- copy c:\MinGW\bin\mingw32-make.exe c:\MinGW\bin\make.exe
|
||||
- set GOPATH=C:\Users\appveyor\go
|
||||
- set PATH=%PATH%;c:\MinGW\bin
|
||||
- set PATH=%PATH%;%GOPATH%\bin;c:\go\bin
|
||||
- set GOBIN=%GOPATH%\bin
|
||||
- go version
|
||||
- go env
|
||||
- make setup
|
||||
|
||||
build_script:
|
||||
- make ci
|
||||
2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt
generated
vendored
Normal file
File diff suppressed because it is too large
16
vendor/github.com/evanphx/json-patch/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.8
|
||||
- 1.7
|
||||
|
||||
install:
|
||||
- if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
|
||||
- go get github.com/jessevdk/go-flags
|
||||
|
||||
script:
|
||||
- go get
|
||||
- go test -cover ./...
|
||||
|
||||
notifications:
|
||||
email: false
|
||||
292
vendor/github.com/evanphx/json-patch/README.md
generated
vendored
Normal file
@@ -0,0 +1,292 @@
|
||||
# JSON-Patch
|
||||
`jsonpatch` is a library which provides functionality for applying
[RFC6902 JSON patches](http://tools.ietf.org/html/rfc6902) against documents, as
well as for calculating & applying [RFC7396 JSON merge patches](https://tools.ietf.org/html/rfc7396).
|
||||
|
||||
[GoDoc](http://godoc.org/github.com/evanphx/json-patch)
[Build Status](https://travis-ci.org/evanphx/json-patch)
[Go Report Card](https://goreportcard.com/report/github.com/evanphx/json-patch)
|
||||
|
||||
# Get It!
|
||||
|
||||
**Latest and greatest**:
|
||||
```bash
|
||||
go get -u github.com/evanphx/json-patch
|
||||
```
|
||||
|
||||
**Stable Versions**:
|
||||
* Version 4: `go get -u gopkg.in/evanphx/json-patch.v4`
|
||||
|
||||
(previous versions below `v3` are unavailable)
|
||||
|
||||
# Use It!
|
||||
* [Create and apply a merge patch](#create-and-apply-a-merge-patch)
|
||||
* [Create and apply a JSON Patch](#create-and-apply-a-json-patch)
|
||||
* [Comparing JSON documents](#comparing-json-documents)
|
||||
* [Combine merge patches](#combine-merge-patches)
|
||||
|
||||
|
||||
# Configuration
|
||||
|
||||
There is a single global configuration variable, `jsonpatch.SupportNegativeIndices`. This
|
||||
defaults to `true` and enables the non-standard practice of allowing negative indices
|
||||
to mean indices starting at the end of an array. This functionality can be disabled
|
||||
by setting `jsonpatch.SupportNegativeIndices = false`.
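
As a brief sketch (not from the upstream README) of toggling this flag, assuming that with negative indices disabled a path such as `/tags/-1` is rejected rather than treated as the last array element:

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// Disable the non-standard negative-index behaviour globally.
	jsonpatch.SupportNegativeIndices = false

	patch, err := jsonpatch.DecodePatch([]byte(`[{"op": "remove", "path": "/tags/-1"}]`))
	if err != nil {
		panic(err)
	}

	// With the flag disabled this should fail instead of removing "c".
	_, err = patch.Apply([]byte(`{"tags": ["a", "b", "c"]}`))
	fmt.Println(err)
}
```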
|
||||
|
||||
## Create and apply a merge patch
|
||||
Given both an original JSON document and a modified JSON document, you can create
|
||||
a [Merge Patch](https://tools.ietf.org/html/rfc7396) document.
|
||||
|
||||
It can describe the changes needed to convert from the original to the
|
||||
modified JSON document.
|
||||
|
||||
Once you have a merge patch, you can apply it to other JSON documents using the
|
||||
`jsonpatch.MergePatch(document, patch)` function.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
jsonpatch "github.com/evanphx/json-patch"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Let's create a merge patch from these two documents...
|
||||
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
|
||||
target := []byte(`{"name": "Jane", "age": 24}`)
|
||||
|
||||
patch, err := jsonpatch.CreateMergePatch(original, target)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Now lets apply the patch against a different JSON document...
|
||||
|
||||
alternative := []byte(`{"name": "Tina", "age": 28, "height": 3.75}`)
|
||||
modifiedAlternative, err := jsonpatch.MergePatch(alternative, patch)
|
||||
|
||||
fmt.Printf("patch document: %s\n", patch)
|
||||
fmt.Printf("updated alternative doc: %s\n", modifiedAlternative)
|
||||
}
|
||||
```
|
||||
|
||||
When run, you get the following output:
|
||||
|
||||
```bash
|
||||
$ go run main.go
|
||||
patch document: {"height":null,"name":"Jane"}
|
||||
updated alternative doc: {"age":28,"name":"Jane"}
|
||||
```
|
||||
|
||||
## Create and apply a JSON Patch
|
||||
You can create patch objects using `DecodePatch([]byte)`, which can then
|
||||
be applied against JSON documents.
|
||||
|
||||
The following is an example of creating a patch from two operations, and
|
||||
applying it against a JSON document.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
jsonpatch "github.com/evanphx/json-patch"
|
||||
)
|
||||
|
||||
func main() {
|
||||
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
|
||||
patchJSON := []byte(`[
|
||||
{"op": "replace", "path": "/name", "value": "Jane"},
|
||||
{"op": "remove", "path": "/height"}
|
||||
]`)
|
||||
|
||||
patch, err := jsonpatch.DecodePatch(patchJSON)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
modified, err := patch.Apply(original)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
fmt.Printf("Original document: %s\n", original)
|
||||
fmt.Printf("Modified document: %s\n", modified)
|
||||
}
|
||||
```
|
||||
|
||||
When run, you get the following output:
|
||||
|
||||
```bash
|
||||
$ go run main.go
|
||||
Original document: {"name": "John", "age": 24, "height": 3.21}
|
||||
Modified document: {"age":24,"name":"Jane"}
|
||||
```
|
||||
|
||||
## Comparing JSON documents
|
||||
Due to potential whitespace and ordering differences, one cannot simply compare
|
||||
JSON strings or byte-arrays directly.
|
||||
|
||||
As such, you can instead use `jsonpatch.Equal(document1, document2)` to
|
||||
determine if two JSON documents are _structurally_ equal. This ignores
|
||||
whitespace differences, and key-value ordering.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
jsonpatch "github.com/evanphx/json-patch"
|
||||
)
|
||||
|
||||
func main() {
|
||||
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
|
||||
similar := []byte(`
|
||||
{
|
||||
"age": 24,
|
||||
"height": 3.21,
|
||||
"name": "John"
|
||||
}
|
||||
`)
|
||||
different := []byte(`{"name": "Jane", "age": 20, "height": 3.37}`)
|
||||
|
||||
if jsonpatch.Equal(original, similar) {
|
||||
fmt.Println(`"original" is structurally equal to "similar"`)
|
||||
}
|
||||
|
||||
if !jsonpatch.Equal(original, different) {
|
||||
fmt.Println(`"original" is _not_ structurally equal to "similar"`)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
When run, you get the following output:
|
||||
```bash
|
||||
$ go run main.go
|
||||
"original" is structurally equal to "similar"
|
||||
"original" is _not_ structurally equal to "similar"
|
||||
```
|
||||
|
||||
## Combine merge patches
|
||||
Given two JSON merge patch documents, it is possible to combine them into a
single merge patch which can describe both sets of changes.
|
||||
|
||||
The resulting merge patch can be used such that applying it results in a
document structurally similar to applying each merge patch to the document
in succession.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
jsonpatch "github.com/evanphx/json-patch"
|
||||
)
|
||||
|
||||
func main() {
|
||||
original := []byte(`{"name": "John", "age": 24, "height": 3.21}`)
|
||||
|
||||
nameAndHeight := []byte(`{"height":null,"name":"Jane"}`)
|
||||
ageAndEyes := []byte(`{"age":4.23,"eyes":"blue"}`)
|
||||
|
||||
// Let's combine these merge patch documents...
|
||||
combinedPatch, err := jsonpatch.MergeMergePatches(nameAndHeight, ageAndEyes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Apply each patch individually against the original document
|
||||
withoutCombinedPatch, err := jsonpatch.MergePatch(original, nameAndHeight)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
withoutCombinedPatch, err = jsonpatch.MergePatch(withoutCombinedPatch, ageAndEyes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Apply the combined patch against the original document
|
||||
|
||||
withCombinedPatch, err := jsonpatch.MergePatch(original, combinedPatch)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Do both result in the same thing? They should!
|
||||
if jsonpatch.Equal(withCombinedPatch, withoutCombinedPatch) {
|
||||
fmt.Println("Both JSON documents are structurally the same!")
|
||||
}
|
||||
|
||||
fmt.Printf("combined merge patch: %s", combinedPatch)
|
||||
}
|
||||
```
|
||||
|
||||
When run, you get the following output:
|
||||
```bash
|
||||
$ go run main.go
|
||||
Both JSON documents are structurally the same!
|
||||
combined merge patch: {"age":4.23,"eyes":"blue","height":null,"name":"Jane"}
|
||||
```
|
||||
|
||||
# CLI for comparing JSON documents
|
||||
You can install the commandline program `json-patch`.
|
||||
|
||||
This program can take multiple JSON patch documents as arguments
and reads a JSON document from `stdin`. It will apply the patch(es) against
the document and output the modified doc.
|
||||
|
||||
**patch.1.json**
|
||||
```json
|
||||
[
|
||||
{"op": "replace", "path": "/name", "value": "Jane"},
|
||||
{"op": "remove", "path": "/height"}
|
||||
]
|
||||
```
|
||||
|
||||
**patch.2.json**
|
||||
```json
|
||||
[
|
||||
{"op": "add", "path": "/address", "value": "123 Main St"},
|
||||
{"op": "replace", "path": "/age", "value": "21"}
|
||||
]
|
||||
```
|
||||
|
||||
**document.json**
|
||||
```json
|
||||
{
|
||||
"name": "John",
|
||||
"age": 24,
|
||||
"height": 3.21
|
||||
}
|
||||
```
|
||||
|
||||
You can then run:
|
||||
|
||||
```bash
|
||||
$ go install github.com/evanphx/json-patch/cmd/json-patch
|
||||
$ cat document.json | json-patch -p patch.1.json -p patch.2.json
|
||||
{"address":"123 Main St","age":"21","name":"Jane"}
|
||||
```
|
||||
|
||||
# Help It!
|
||||
Contributions are welcome! Leave [an issue](https://github.com/evanphx/json-patch/issues)
|
||||
or [create a PR](https://github.com/evanphx/json-patch/compare).
|
||||
|
||||
|
||||
Before creating a pull request, we'd ask that you make sure tests are passing
|
||||
and that you have added new tests when applicable.
|
||||
|
||||
Contributors can run tests using:
|
||||
|
||||
```bash
|
||||
go test -cover ./...
|
||||
```
|
||||
|
||||
Builds for pull requests are tested automatically
|
||||
using [TravisCI](https://travis-ci.org/evanphx/json-patch).
|
||||
20
vendor/github.com/ghodss/yaml/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,20 @@
|
||||
# OSX leaves these everywhere on SMB shares
|
||||
._*
|
||||
|
||||
# Eclipse files
|
||||
.classpath
|
||||
.project
|
||||
.settings/**
|
||||
|
||||
# Emacs save files
|
||||
*~
|
||||
|
||||
# Vim-related files
|
||||
[._]*.s[a-w][a-z]
|
||||
[._]s[a-w][a-z]
|
||||
*.un~
|
||||
Session.vim
|
||||
.netrwhist
|
||||
|
||||
# Go test binaries
|
||||
*.test
|
||||
7
vendor/github.com/ghodss/yaml/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,7 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.3
|
||||
- 1.4
|
||||
script:
|
||||
- go test
|
||||
- go build
|
||||
121
vendor/github.com/ghodss/yaml/README.md
generated
vendored
Normal file
@@ -0,0 +1,121 @@
|
||||
# YAML marshaling and unmarshaling support for Go
|
||||
|
||||
[Build Status](https://travis-ci.org/ghodss/yaml)
|
||||
|
||||
## Introduction
|
||||
|
||||
A wrapper around [go-yaml](https://github.com/go-yaml/yaml) designed to enable a better way of handling YAML when marshaling to and from structs.
|
||||
|
||||
In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, [see this blog post](http://ghodss.com/2014/the-right-way-to-handle-yaml-in-golang/).
|
||||
|
||||
## Compatibility
|
||||
|
||||
This package uses [go-yaml](https://github.com/go-yaml/yaml) and therefore supports [everything go-yaml supports](https://github.com/go-yaml/yaml#compatibility).
|
||||
|
||||
## Caveats
|
||||
|
||||
**Caveat #1:** When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example:
|
||||
|
||||
```
|
||||
BAD:
|
||||
exampleKey: !!binary gIGC
|
||||
|
||||
GOOD:
|
||||
exampleKey: gIGC
|
||||
... and decode the base64 data in your code.
|
||||
```
|
||||
|
||||
**Caveat #2:** When using `YAMLToJSON` directly, maps whose keys are themselves maps will result in an error, since this is not supported by JSON. The same error occurs in `Unmarshal`, because you can't unmarshal into map keys anyway (struct fields can't be keys).
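
A minimal sketch (not from the upstream README) of that error case, assuming go-yaml parses the flow mapping `{a: 1}` into a map-typed key:

```go
// The key of this YAML mapping is itself a map, which JSON cannot represent,
// so the conversion is expected to fail.
_, err := yaml.YAMLToJSON([]byte("{a: 1}: some value\n"))
fmt.Println(err)
```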
|
||||
|
||||
## Installation and usage
|
||||
|
||||
To install, run:
|
||||
|
||||
```
|
||||
$ go get github.com/ghodss/yaml
|
||||
```
|
||||
|
||||
And import using:
|
||||
|
||||
```
|
||||
import "github.com/ghodss/yaml"
|
||||
```
|
||||
|
||||
Usage is very similar to the JSON library:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
)
|
||||
|
||||
type Person struct {
|
||||
Name string `json:"name"` // Affects YAML field names too.
|
||||
Age int `json:"age"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
// Marshal a Person struct to YAML.
|
||||
p := Person{"John", 30}
|
||||
y, err := yaml.Marshal(p)
|
||||
if err != nil {
|
||||
fmt.Printf("err: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(y))
|
||||
/* Output:
|
||||
age: 30
|
||||
name: John
|
||||
*/
|
||||
|
||||
// Unmarshal the YAML back into a Person struct.
|
||||
var p2 Person
|
||||
err = yaml.Unmarshal(y, &p2)
|
||||
if err != nil {
|
||||
fmt.Printf("err: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(p2)
|
||||
/* Output:
|
||||
{John 30}
|
||||
*/
|
||||
}
|
||||
```
|
||||
|
||||
The `yaml.YAMLToJSON` and `yaml.JSONToYAML` functions are also available:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
)
|
||||
|
||||
func main() {
|
||||
j := []byte(`{"name": "John", "age": 30}`)
|
||||
y, err := yaml.JSONToYAML(j)
|
||||
if err != nil {
|
||||
fmt.Printf("err: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(y))
|
||||
/* Output:
|
||||
name: John
|
||||
age: 30
|
||||
*/
|
||||
j2, err := yaml.YAMLToJSON(y)
|
||||
if err != nil {
|
||||
fmt.Printf("err: %v\n", err)
|
||||
return
|
||||
}
|
||||
fmt.Println(string(j2))
|
||||
/* Output:
|
||||
{"age":30,"name":"John"}
|
||||
*/
|
||||
}
|
||||
```
|
||||
5
vendor/github.com/gogo/protobuf/GOLANG_CONTRIBUTORS
generated
vendored
@@ -1,5 +0,0 @@
|
||||
The contributors to the Go protobuf repository:
|
||||
|
||||
# This source code was written by the Go contributors.
|
||||
# The master list of contributors is in the main Go distribution,
|
||||
# visible at http://tip.golang.org/CONTRIBUTORS.
|
||||
37
vendor/github.com/gogo/protobuf/gogoproto/Makefile
generated
vendored
Normal file
@@ -0,0 +1,37 @@
|
||||
# Protocol Buffers for Go with Gadgets
|
||||
#
|
||||
# Copyright (c) 2013, The GoGo Authors. All rights reserved.
|
||||
# http://github.com/gogo/protobuf
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions are
|
||||
# met:
|
||||
#
|
||||
# * Redistributions of source code must retain the above copyright
|
||||
# notice, this list of conditions and the following disclaimer.
|
||||
# * Redistributions in binary form must reproduce the above
|
||||
# copyright notice, this list of conditions and the following disclaimer
|
||||
# in the documentation and/or other materials provided with the
|
||||
# distribution.
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
regenerate:
|
||||
go install github.com/gogo/protobuf/protoc-gen-gogo
|
||||
protoc --gogo_out=Mgoogle/protobuf/descriptor.proto=github.com/gogo/protobuf/protoc-gen-gogo/descriptor:../../../../ --proto_path=../../../../:../protobuf/:. *.proto
|
||||
|
||||
restore:
|
||||
cp gogo.pb.golden gogo.pb.go
|
||||
|
||||
preserve:
|
||||
cp gogo.pb.go gogo.pb.golden
|
||||
45
vendor/github.com/gogo/protobuf/gogoproto/gogo.pb.golden
generated
vendored
Normal file
@@ -0,0 +1,45 @@
|
||||
// Code generated by protoc-gen-go.
|
||||
// source: gogo.proto
|
||||
// DO NOT EDIT!
|
||||
|
||||
package gogoproto
|
||||
|
||||
import proto "github.com/gogo/protobuf/proto"
|
||||
import json "encoding/json"
|
||||
import math "math"
|
||||
import google_protobuf "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
|
||||
|
||||
// Reference proto, json, and math imports to suppress error if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = &json.SyntaxError{}
|
||||
var _ = math.Inf
|
||||
|
||||
var E_Nullable = &proto.ExtensionDesc{
|
||||
ExtendedType: (*google_protobuf.FieldOptions)(nil),
|
||||
ExtensionType: (*bool)(nil),
|
||||
Field: 51235,
|
||||
Name: "gogoproto.nullable",
|
||||
Tag: "varint,51235,opt,name=nullable",
|
||||
}
|
||||
|
||||
var E_Embed = &proto.ExtensionDesc{
|
||||
ExtendedType: (*google_protobuf.FieldOptions)(nil),
|
||||
ExtensionType: (*bool)(nil),
|
||||
Field: 51236,
|
||||
Name: "gogoproto.embed",
|
||||
Tag: "varint,51236,opt,name=embed",
|
||||
}
|
||||
|
||||
var E_Customtype = &proto.ExtensionDesc{
|
||||
ExtendedType: (*google_protobuf.FieldOptions)(nil),
|
||||
ExtensionType: (*string)(nil),
|
||||
Field: 51237,
|
||||
Name: "gogoproto.customtype",
|
||||
Tag: "bytes,51237,opt,name=customtype",
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterExtension(E_Nullable)
|
||||
proto.RegisterExtension(E_Embed)
|
||||
proto.RegisterExtension(E_Customtype)
|
||||
}
|
||||
144
vendor/github.com/gogo/protobuf/gogoproto/gogo.proto
generated
vendored
Normal file
@@ -0,0 +1,144 @@
|
||||
// Protocol Buffers for Go with Gadgets
|
||||
//
|
||||
// Copyright (c) 2013, The GoGo Authors. All rights reserved.
|
||||
// http://github.com/gogo/protobuf
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
syntax = "proto2";
|
||||
package gogoproto;
|
||||
|
||||
import "google/protobuf/descriptor.proto";
|
||||
|
||||
option java_package = "com.google.protobuf";
|
||||
option java_outer_classname = "GoGoProtos";
|
||||
option go_package = "github.com/gogo/protobuf/gogoproto";
|
||||
|
||||
extend google.protobuf.EnumOptions {
|
||||
optional bool goproto_enum_prefix = 62001;
|
||||
optional bool goproto_enum_stringer = 62021;
|
||||
optional bool enum_stringer = 62022;
|
||||
optional string enum_customname = 62023;
|
||||
optional bool enumdecl = 62024;
|
||||
}
|
||||
|
||||
extend google.protobuf.EnumValueOptions {
|
||||
optional string enumvalue_customname = 66001;
|
||||
}
|
||||
|
||||
extend google.protobuf.FileOptions {
|
||||
optional bool goproto_getters_all = 63001;
|
||||
optional bool goproto_enum_prefix_all = 63002;
|
||||
optional bool goproto_stringer_all = 63003;
|
||||
optional bool verbose_equal_all = 63004;
|
||||
optional bool face_all = 63005;
|
||||
optional bool gostring_all = 63006;
|
||||
optional bool populate_all = 63007;
|
||||
optional bool stringer_all = 63008;
|
||||
optional bool onlyone_all = 63009;
|
||||
|
||||
optional bool equal_all = 63013;
|
||||
optional bool description_all = 63014;
|
||||
optional bool testgen_all = 63015;
|
||||
optional bool benchgen_all = 63016;
|
||||
optional bool marshaler_all = 63017;
|
||||
optional bool unmarshaler_all = 63018;
|
||||
optional bool stable_marshaler_all = 63019;
|
||||
|
||||
optional bool sizer_all = 63020;
|
||||
|
||||
optional bool goproto_enum_stringer_all = 63021;
|
||||
optional bool enum_stringer_all = 63022;
|
||||
|
||||
optional bool unsafe_marshaler_all = 63023;
|
||||
optional bool unsafe_unmarshaler_all = 63024;
|
||||
|
||||
optional bool goproto_extensions_map_all = 63025;
|
||||
optional bool goproto_unrecognized_all = 63026;
|
||||
optional bool gogoproto_import = 63027;
|
||||
optional bool protosizer_all = 63028;
|
||||
optional bool compare_all = 63029;
|
||||
optional bool typedecl_all = 63030;
|
||||
optional bool enumdecl_all = 63031;
|
||||
|
||||
optional bool goproto_registration = 63032;
|
||||
optional bool messagename_all = 63033;
|
||||
|
||||
optional bool goproto_sizecache_all = 63034;
|
||||
optional bool goproto_unkeyed_all = 63035;
|
||||
}
|
||||
|
||||
extend google.protobuf.MessageOptions {
|
||||
optional bool goproto_getters = 64001;
|
||||
optional bool goproto_stringer = 64003;
|
||||
optional bool verbose_equal = 64004;
|
||||
optional bool face = 64005;
|
||||
optional bool gostring = 64006;
|
||||
optional bool populate = 64007;
|
||||
optional bool stringer = 67008;
|
||||
optional bool onlyone = 64009;
|
||||
|
||||
optional bool equal = 64013;
|
||||
optional bool description = 64014;
|
||||
optional bool testgen = 64015;
|
||||
optional bool benchgen = 64016;
|
||||
optional bool marshaler = 64017;
|
||||
optional bool unmarshaler = 64018;
|
||||
optional bool stable_marshaler = 64019;
|
||||
|
||||
optional bool sizer = 64020;
|
||||
|
||||
optional bool unsafe_marshaler = 64023;
|
||||
optional bool unsafe_unmarshaler = 64024;
|
||||
|
||||
optional bool goproto_extensions_map = 64025;
|
||||
optional bool goproto_unrecognized = 64026;
|
||||
|
||||
optional bool protosizer = 64028;
|
||||
optional bool compare = 64029;
|
||||
|
||||
optional bool typedecl = 64030;
|
||||
|
||||
optional bool messagename = 64033;
|
||||
|
||||
optional bool goproto_sizecache = 64034;
|
||||
optional bool goproto_unkeyed = 64035;
|
||||
}
|
||||
|
||||
extend google.protobuf.FieldOptions {
|
||||
optional bool nullable = 65001;
|
||||
optional bool embed = 65002;
|
||||
optional string customtype = 65003;
|
||||
optional string customname = 65004;
|
||||
optional string jsontag = 65005;
|
||||
optional string moretags = 65006;
|
||||
optional string casttype = 65007;
|
||||
optional string castkey = 65008;
|
||||
optional string castvalue = 65009;
|
||||
|
||||
optional bool stdtime = 65010;
|
||||
optional bool stdduration = 65011;
|
||||
optional bool wktpointer = 65012;
|
||||
|
||||
}
|
||||
43
vendor/github.com/gogo/protobuf/proto/Makefile
generated
vendored
Normal file
@@ -0,0 +1,43 @@
|
||||
# Go support for Protocol Buffers - Google's data interchange format
|
||||
#
|
||||
# Copyright 2010 The Go Authors. All rights reserved.
|
||||
# https://github.com/golang/protobuf
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions are
|
||||
# met:
|
||||
#
|
||||
# * Redistributions of source code must retain the above copyright
|
||||
# notice, this list of conditions and the following disclaimer.
|
||||
# * Redistributions in binary form must reproduce the above
|
||||
# copyright notice, this list of conditions and the following disclaimer
|
||||
# in the documentation and/or other materials provided with the
|
||||
# distribution.
|
||||
# * Neither the name of Google Inc. nor the names of its
|
||||
# contributors may be used to endorse or promote products derived from
|
||||
# this software without specific prior written permission.
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
install:
|
||||
go install
|
||||
|
||||
test: install generate-test-pbs
|
||||
go test
|
||||
|
||||
|
||||
generate-test-pbs:
|
||||
make install
|
||||
make -C test_proto
|
||||
make -C proto3_proto
|
||||
make
|
||||
36
vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor/Makefile
generated
vendored
Normal file
@@ -0,0 +1,36 @@
|
||||
# Go support for Protocol Buffers - Google's data interchange format
|
||||
#
|
||||
# Copyright 2010 The Go Authors. All rights reserved.
|
||||
# https://github.com/golang/protobuf
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions are
|
||||
# met:
|
||||
#
|
||||
# * Redistributions of source code must retain the above copyright
|
||||
# notice, this list of conditions and the following disclaimer.
|
||||
# * Redistributions in binary form must reproduce the above
|
||||
# copyright notice, this list of conditions and the following disclaimer
|
||||
# in the documentation and/or other materials provided with the
|
||||
# distribution.
|
||||
# * Neither the name of Google Inc. nor the names of its
|
||||
# contributors may be used to endorse or promote products derived from
|
||||
# this software without specific prior written permission.
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
regenerate:
|
||||
go install github.com/gogo/protobuf/protoc-gen-gogo
|
||||
go install github.com/gogo/protobuf/protoc-gen-gostring
|
||||
protoc --gogo_out=. -I=../../protobuf/google/protobuf ../../protobuf/google/protobuf/descriptor.proto
|
||||
protoc --gostring_out=. -I=../../protobuf/google/protobuf ../../protobuf/google/protobuf/descriptor.proto
|
||||
883
vendor/github.com/golang/protobuf/protoc-gen-go/descriptor/descriptor.proto
generated
vendored
Normal file
@@ -0,0 +1,883 @@
|
||||
// Protocol Buffers - Google's data interchange format
|
||||
// Copyright 2008 Google Inc. All rights reserved.
|
||||
// https://developers.google.com/protocol-buffers/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// Author: kenton@google.com (Kenton Varda)
|
||||
// Based on original Protocol Buffers design by
|
||||
// Sanjay Ghemawat, Jeff Dean, and others.
|
||||
//
|
||||
// The messages in this file describe the definitions found in .proto files.
|
||||
// A valid .proto file can be translated directly to a FileDescriptorProto
|
||||
// without any other information (e.g. without reading its imports).
|
||||
|
||||
|
||||
syntax = "proto2";
|
||||
|
||||
package google.protobuf;
|
||||
option go_package = "github.com/golang/protobuf/protoc-gen-go/descriptor;descriptor";
|
||||
option java_package = "com.google.protobuf";
|
||||
option java_outer_classname = "DescriptorProtos";
|
||||
option csharp_namespace = "Google.Protobuf.Reflection";
|
||||
option objc_class_prefix = "GPB";
|
||||
option cc_enable_arenas = true;
|
||||
|
||||
// descriptor.proto must be optimized for speed because reflection-based
|
||||
// algorithms don't work during bootstrapping.
|
||||
option optimize_for = SPEED;
|
||||
|
||||
// The protocol compiler can output a FileDescriptorSet containing the .proto
|
||||
// files it parses.
|
||||
message FileDescriptorSet {
|
||||
repeated FileDescriptorProto file = 1;
|
||||
}
|
||||
|
||||
// Describes a complete .proto file.
|
||||
message FileDescriptorProto {
|
||||
optional string name = 1; // file name, relative to root of source tree
|
||||
optional string package = 2; // e.g. "foo", "foo.bar", etc.
|
||||
|
||||
// Names of files imported by this file.
|
||||
repeated string dependency = 3;
|
||||
// Indexes of the public imported files in the dependency list above.
|
||||
repeated int32 public_dependency = 10;
|
||||
// Indexes of the weak imported files in the dependency list.
|
||||
// For Google-internal migration only. Do not use.
|
||||
repeated int32 weak_dependency = 11;
|
||||
|
||||
// All top-level definitions in this file.
|
||||
repeated DescriptorProto message_type = 4;
|
||||
repeated EnumDescriptorProto enum_type = 5;
|
||||
repeated ServiceDescriptorProto service = 6;
|
||||
repeated FieldDescriptorProto extension = 7;
|
||||
|
||||
optional FileOptions options = 8;
|
||||
|
||||
// This field contains optional information about the original source code.
|
||||
// You may safely remove this entire field without harming runtime
|
||||
// functionality of the descriptors -- the information is needed only by
|
||||
// development tools.
|
||||
optional SourceCodeInfo source_code_info = 9;
|
||||
|
||||
// The syntax of the proto file.
|
||||
// The supported values are "proto2" and "proto3".
|
||||
optional string syntax = 12;
|
||||
}
|
||||
|
||||
// Describes a message type.
|
||||
message DescriptorProto {
|
||||
optional string name = 1;
|
||||
|
||||
repeated FieldDescriptorProto field = 2;
|
||||
repeated FieldDescriptorProto extension = 6;
|
||||
|
||||
repeated DescriptorProto nested_type = 3;
|
||||
repeated EnumDescriptorProto enum_type = 4;
|
||||
|
||||
message ExtensionRange {
|
||||
optional int32 start = 1;
|
||||
optional int32 end = 2;
|
||||
|
||||
optional ExtensionRangeOptions options = 3;
|
||||
}
|
||||
repeated ExtensionRange extension_range = 5;
|
||||
|
||||
repeated OneofDescriptorProto oneof_decl = 8;
|
||||
|
||||
optional MessageOptions options = 7;
|
||||
|
||||
// Range of reserved tag numbers. Reserved tag numbers may not be used by
|
||||
// fields or extension ranges in the same message. Reserved ranges may
|
||||
// not overlap.
|
||||
message ReservedRange {
|
||||
optional int32 start = 1; // Inclusive.
|
||||
optional int32 end = 2; // Exclusive.
|
||||
}
|
||||
repeated ReservedRange reserved_range = 9;
|
||||
// Reserved field names, which may not be used by fields in the same message.
|
||||
// A given name may only be reserved once.
|
||||
repeated string reserved_name = 10;
|
||||
}
|
||||
|
||||
message ExtensionRangeOptions {
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
// Describes a field within a message.
|
||||
message FieldDescriptorProto {
|
||||
enum Type {
|
||||
// 0 is reserved for errors.
|
||||
// Order is weird for historical reasons.
|
||||
TYPE_DOUBLE = 1;
|
||||
TYPE_FLOAT = 2;
|
||||
// Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT64 if
|
||||
// negative values are likely.
|
||||
TYPE_INT64 = 3;
|
||||
TYPE_UINT64 = 4;
|
||||
// Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT32 if
|
||||
// negative values are likely.
|
||||
TYPE_INT32 = 5;
|
||||
TYPE_FIXED64 = 6;
|
||||
TYPE_FIXED32 = 7;
|
||||
TYPE_BOOL = 8;
|
||||
TYPE_STRING = 9;
|
||||
// Tag-delimited aggregate.
|
||||
// Group type is deprecated and not supported in proto3. However, Proto3
|
||||
// implementations should still be able to parse the group wire format and
|
||||
// treat group fields as unknown fields.
|
||||
TYPE_GROUP = 10;
|
||||
TYPE_MESSAGE = 11; // Length-delimited aggregate.
|
||||
|
||||
// New in version 2.
|
||||
TYPE_BYTES = 12;
|
||||
TYPE_UINT32 = 13;
|
||||
TYPE_ENUM = 14;
|
||||
TYPE_SFIXED32 = 15;
|
||||
TYPE_SFIXED64 = 16;
|
||||
TYPE_SINT32 = 17; // Uses ZigZag encoding.
|
||||
TYPE_SINT64 = 18; // Uses ZigZag encoding.
|
||||
};
|
||||
|
||||
enum Label {
|
||||
// 0 is reserved for errors
|
||||
LABEL_OPTIONAL = 1;
|
||||
LABEL_REQUIRED = 2;
|
||||
LABEL_REPEATED = 3;
|
||||
};
|
||||
|
||||
optional string name = 1;
|
||||
optional int32 number = 3;
|
||||
optional Label label = 4;
|
||||
|
||||
// If type_name is set, this need not be set. If both this and type_name
|
||||
// are set, this must be one of TYPE_ENUM, TYPE_MESSAGE or TYPE_GROUP.
|
||||
optional Type type = 5;
|
||||
|
||||
// For message and enum types, this is the name of the type. If the name
|
||||
// starts with a '.', it is fully-qualified. Otherwise, C++-like scoping
|
||||
// rules are used to find the type (i.e. first the nested types within this
|
||||
// message are searched, then within the parent, on up to the root
|
||||
// namespace).
|
||||
optional string type_name = 6;
|
||||
|
||||
// For extensions, this is the name of the type being extended. It is
|
||||
// resolved in the same manner as type_name.
|
||||
optional string extendee = 2;
|
||||
|
||||
// For numeric types, contains the original text representation of the value.
|
||||
// For booleans, "true" or "false".
|
||||
// For strings, contains the default text contents (not escaped in any way).
|
||||
// For bytes, contains the C escaped value. All bytes >= 128 are escaped.
|
||||
// TODO(kenton): Base-64 encode?
|
||||
optional string default_value = 7;
|
||||
|
||||
// If set, gives the index of a oneof in the containing type's oneof_decl
|
||||
// list. This field is a member of that oneof.
|
||||
optional int32 oneof_index = 9;
|
||||
|
||||
// JSON name of this field. The value is set by protocol compiler. If the
|
||||
// user has set a "json_name" option on this field, that option's value
|
||||
// will be used. Otherwise, it's deduced from the field's name by converting
|
||||
// it to camelCase.
|
||||
optional string json_name = 10;
|
||||
|
||||
optional FieldOptions options = 8;
|
||||
}
|
||||
|
||||
// Describes a oneof.
|
||||
message OneofDescriptorProto {
|
||||
optional string name = 1;
|
||||
optional OneofOptions options = 2;
|
||||
}
|
||||
|
||||
// Describes an enum type.
|
||||
message EnumDescriptorProto {
|
||||
optional string name = 1;
|
||||
|
||||
repeated EnumValueDescriptorProto value = 2;
|
||||
|
||||
optional EnumOptions options = 3;
|
||||
|
||||
// Range of reserved numeric values. Reserved values may not be used by
|
||||
// entries in the same enum. Reserved ranges may not overlap.
|
||||
//
|
||||
// Note that this is distinct from DescriptorProto.ReservedRange in that it
|
||||
// is inclusive such that it can appropriately represent the entire int32
|
||||
// domain.
|
||||
message EnumReservedRange {
|
||||
optional int32 start = 1; // Inclusive.
|
||||
optional int32 end = 2; // Inclusive.
|
||||
}
|
||||
|
||||
// Range of reserved numeric values. Reserved numeric values may not be used
|
||||
// by enum values in the same enum declaration. Reserved ranges may not
|
||||
// overlap.
|
||||
repeated EnumReservedRange reserved_range = 4;
|
||||
|
||||
// Reserved enum value names, which may not be reused. A given name may only
|
||||
// be reserved once.
|
||||
repeated string reserved_name = 5;
|
||||
}
|
||||
|
||||
// Describes a value within an enum.
|
||||
message EnumValueDescriptorProto {
|
||||
optional string name = 1;
|
||||
optional int32 number = 2;
|
||||
|
||||
optional EnumValueOptions options = 3;
|
||||
}
|
||||
|
||||
// Describes a service.
|
||||
message ServiceDescriptorProto {
|
||||
optional string name = 1;
|
||||
repeated MethodDescriptorProto method = 2;
|
||||
|
||||
optional ServiceOptions options = 3;
|
||||
}
|
||||
|
||||
// Describes a method of a service.
|
||||
message MethodDescriptorProto {
|
||||
optional string name = 1;
|
||||
|
||||
// Input and output type names. These are resolved in the same way as
|
||||
// FieldDescriptorProto.type_name, but must refer to a message type.
|
||||
optional string input_type = 2;
|
||||
optional string output_type = 3;
|
||||
|
||||
optional MethodOptions options = 4;
|
||||
|
||||
// Identifies whether the client streams multiple client messages
|
||||
optional bool client_streaming = 5 [default=false];
|
||||
// Identifies whether the server streams multiple server messages
|
||||
optional bool server_streaming = 6 [default=false];
|
||||
}
|
||||
|
||||
|
||||
// ===================================================================
|
||||
// Options
|
||||
|
||||
// Each of the definitions above may have "options" attached. These are
|
||||
// just annotations which may cause code to be generated slightly differently
|
||||
// or may contain hints for code that manipulates protocol messages.
|
||||
//
|
||||
// Clients may define custom options as extensions of the *Options messages.
|
||||
// These extensions may not yet be known at parsing time, so the parser cannot
|
||||
// store the values in them. Instead it stores them in a field in the *Options
|
||||
// message called uninterpreted_option. This field must have the same name
|
||||
// across all *Options messages. We then use this field to populate the
|
||||
// extensions when we build a descriptor, at which point all protos have been
|
||||
// parsed and so all extensions are known.
|
||||
//
|
||||
// Extension numbers for custom options may be chosen as follows:
|
||||
// * For options which will only be used within a single application or
|
||||
// organization, or for experimental options, use field numbers 50000
|
||||
// through 99999. It is up to you to ensure that you do not use the
|
||||
// same number for multiple options.
|
||||
// * For options which will be published and used publicly by multiple
|
||||
// independent entities, e-mail protobuf-global-extension-registry@google.com
|
||||
// to reserve extension numbers. Simply provide your project name (e.g.
|
||||
// Objective-C plugin) and your project website (if available) -- there's no
|
||||
// need to explain how you intend to use them. Usually you only need one
|
||||
// extension number. You can declare multiple options with only one extension
|
||||
// number by putting them in a sub-message. See the Custom Options section of
|
||||
// the docs for examples:
|
||||
// https://developers.google.com/protocol-buffers/docs/proto#options
|
||||
// If this turns out to be popular, a web service will be set up
|
||||
// to automatically assign option numbers.
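//
// For illustration only (not part of this file), a custom option using an
// application-local field number in the 50000-99999 range could be declared
// and used roughly as follows:
//
//   import "google/protobuf/descriptor.proto";
//
//   extend google.protobuf.MessageOptions {
//     optional string my_option = 51234;
//   }
//
//   message MyMessage {
//     option (my_option) = "Hello world!";
//   }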
|
||||
|
||||
|
||||
message FileOptions {
|
||||
|
||||
// Sets the Java package where classes generated from this .proto will be
|
||||
// placed. By default, the proto package is used, but this is often
|
||||
// inappropriate because proto packages do not normally start with backwards
|
||||
// domain names.
|
||||
optional string java_package = 1;
|
||||
|
||||
|
||||
// If set, all the classes from the .proto file are wrapped in a single
|
||||
// outer class with the given name. This applies to both Proto1
|
||||
// (equivalent to the old "--one_java_file" option) and Proto2 (where
|
||||
// a .proto always translates to a single class, but you may want to
|
||||
// explicitly choose the class name).
|
||||
optional string java_outer_classname = 8;
|
||||
|
||||
// If set true, then the Java code generator will generate a separate .java
|
||||
// file for each top-level message, enum, and service defined in the .proto
|
||||
// file. Thus, these types will *not* be nested inside the outer class
|
||||
// named by java_outer_classname. However, the outer class will still be
|
||||
// generated to contain the file's getDescriptor() method as well as any
|
||||
// top-level extensions defined in the file.
|
||||
optional bool java_multiple_files = 10 [default=false];
|
||||
|
||||
// This option does nothing.
|
||||
optional bool java_generate_equals_and_hash = 20 [deprecated=true];
|
||||
|
||||
// If set true, then the Java2 code generator will generate code that
|
||||
// throws an exception whenever an attempt is made to assign a non-UTF-8
|
||||
// byte sequence to a string field.
|
||||
// Message reflection will do the same.
|
||||
// However, an extension field still accepts non-UTF-8 byte sequences.
|
||||
// This option has no effect when used with the lite runtime.
|
||||
optional bool java_string_check_utf8 = 27 [default=false];
|
||||
|
||||
|
||||
// Generated classes can be optimized for speed or code size.
|
||||
enum OptimizeMode {
|
||||
SPEED = 1; // Generate complete code for parsing, serialization,
|
||||
// etc.
|
||||
CODE_SIZE = 2; // Use ReflectionOps to implement these methods.
|
||||
LITE_RUNTIME = 3; // Generate code using MessageLite and the lite runtime.
|
||||
}
|
||||
optional OptimizeMode optimize_for = 9 [default=SPEED];
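// For example, a file that should favor small generated code over parsing
// speed would set:
//
//   option optimize_for = CODE_SIZE;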
|
||||
|
||||
// Sets the Go package where structs generated from this .proto will be
|
||||
// placed. If omitted, the Go package will be derived from the following:
|
||||
// - The basename of the package import path, if provided.
|
||||
// - Otherwise, the package statement in the .proto file, if present.
|
||||
// - Otherwise, the basename of the .proto file, without extension.
|
||||
optional string go_package = 11;
|
||||
|
||||
|
||||
|
||||
// Should generic services be generated in each language? "Generic" services
|
||||
// are not specific to any particular RPC system. They are generated by the
|
||||
// main code generators in each language (without additional plugins).
|
||||
// Generic services were the only kind of service generation supported by
|
||||
// early versions of google.protobuf.
|
||||
//
|
||||
// Generic services are now considered deprecated in favor of using plugins
|
||||
// that generate code specific to your particular RPC system. Therefore,
|
||||
// these default to false. Old code which depends on generic services should
|
||||
// explicitly set them to true.
|
||||
optional bool cc_generic_services = 16 [default=false];
|
||||
optional bool java_generic_services = 17 [default=false];
|
||||
optional bool py_generic_services = 18 [default=false];
|
||||
optional bool php_generic_services = 42 [default=false];
|
||||
|
||||
// Is this file deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for everything in the file, or it will be completely ignored; in the very
|
||||
// least, this is a formalization for deprecating files.
|
||||
optional bool deprecated = 23 [default=false];
|
||||
|
||||
// Enables the use of arenas for the proto messages in this file. This applies
|
||||
// only to generated classes for C++.
|
||||
optional bool cc_enable_arenas = 31 [default=false];
|
||||
|
||||
|
||||
// Sets the objective c class prefix which is prepended to all objective c
|
||||
// generated classes from this .proto. There is no default.
|
||||
optional string objc_class_prefix = 36;
|
||||
|
||||
// Namespace for generated classes; defaults to the package.
|
||||
optional string csharp_namespace = 37;
|
||||
|
||||
// By default Swift generators will take the proto package and CamelCase it
|
||||
// replacing '.' with underscore and use that to prefix the types/symbols
|
||||
// defined. When this option is provided, they will use this value instead
|
||||
// to prefix the types/symbols defined.
|
||||
optional string swift_prefix = 39;
|
||||
|
||||
// Sets the php class prefix which is prepended to all php generated classes
|
||||
// from this .proto. Default is empty.
|
||||
optional string php_class_prefix = 40;
|
||||
|
||||
// Use this option to change the namespace of php generated classes. Default
|
||||
// is empty. When this option is empty, the package name will be used for
|
||||
// determining the namespace.
|
||||
optional string php_namespace = 41;
|
||||
|
||||
|
||||
// Use this option to change the namespace of php generated metadata classes.
|
||||
// Default is empty. When this option is empty, the proto file name will be used
|
||||
// for determining the namespace.
|
||||
optional string php_metadata_namespace = 44;
|
||||
|
||||
// Use this option to change the package of ruby generated classes. Default
|
||||
// is empty. When this option is not set, the package name will be used for
|
||||
// determining the ruby package.
|
||||
optional string ruby_package = 45;
|
||||
|
||||
// The parser stores options it doesn't recognize here.
|
||||
// See the documentation for the "Options" section above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message.
|
||||
// See the documentation for the "Options" section above.
|
||||
extensions 1000 to max;
|
||||
|
||||
reserved 38;
|
||||
}
|
||||
|
||||
message MessageOptions {
|
||||
// Set true to use the old proto1 MessageSet wire format for extensions.
|
||||
// This is provided for backwards-compatibility with the MessageSet wire
|
||||
// format. You should not use this for any other reason: It's less
|
||||
// efficient, has fewer features, and is more complicated.
|
||||
//
|
||||
// The message must be defined exactly as follows:
|
||||
// message Foo {
|
||||
// option message_set_wire_format = true;
|
||||
// extensions 4 to max;
|
||||
// }
|
||||
// Note that the message cannot have any defined fields; MessageSets only
|
||||
// have extensions.
|
||||
//
|
||||
// All extensions of your type must be singular messages; e.g. they cannot
|
||||
// be int32s, enums, or repeated messages.
|
||||
//
|
||||
// Because this is an option, the above two restrictions are not enforced by
|
||||
// the protocol compiler.
|
||||
optional bool message_set_wire_format = 1 [default=false];
|
||||
|
||||
// Disables the generation of the standard "descriptor()" accessor, which can
|
||||
// conflict with a field of the same name. This is meant to make migration
|
||||
// from proto1 easier; new code should avoid fields named "descriptor".
|
||||
optional bool no_standard_descriptor_accessor = 2 [default=false];
|
||||
|
||||
// Is this message deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for the message, or it will be completely ignored; in the very least,
|
||||
// this is a formalization for deprecating messages.
|
||||
optional bool deprecated = 3 [default=false];
|
||||
|
||||
// Whether the message is an automatically generated map entry type for the
|
||||
// maps field.
|
||||
//
|
||||
// For maps fields:
|
||||
// map<KeyType, ValueType> map_field = 1;
|
||||
// The parsed descriptor looks like:
|
||||
// message MapFieldEntry {
|
||||
// option map_entry = true;
|
||||
// optional KeyType key = 1;
|
||||
// optional ValueType value = 2;
|
||||
// }
|
||||
// repeated MapFieldEntry map_field = 1;
|
||||
//
|
||||
// Implementations may choose not to generate the map_entry=true message, but
|
||||
// use a native map in the target language to hold the keys and values.
|
||||
// The reflection APIs in such implementations still need to work as
|
||||
// if the field is a repeated message field.
|
||||
//
|
||||
// NOTE: Do not set the option in .proto files. Always use the maps syntax
|
||||
// instead. The option should only be implicitly set by the proto compiler
|
||||
// parser.
|
||||
optional bool map_entry = 7;
|
||||
|
||||
reserved 8; // javalite_serializable
|
||||
reserved 9; // javanano_as_lite
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
message FieldOptions {
|
||||
// The ctype option instructs the C++ code generator to use a different
|
||||
// representation of the field than it normally would. See the specific
|
||||
// options below. This option is not yet implemented in the open source
|
||||
// release -- sorry, we'll try to include it in a future version!
|
||||
optional CType ctype = 1 [default = STRING];
|
||||
enum CType {
|
||||
// Default mode.
|
||||
STRING = 0;
|
||||
|
||||
CORD = 1;
|
||||
|
||||
STRING_PIECE = 2;
|
||||
}
|
||||
// The packed option can be enabled for repeated primitive fields to enable
|
||||
// a more efficient representation on the wire. Rather than repeatedly
|
||||
// writing the tag and type for each element, the entire array is encoded as
|
||||
// a single length-delimited blob. In proto3, only explicitly setting it to
|
||||
// false will avoid using packed encoding.
|
||||
optional bool packed = 2;
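// For example (proto2 syntax), a repeated numeric field can request packed
// encoding explicitly:
//
//   repeated int32 samples = 4 [packed = true];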
|
||||
|
||||
// The jstype option determines the JavaScript type used for values of the
|
||||
// field. The option is permitted only for 64 bit integral and fixed types
|
||||
// (int64, uint64, sint64, fixed64, sfixed64). A field with jstype JS_STRING
|
||||
// is represented as a JavaScript string, which avoids loss of precision that
|
||||
// can happen when a large value is converted to a floating point JavaScript number.
|
||||
// Specifying JS_NUMBER for the jstype causes the generated JavaScript code to
|
||||
// use the JavaScript "number" type. The behavior of the default option
|
||||
// JS_NORMAL is implementation dependent.
|
||||
//
|
||||
// This option is an enum to permit additional types to be added, e.g.
|
||||
// goog.math.Integer.
|
||||
optional JSType jstype = 6 [default = JS_NORMAL];
|
||||
enum JSType {
|
||||
// Use the default type.
|
||||
JS_NORMAL = 0;
|
||||
|
||||
// Use JavaScript strings.
|
||||
JS_STRING = 1;
|
||||
|
||||
// Use JavaScript numbers.
|
||||
JS_NUMBER = 2;
|
||||
}
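// For illustration, a 64-bit field that should surface as a string in
// generated JavaScript code could be declared as:
//
//   fixed64 bytes_sent = 3 [jstype = JS_STRING];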
|
||||
|
||||
// Should this field be parsed lazily? Lazy applies only to message-type
|
||||
// fields. It means that when the outer message is initially parsed, the
|
||||
// inner message's contents will not be parsed but instead stored in encoded
|
||||
// form. The inner message will actually be parsed when it is first accessed.
|
||||
//
|
||||
// This is only a hint. Implementations are free to choose whether to use
|
||||
// eager or lazy parsing regardless of the value of this option. However,
|
||||
// setting this option true suggests that the protocol author believes that
|
||||
// using lazy parsing on this field is worth the additional bookkeeping
|
||||
// overhead typically needed to implement it.
|
||||
//
|
||||
// This option does not affect the public interface of any generated code;
|
||||
// all method signatures remain the same. Furthermore, thread-safety of the
|
||||
// interface is not affected by this option; const methods remain safe to
|
||||
// call from multiple threads concurrently, while non-const methods continue
|
||||
// to require exclusive access.
|
||||
//
|
||||
//
|
||||
// Note that implementations may choose not to check required fields within
|
||||
// a lazy sub-message. That is, calling IsInitialized() on the outer message
|
||||
// may return true even if the inner message has missing required fields.
|
||||
// This is necessary because otherwise the inner message would have to be
|
||||
// parsed in order to perform the check, defeating the purpose of lazy
|
||||
// parsing. An implementation which chooses not to check required fields
|
||||
// must be consistent about it. That is, for any particular sub-message, the
|
||||
// implementation must either *always* check its required fields, or *never*
|
||||
// check its required fields, regardless of whether or not the message has
|
||||
// been parsed.
|
||||
optional bool lazy = 5 [default=false];
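// For illustration, a message-typed field could hint at lazy parsing with:
//
//   optional BigBlob payload = 1 [lazy = true];
//
// (BigBlob is a hypothetical message type used only for this example.)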
|
||||
|
||||
// Is this field deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for accessors, or it will be completely ignored; in the very least, this
|
||||
// is a formalization for deprecating fields.
|
||||
optional bool deprecated = 3 [default=false];
|
||||
|
||||
// For Google-internal migration only. Do not use.
|
||||
optional bool weak = 10 [default=false];
|
||||
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
|
||||
reserved 4; // removed jtype
|
||||
}
|
||||
|
||||
message OneofOptions {
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
message EnumOptions {
|
||||
|
||||
// Set this option to true to allow mapping different tag names to the same
|
||||
// value.
|
||||
optional bool allow_alias = 2;
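// For example, with allow_alias two enum names may map to the same number:
//
//   enum Status {
//     option allow_alias = true;
//     STATUS_UNKNOWN = 0;
//     STATUS_UNSET = 0;  // alias of STATUS_UNKNOWN
//     STATUS_OK = 1;
//   }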
|
||||
|
||||
// Is this enum deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for the enum, or it will be completely ignored; in the very least, this
|
||||
// is a formalization for deprecating enums.
|
||||
optional bool deprecated = 3 [default=false];
|
||||
|
||||
reserved 5; // javanano_as_lite
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
message EnumValueOptions {
|
||||
// Is this enum value deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for the enum value, or it will be completely ignored; in the very least,
|
||||
// this is a formalization for deprecating enum values.
|
||||
optional bool deprecated = 1 [default=false];
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
message ServiceOptions {
|
||||
|
||||
// Note: Field numbers 1 through 32 are reserved for Google's internal RPC
|
||||
// framework. We apologize for hoarding these numbers to ourselves, but
|
||||
// we were already using them long before we decided to release Protocol
|
||||
// Buffers.
|
||||
|
||||
// Is this service deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for the service, or it will be completely ignored; in the very least,
|
||||
// this is a formalization for deprecating services.
|
||||
optional bool deprecated = 33 [default=false];
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
message MethodOptions {
|
||||
|
||||
// Note: Field numbers 1 through 32 are reserved for Google's internal RPC
|
||||
// framework. We apologize for hoarding these numbers to ourselves, but
|
||||
// we were already using them long before we decided to release Protocol
|
||||
// Buffers.
|
||||
|
||||
// Is this method deprecated?
|
||||
// Depending on the target platform, this can emit Deprecated annotations
|
||||
// for the method, or it will be completely ignored; in the very least,
|
||||
// this is a formalization for deprecating methods.
|
||||
optional bool deprecated = 33 [default=false];
|
||||
|
||||
// Is this method side-effect-free (or safe in HTTP parlance), or idempotent,
|
||||
// or neither? HTTP based RPC implementation may choose GET verb for safe
|
||||
// methods, and PUT verb for idempotent methods instead of the default POST.
|
||||
enum IdempotencyLevel {
|
||||
IDEMPOTENCY_UNKNOWN = 0;
|
||||
NO_SIDE_EFFECTS = 1; // implies idempotent
|
||||
IDEMPOTENT = 2; // idempotent, but may have side effects
|
||||
}
|
||||
optional IdempotencyLevel idempotency_level =
|
||||
34 [default=IDEMPOTENCY_UNKNOWN];
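// For illustration, a read-only RPC could advertise that an HTTP mapping
// may use the GET verb (method and message names here are hypothetical):
//
//   rpc GetFoo(GetFooRequest) returns (GetFooResponse) {
//     option idempotency_level = NO_SIDE_EFFECTS;
//   }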
|
||||
|
||||
// The parser stores options it doesn't recognize here. See above.
|
||||
repeated UninterpretedOption uninterpreted_option = 999;
|
||||
|
||||
// Clients can define custom options in extensions of this message. See above.
|
||||
extensions 1000 to max;
|
||||
}
|
||||
|
||||
|
||||
// A message representing an option the parser does not recognize. This only
|
||||
// appears in options protos created by the compiler::Parser class.
|
||||
// DescriptorPool resolves these when building Descriptor objects. Therefore,
|
||||
// options protos in descriptor objects (e.g. returned by Descriptor::options(),
|
||||
// or produced by Descriptor::CopyTo()) will never have UninterpretedOptions
|
||||
// in them.
|
||||
message UninterpretedOption {
|
||||
// The name of the uninterpreted option. Each string represents a segment in
|
||||
// a dot-separated name. is_extension is true iff a segment represents an
|
||||
// extension (denoted with parentheses in options specs in .proto files).
|
||||
// E.g., { ["foo", false], ["bar.baz", true], ["qux", false] } represents
|
||||
// "foo.(bar.baz).qux".
|
||||
message NamePart {
|
||||
required string name_part = 1;
|
||||
required bool is_extension = 2;
|
||||
}
|
||||
repeated NamePart name = 2;
|
||||
|
||||
// The value of the uninterpreted option, in whatever type the tokenizer
|
||||
// identified it as during parsing. Exactly one of these should be set.
|
||||
optional string identifier_value = 3;
|
||||
optional uint64 positive_int_value = 4;
|
||||
optional int64 negative_int_value = 5;
|
||||
optional double double_value = 6;
|
||||
optional bytes string_value = 7;
|
||||
optional string aggregate_value = 8;
|
||||
}
|
||||
|
||||
// ===================================================================
|
||||
// Optional source code info
|
||||
|
||||
// Encapsulates information about the original source file from which a
|
||||
// FileDescriptorProto was generated.
|
||||
message SourceCodeInfo {
|
||||
// A Location identifies a piece of source code in a .proto file which
|
||||
// corresponds to a particular definition. This information is intended
|
||||
// to be useful to IDEs, code indexers, documentation generators, and similar
|
||||
// tools.
|
||||
//
|
||||
// For example, say we have a file like:
|
||||
// message Foo {
|
||||
// optional string foo = 1;
|
||||
// }
|
||||
// Let's look at just the field definition:
|
||||
// optional string foo = 1;
|
||||
// ^ ^^ ^^ ^ ^^^
|
||||
// a bc de f ghi
|
||||
// We have the following locations:
|
||||
// span path represents
|
||||
// [a,i) [ 4, 0, 2, 0 ] The whole field definition.
|
||||
// [a,b) [ 4, 0, 2, 0, 4 ] The label (optional).
|
||||
// [c,d) [ 4, 0, 2, 0, 5 ] The type (string).
|
||||
// [e,f) [ 4, 0, 2, 0, 1 ] The name (foo).
|
||||
// [g,h) [ 4, 0, 2, 0, 3 ] The number (1).
|
||||
//
|
||||
// Notes:
|
||||
// - A location may refer to a repeated field itself (i.e. not to any
|
||||
// particular index within it). This is used whenever a set of elements are
|
||||
// logically enclosed in a single code segment. For example, an entire
|
||||
// extend block (possibly containing multiple extension definitions) will
|
||||
// have an outer location whose path refers to the "extensions" repeated
|
||||
// field without an index.
|
||||
// - Multiple locations may have the same path. This happens when a single
|
||||
// logical declaration is spread out across multiple places. The most
|
||||
// obvious example is the "extend" block again -- there may be multiple
|
||||
// extend blocks in the same scope, each of which will have the same path.
|
||||
// - A location's span is not always a subset of its parent's span. For
|
||||
// example, the "extendee" of an extension declaration appears at the
|
||||
// beginning of the "extend" block and is shared by all extensions within
|
||||
// the block.
|
||||
// - Just because a location's span is a subset of some other location's span
|
||||
// does not mean that it is a descendant. For example, a "group" defines
|
||||
// both a type and a field in a single declaration. Thus, the locations
|
||||
// corresponding to the type and field and their components will overlap.
|
||||
// - Code which tries to interpret locations should probably be designed to
|
||||
// ignore those that it doesn't understand, as more types of locations could
|
||||
// be recorded in the future.
|
||||
repeated Location location = 1;
|
||||
message Location {
|
||||
// Identifies which part of the FileDescriptorProto was defined at this
|
||||
// location.
|
||||
//
|
||||
// Each element is a field number or an index. They form a path from
|
||||
// the root FileDescriptorProto to the place where the definition appears. For
|
||||
// example, this path:
|
||||
// [ 4, 3, 2, 7, 1 ]
|
||||
// refers to:
|
||||
// file.message_type(3) // 4, 3
|
||||
// .field(7) // 2, 7
|
||||
// .name() // 1
|
||||
// This is because FileDescriptorProto.message_type has field number 4:
|
||||
// repeated DescriptorProto message_type = 4;
|
||||
// and DescriptorProto.field has field number 2:
|
||||
// repeated FieldDescriptorProto field = 2;
|
||||
// and FieldDescriptorProto.name has field number 1:
|
||||
// optional string name = 1;
|
||||
//
|
||||
// Thus, the above path gives the location of a field name. If we removed
|
||||
// the last element:
|
||||
// [ 4, 3, 2, 7 ]
|
||||
// this path refers to the whole field declaration (from the beginning
|
||||
// of the label to the terminating semicolon).
|
||||
repeated int32 path = 1 [packed=true];
|
||||
|
||||
// Always has exactly three or four elements: start line, start column,
|
||||
// end line (optional, otherwise assumed same as start line), end column.
|
||||
// These are packed into a single field for efficiency. Note that line
|
||||
// and column numbers are zero-based -- typically you will want to add
|
||||
// 1 to each before displaying to a user.
|
||||
repeated int32 span = 2 [packed=true];
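// For illustration, a span of [ 3, 2, 35 ] denotes a declaration starting at
// zero-based line 3, column 2 and ending at column 35 on the same line; the
// end line is omitted because it equals the start line. Displayed 1-based,
// that is line 4, columns 3 through 36.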
|
||||
|
||||
// If this SourceCodeInfo represents a complete declaration, these are any
|
||||
// comments appearing before and after the declaration which appear to be
|
||||
// attached to the declaration.
|
||||
//
|
||||
// A series of line comments appearing on consecutive lines, with no other
|
||||
// tokens appearing on those lines, will be treated as a single comment.
|
||||
//
|
||||
// leading_detached_comments will keep paragraphs of comments that appear
|
||||
// before (but not connected to) the current element. Each paragraph,
|
||||
// separated by empty lines, will be one comment element in the repeated
|
||||
// field.
|
||||
//
|
||||
// Only the comment content is provided; comment markers (e.g. //) are
|
||||
// stripped out. For block comments, leading whitespace and an asterisk
|
||||
// will be stripped from the beginning of each line other than the first.
|
||||
// Newlines are included in the output.
|
||||
//
|
||||
// Examples:
|
||||
//
|
||||
// optional int32 foo = 1; // Comment attached to foo.
|
||||
// // Comment attached to bar.
|
||||
// optional int32 bar = 2;
|
||||
//
|
||||
// optional string baz = 3;
|
||||
// // Comment attached to baz.
|
||||
// // Another line attached to baz.
|
||||
//
|
||||
// // Comment attached to qux.
|
||||
// //
|
||||
// // Another line attached to qux.
|
||||
// optional double qux = 4;
|
||||
//
|
||||
// // Detached comment for corge. This is not leading or trailing comments
|
||||
// // to qux or corge because there are blank lines separating it from
|
||||
// // both.
|
||||
//
|
||||
// // Detached comment for corge paragraph 2.
|
||||
//
|
||||
// optional string corge = 5;
|
||||
// /* Block comment attached
|
||||
// * to corge. Leading asterisks
|
||||
// * will be removed. */
|
||||
// /* Block comment attached to
|
||||
// * grault. */
|
||||
// optional int32 grault = 6;
|
||||
//
|
||||
// // ignored detached comments.
|
||||
optional string leading_comments = 3;
|
||||
optional string trailing_comments = 4;
|
||||
repeated string leading_detached_comments = 6;
|
||||
}
|
||||
}
|
||||
|
||||
// Describes the relationship between generated code and its original source
|
||||
// file. A GeneratedCodeInfo message is associated with only one generated
|
||||
// source file, but may contain references to different source .proto files.
|
||||
message GeneratedCodeInfo {
|
||||
// An Annotation connects some span of text in generated code to an element
|
||||
// of its generating .proto file.
|
||||
repeated Annotation annotation = 1;
|
||||
message Annotation {
|
||||
// Identifies the element in the original source .proto file. This field
|
||||
// is formatted the same as SourceCodeInfo.Location.path.
|
||||
repeated int32 path = 1 [packed=true];
|
||||
|
||||
// Identifies the filesystem path to the original source .proto.
|
||||
optional string source_file = 2;
|
||||
|
||||
// Identifies the starting offset in bytes in the generated code
|
||||
// that relates to the identified object.
|
||||
optional int32 begin = 3;
|
||||
|
||||
// Identifies the ending offset in bytes in the generated code that
|
||||
// relates to the identified object. The end offset should be one past
|
||||
// the last relevant byte (so the length of the text = end - begin).
|
||||
optional int32 end = 4;
|
||||
}
|
||||
}
|
||||
154
vendor/github.com/golang/protobuf/ptypes/any/any.proto
generated
vendored
Normal file
154
vendor/github.com/golang/protobuf/ptypes/any/any.proto
generated
vendored
Normal file
@@ -0,0 +1,154 @@
|
||||
// Protocol Buffers - Google's data interchange format
|
||||
// Copyright 2008 Google Inc. All rights reserved.
|
||||
// https://developers.google.com/protocol-buffers/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
syntax = "proto3";
|
||||
|
||||
package google.protobuf;
|
||||
|
||||
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
|
||||
option go_package = "github.com/golang/protobuf/ptypes/any";
|
||||
option java_package = "com.google.protobuf";
|
||||
option java_outer_classname = "AnyProto";
|
||||
option java_multiple_files = true;
|
||||
option objc_class_prefix = "GPB";
|
||||
|
||||
// `Any` contains an arbitrary serialized protocol buffer message along with a
|
||||
// URL that describes the type of the serialized message.
|
||||
//
|
||||
// Protobuf library provides support to pack/unpack Any values in the form
|
||||
// of utility functions or additional generated methods of the Any type.
|
||||
//
|
||||
// Example 1: Pack and unpack a message in C++.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any;
|
||||
// any.PackFrom(foo);
|
||||
// ...
|
||||
// if (any.UnpackTo(&foo)) {
|
||||
// ...
|
||||
// }
|
||||
//
|
||||
// Example 2: Pack and unpack a message in Java.
|
||||
//
|
||||
// Foo foo = ...;
|
||||
// Any any = Any.pack(foo);
|
||||
// ...
|
||||
// if (any.is(Foo.class)) {
|
||||
// foo = any.unpack(Foo.class);
|
||||
// }
|
||||
//
|
||||
// Example 3: Pack and unpack a message in Python.
|
||||
//
|
||||
// foo = Foo(...)
|
||||
// any = Any()
|
||||
// any.Pack(foo)
|
||||
// ...
|
||||
// if any.Is(Foo.DESCRIPTOR):
|
||||
// any.Unpack(foo)
|
||||
// ...
|
||||
//
|
||||
// Example 4: Pack and unpack a message in Go
|
||||
//
|
||||
// foo := &pb.Foo{...}
|
||||
// any, err := ptypes.MarshalAny(foo)
|
||||
// ...
|
||||
// foo := &pb.Foo{}
|
||||
// if err := ptypes.UnmarshalAny(any, foo); err != nil {
|
||||
// ...
|
||||
// }
|
||||
//
|
||||
// The pack methods provided by protobuf library will by default use
|
||||
// 'type.googleapis.com/full.type.name' as the type URL and the unpack
|
||||
// methods only use the fully qualified type name after the last '/'
|
||||
// in the type URL, for example "foo.bar.com/x/y.z" will yield type
|
||||
// name "y.z".
|
||||
//
|
||||
//
|
||||
// JSON
|
||||
// ====
|
||||
// The JSON representation of an `Any` value uses the regular
|
||||
// representation of the deserialized, embedded message, with an
|
||||
// additional field `@type` which contains the type URL. Example:
|
||||
//
|
||||
// package google.profile;
|
||||
// message Person {
|
||||
// string first_name = 1;
|
||||
// string last_name = 2;
|
||||
// }
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.profile.Person",
|
||||
// "firstName": <string>,
|
||||
// "lastName": <string>
|
||||
// }
|
||||
//
|
||||
// If the embedded message type is well-known and has a custom JSON
|
||||
// representation, that representation will be embedded adding a field
|
||||
// `value` which holds the custom JSON in addition to the `@type`
|
||||
// field. Example (for message [google.protobuf.Duration][]):
|
||||
//
|
||||
// {
|
||||
// "@type": "type.googleapis.com/google.protobuf.Duration",
|
||||
// "value": "1.212s"
|
||||
// }
|
||||
//
|
||||
message Any {
|
||||
// A URL/resource name that uniquely identifies the type of the serialized
|
||||
// protocol buffer message. The last segment of the URL's path must represent
|
||||
// the fully qualified name of the type (as in
|
||||
// `path/google.protobuf.Duration`). The name should be in a canonical form
|
||||
// (e.g., leading "." is not accepted).
|
||||
//
|
||||
// In practice, teams usually precompile into the binary all types that they
|
||||
// expect it to use in the context of Any. However, for URLs which use the
|
||||
// scheme `http`, `https`, or no scheme, one can optionally set up a type
|
||||
// server that maps type URLs to message definitions as follows:
|
||||
//
|
||||
// * If no scheme is provided, `https` is assumed.
|
||||
// * An HTTP GET on the URL must yield a [google.protobuf.Type][]
|
||||
// value in binary format, or produce an error.
|
||||
// * Applications are allowed to cache lookup results based on the
|
||||
// URL, or have them precompiled into a binary to avoid any
|
||||
// lookup. Therefore, binary compatibility needs to be preserved
|
||||
// on changes to types. (Use versioned type names to manage
|
||||
// breaking changes.)
|
||||
//
|
||||
// Note: this functionality is not currently available in the official
|
||||
// protobuf release, and it is not used for type URLs beginning with
|
||||
// type.googleapis.com.
|
||||
//
|
||||
// Schemes other than `http`, `https` (or the empty scheme) might be
|
||||
// used with implementation specific semantics.
|
||||
//
|
||||
string type_url = 1;
|
||||
|
||||
// Must be a valid serialized protocol buffer of the above specified type.
|
||||
bytes value = 2;
|
||||
}
|
||||
117
vendor/github.com/golang/protobuf/ptypes/duration/duration.proto
generated
vendored
Normal file
117
vendor/github.com/golang/protobuf/ptypes/duration/duration.proto
generated
vendored
Normal file
@@ -0,0 +1,117 @@
|
||||
// Protocol Buffers - Google's data interchange format
|
||||
// Copyright 2008 Google Inc. All rights reserved.
|
||||
// https://developers.google.com/protocol-buffers/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
syntax = "proto3";
|
||||
|
||||
package google.protobuf;
|
||||
|
||||
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
|
||||
option cc_enable_arenas = true;
|
||||
option go_package = "github.com/golang/protobuf/ptypes/duration";
|
||||
option java_package = "com.google.protobuf";
|
||||
option java_outer_classname = "DurationProto";
|
||||
option java_multiple_files = true;
|
||||
option objc_class_prefix = "GPB";
|
||||
|
||||
// A Duration represents a signed, fixed-length span of time represented
|
||||
// as a count of seconds and fractions of seconds at nanosecond
|
||||
// resolution. It is independent of any calendar and concepts like "day"
|
||||
// or "month". It is related to Timestamp in that the difference between
|
||||
// two Timestamp values is a Duration and it can be added or subtracted
|
||||
// from a Timestamp. Range is approximately +-10,000 years.
|
||||
//
|
||||
// # Examples
|
||||
//
|
||||
// Example 1: Compute Duration from two Timestamps in pseudo code.
|
||||
//
|
||||
// Timestamp start = ...;
|
||||
// Timestamp end = ...;
|
||||
// Duration duration = ...;
|
||||
//
|
||||
// duration.seconds = end.seconds - start.seconds;
|
||||
// duration.nanos = end.nanos - start.nanos;
|
||||
//
|
||||
// if (duration.seconds < 0 && duration.nanos > 0) {
|
||||
// duration.seconds += 1;
|
||||
// duration.nanos -= 1000000000;
|
||||
// } else if (duration.seconds > 0 && duration.nanos < 0) {
|
||||
// duration.seconds -= 1;
|
||||
// duration.nanos += 1000000000;
|
||||
// }
|
||||
//
|
||||
// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
|
||||
//
|
||||
// Timestamp start = ...;
|
||||
// Duration duration = ...;
|
||||
// Timestamp end = ...;
|
||||
//
|
||||
// end.seconds = start.seconds + duration.seconds;
|
||||
// end.nanos = start.nanos + duration.nanos;
|
||||
//
|
||||
// if (end.nanos < 0) {
|
||||
// end.seconds -= 1;
|
||||
// end.nanos += 1000000000;
|
||||
// } else if (end.nanos >= 1000000000) {
|
||||
// end.seconds += 1;
|
||||
// end.nanos -= 1000000000;
|
||||
// }
|
||||
//
|
||||
// Example 3: Compute Duration from datetime.timedelta in Python.
|
||||
//
|
||||
// td = datetime.timedelta(days=3, minutes=10)
|
||||
// duration = Duration()
|
||||
// duration.FromTimedelta(td)
|
||||
//
|
||||
// # JSON Mapping
|
||||
//
|
||||
// In JSON format, the Duration type is encoded as a string rather than an
|
||||
// object, where the string ends in the suffix "s" (indicating seconds) and
|
||||
// is preceded by the number of seconds, with nanoseconds expressed as
|
||||
// fractional seconds. For example, 3 seconds with 0 nanoseconds should be
|
||||
// encoded in JSON format as "3s", while 3 seconds and 1 nanosecond should
|
||||
// be expressed in JSON format as "3.000000001s", and 3 seconds and 1
|
||||
// microsecond should be expressed in JSON format as "3.000001s".
|
||||
//
|
||||
//
|
||||
message Duration {
|
||||
|
||||
// Signed seconds of the span of time. Must be from -315,576,000,000
|
||||
// to +315,576,000,000 inclusive. Note: these bounds are computed from:
|
||||
// 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
|
||||
int64 seconds = 1;
|
||||
|
||||
// Signed fractions of a second at nanosecond resolution of the span
|
||||
// of time. Durations less than one second are represented with a 0
|
||||
// `seconds` field and a positive or negative `nanos` field. For durations
|
||||
// of one second or more, a non-zero value for the `nanos` field must be
|
||||
// of the same sign as the `seconds` field. Must be from -999,999,999
|
||||
// to +999,999,999 inclusive.
|
||||
int32 nanos = 2;
|
||||
}
|
||||
135
vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
generated
vendored
Normal file
135
vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
generated
vendored
Normal file
@@ -0,0 +1,135 @@
|
||||
// Protocol Buffers - Google's data interchange format
|
||||
// Copyright 2008 Google Inc. All rights reserved.
|
||||
// https://developers.google.com/protocol-buffers/
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
syntax = "proto3";
|
||||
|
||||
package google.protobuf;
|
||||
|
||||
option csharp_namespace = "Google.Protobuf.WellKnownTypes";
|
||||
option cc_enable_arenas = true;
|
||||
option go_package = "github.com/golang/protobuf/ptypes/timestamp";
|
||||
option java_package = "com.google.protobuf";
|
||||
option java_outer_classname = "TimestampProto";
|
||||
option java_multiple_files = true;
|
||||
option objc_class_prefix = "GPB";
|
||||
|
||||
// A Timestamp represents a point in time independent of any time zone
|
||||
// or calendar, represented as seconds and fractions of seconds at
|
||||
// nanosecond resolution in UTC Epoch time. It is encoded using the
|
||||
// Proleptic Gregorian Calendar which extends the Gregorian calendar
|
||||
// backwards to year one. It is encoded assuming all minutes are 60
|
||||
// seconds long, i.e. leap seconds are "smeared" so that no leap second
|
||||
// table is needed for interpretation. Range is from
|
||||
// 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
|
||||
// By restricting to that range, we ensure that we can convert to
|
||||
// and from RFC 3339 date strings.
|
||||
// See [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
|
||||
//
|
||||
// # Examples
|
||||
//
|
||||
// Example 1: Compute Timestamp from POSIX `time()`.
|
||||
//
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(time(NULL));
|
||||
// timestamp.set_nanos(0);
|
||||
//
|
||||
// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
|
||||
//
|
||||
// struct timeval tv;
|
||||
// gettimeofday(&tv, NULL);
|
||||
//
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds(tv.tv_sec);
|
||||
// timestamp.set_nanos(tv.tv_usec * 1000);
|
||||
//
|
||||
// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
|
||||
//
|
||||
// FILETIME ft;
|
||||
// GetSystemTimeAsFileTime(&ft);
|
||||
// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
|
||||
//
|
||||
// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
|
||||
// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
|
||||
// Timestamp timestamp;
|
||||
// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
|
||||
// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
|
||||
//
|
||||
// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
|
||||
//
|
||||
// long millis = System.currentTimeMillis();
|
||||
//
|
||||
// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
|
||||
// .setNanos((int) ((millis % 1000) * 1000000)).build();
|
||||
//
|
||||
//
|
||||
// Example 5: Compute Timestamp from current time in Python.
|
||||
//
|
||||
// timestamp = Timestamp()
|
||||
// timestamp.GetCurrentTime()
|
||||
//
|
||||
// # JSON Mapping
|
||||
//
|
||||
// In JSON format, the Timestamp type is encoded as a string in the
|
||||
// [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format. That is, the
|
||||
// format is "{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z"
|
||||
// where {year} is always expressed using four digits while {month}, {day},
|
||||
// {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional
|
||||
// seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution),
|
||||
// are optional. The "Z" suffix indicates the timezone ("UTC"); the timezone
|
||||
// is required. A proto3 JSON serializer should always use UTC (as indicated by
|
||||
// "Z") when printing the Timestamp type and a proto3 JSON parser should be
|
||||
// able to accept both UTC and other timezones (as indicated by an offset).
|
||||
//
|
||||
// For example, "2017-01-15T01:30:15.01Z" encodes 15.01 seconds past
|
||||
// 01:30 UTC on January 15, 2017.
|
||||
//
|
||||
// In JavaScript, one can convert a Date object to this format using the
|
||||
// standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
|
||||
// method. In Python, a standard `datetime.datetime` object can be converted
|
||||
// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
|
||||
// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
|
||||
// can use the Joda Time's [`ISODateTimeFormat.dateTime()`](
|
||||
// http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime--
|
||||
// ) to obtain a formatter capable of generating timestamps in this format.
|
||||
//
|
||||
//
|
||||
message Timestamp {
|
||||
|
||||
// Represents seconds of UTC time since Unix epoch
|
||||
// 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
|
||||
// 9999-12-31T23:59:59Z inclusive.
|
||||
int64 seconds = 1;
|
||||
|
||||
// Non-negative fractions of a second at nanosecond resolution. Negative
|
||||
// second values with fractions must still have non-negative nanos values
|
||||
// that count forward in time. Must be from 0 to 999,999,999
|
||||
// inclusive.
|
||||
int32 nanos = 2;
|
||||
}
|
||||
16
vendor/github.com/golang/snappy/.gitignore
generated
vendored
Normal file
16
vendor/github.com/golang/snappy/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
cmd/snappytool/snappytool
|
||||
testdata/bench
|
||||
|
||||
# These explicitly listed benchmark data files are for an obsolete version of
|
||||
# snappy_test.go.
|
||||
testdata/alice29.txt
|
||||
testdata/asyoulik.txt
|
||||
testdata/fireworks.jpeg
|
||||
testdata/geo.protodata
|
||||
testdata/html
|
||||
testdata/html_x_4
|
||||
testdata/kppkn.gtb
|
||||
testdata/lcet10.txt
|
||||
testdata/paper-100k.pdf
|
||||
testdata/plrabn12.txt
|
||||
testdata/urls.10K
|
||||
107
vendor/github.com/golang/snappy/README
generated
vendored
Normal file
107
vendor/github.com/golang/snappy/README
generated
vendored
Normal file
@@ -0,0 +1,107 @@
|
||||
The Snappy compression format in the Go programming language.

To download and install from source:
$ go get github.com/golang/snappy

Unless otherwise noted, the Snappy-Go source files are distributed
under the BSD-style license found in the LICENSE file.



Benchmarks.

The golang/snappy benchmarks include compressing (Z) and decompressing (U) ten
or so files, the same set used by the C++ Snappy code (github.com/google/snappy
and note the "google", not "golang"). On an "Intel(R) Core(TM) i7-3770 CPU @
3.40GHz", Go's GOARCH=amd64 numbers as of 2016-05-29:

"go test -test.bench=."

_UFlat0-8    2.19GB/s ± 0%  html
_UFlat1-8    1.41GB/s ± 0%  urls
_UFlat2-8    23.5GB/s ± 2%  jpg
_UFlat3-8    1.91GB/s ± 0%  jpg_200
_UFlat4-8    14.0GB/s ± 1%  pdf
_UFlat5-8    1.97GB/s ± 0%  html4
_UFlat6-8     814MB/s ± 0%  txt1
_UFlat7-8     785MB/s ± 0%  txt2
_UFlat8-8     857MB/s ± 0%  txt3
_UFlat9-8     719MB/s ± 1%  txt4
_UFlat10-8   2.84GB/s ± 0%  pb
_UFlat11-8   1.05GB/s ± 0%  gaviota

_ZFlat0-8    1.04GB/s ± 0%  html
_ZFlat1-8     534MB/s ± 0%  urls
_ZFlat2-8    15.7GB/s ± 1%  jpg
_ZFlat3-8     740MB/s ± 3%  jpg_200
_ZFlat4-8    9.20GB/s ± 1%  pdf
_ZFlat5-8     991MB/s ± 0%  html4
_ZFlat6-8     379MB/s ± 0%  txt1
_ZFlat7-8     352MB/s ± 0%  txt2
_ZFlat8-8     396MB/s ± 1%  txt3
_ZFlat9-8     327MB/s ± 1%  txt4
_ZFlat10-8   1.33GB/s ± 1%  pb
_ZFlat11-8    605MB/s ± 1%  gaviota



"go test -test.bench=. -tags=noasm"

_UFlat0-8     621MB/s ± 2%  html
_UFlat1-8     494MB/s ± 1%  urls
_UFlat2-8    23.2GB/s ± 1%  jpg
_UFlat3-8    1.12GB/s ± 1%  jpg_200
_UFlat4-8    4.35GB/s ± 1%  pdf
_UFlat5-8     609MB/s ± 0%  html4
_UFlat6-8     296MB/s ± 0%  txt1
_UFlat7-8     288MB/s ± 0%  txt2
_UFlat8-8     309MB/s ± 1%  txt3
_UFlat9-8     280MB/s ± 1%  txt4
_UFlat10-8    753MB/s ± 0%  pb
_UFlat11-8    400MB/s ± 0%  gaviota

_ZFlat0-8     409MB/s ± 1%  html
_ZFlat1-8     250MB/s ± 1%  urls
_ZFlat2-8    12.3GB/s ± 1%  jpg
_ZFlat3-8     132MB/s ± 0%  jpg_200
_ZFlat4-8    2.92GB/s ± 0%  pdf
_ZFlat5-8     405MB/s ± 1%  html4
_ZFlat6-8     179MB/s ± 1%  txt1
_ZFlat7-8     170MB/s ± 1%  txt2
_ZFlat8-8     189MB/s ± 1%  txt3
_ZFlat9-8     164MB/s ± 1%  txt4
_ZFlat10-8    479MB/s ± 1%  pb
_ZFlat11-8    270MB/s ± 1%  gaviota



For comparison (Go's encoded output is byte-for-byte identical to C++'s), here
are the numbers from C++ Snappy's

make CXXFLAGS="-O2 -DNDEBUG -g" clean snappy_unittest.log && cat snappy_unittest.log

BM_UFlat/0     2.4GB/s  html
BM_UFlat/1     1.4GB/s  urls
BM_UFlat/2    21.8GB/s  jpg
BM_UFlat/3     1.5GB/s  jpg_200
BM_UFlat/4    13.3GB/s  pdf
BM_UFlat/5     2.1GB/s  html4
BM_UFlat/6     1.0GB/s  txt1
BM_UFlat/7   959.4MB/s  txt2
BM_UFlat/8     1.0GB/s  txt3
BM_UFlat/9   864.5MB/s  txt4
BM_UFlat/10    2.9GB/s  pb
BM_UFlat/11    1.2GB/s  gaviota

BM_ZFlat/0   944.3MB/s  html (22.31 %)
BM_ZFlat/1   501.6MB/s  urls (47.78 %)
BM_ZFlat/2    14.3GB/s  jpg (99.95 %)
BM_ZFlat/3   538.3MB/s  jpg_200 (73.00 %)
BM_ZFlat/4     8.3GB/s  pdf (83.30 %)
BM_ZFlat/5   903.5MB/s  html4 (22.52 %)
BM_ZFlat/6   336.0MB/s  txt1 (57.88 %)
BM_ZFlat/7   312.3MB/s  txt2 (61.91 %)
BM_ZFlat/8   353.1MB/s  txt3 (54.99 %)
BM_ZFlat/9   289.9MB/s  txt4 (66.26 %)
BM_ZFlat/10    1.2GB/s  pb (19.68 %)
BM_ZFlat/11  527.4MB/s  gaviota (37.72 %)
1
vendor/github.com/golang/snappy/go.mod
generated
vendored
Normal file
@@ -0,0 +1 @@
module github.com/golang/snappy
890
vendor/github.com/google/btree/btree.go
generated
vendored
@@ -1,890 +0,0 @@
|
||||
// Copyright 2014 Google Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// Package btree implements in-memory B-Trees of arbitrary degree.
|
||||
//
|
||||
// btree implements an in-memory B-Tree for use as an ordered data structure.
|
||||
// It is not meant for persistent storage solutions.
|
||||
//
|
||||
// It has a flatter structure than an equivalent red-black or other binary tree,
|
||||
// which in some cases yields better memory usage and/or performance.
|
||||
// See some discussion on the matter here:
|
||||
// http://google-opensource.blogspot.com/2013/01/c-containers-that-save-memory-and-time.html
|
||||
// Note, though, that this project is in no way related to the C++ B-Tree
|
||||
// implementation written about there.
|
||||
//
|
||||
// Within this tree, each node contains a slice of items and a (possibly nil)
|
||||
// slice of children. For basic numeric values or raw structs, this can cause
|
||||
// efficiency differences when compared to equivalent C++ template code that
|
||||
// stores values in arrays within the node:
|
||||
// * Due to the overhead of storing values as interfaces (each
|
||||
// value needs to be stored as the value itself, then 2 words for the
|
||||
// interface pointing to that value and its type), resulting in higher
|
||||
// memory use.
|
||||
// * Since interfaces can point to values anywhere in memory, values are
|
||||
// most likely not stored in contiguous blocks, resulting in a higher
|
||||
// number of cache misses.
|
||||
// These issues don't tend to matter, though, when working with strings or other
|
||||
// heap-allocated structures, since C++-equivalent structures also must store
|
||||
// pointers and also distribute their values across the heap.
|
||||
//
|
||||
// This implementation is designed to be a drop-in replacement to gollrb.LLRB
|
||||
// trees, (http://github.com/petar/gollrb), an excellent and probably the most
|
||||
// widely used ordered tree implementation in the Go ecosystem currently.
|
||||
// Its functions, therefore, exactly mirror those of
|
||||
// llrb.LLRB where possible. Unlike gollrb, though, we currently don't
|
||||
// support storing multiple equivalent values.
|
||||
package btree
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// Item represents a single object in the tree.
|
||||
type Item interface {
|
||||
// Less tests whether the current item is less than the given argument.
|
||||
//
|
||||
// This must provide a strict weak ordering.
|
||||
// If !a.Less(b) && !b.Less(a), we treat this to mean a == b (i.e. we can only
|
||||
// hold one of either a or b in the tree).
|
||||
Less(than Item) bool
|
||||
}
|
||||
|
||||
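A small, hypothetical Item implementation may make the Less contract above concrete; byName, its fields, and the inserted values are illustrative and not part of the vendored package:

```go
package main

import "github.com/google/btree"

// byName is a hypothetical item keyed on Name. Less must define a strict weak
// ordering: when neither a.Less(b) nor b.Less(a) holds, the tree treats a and
// b as the same key, so ReplaceOrInsert replaces one with the other.
type byName struct {
	Name  string
	Count int
}

func (a byName) Less(than btree.Item) bool {
	return a.Name < than.(byName).Name
}

func main() {
	t := btree.New(4)
	t.ReplaceOrInsert(byName{Name: "a", Count: 1})
	t.ReplaceOrInsert(byName{Name: "a", Count: 2}) // same key: replaces Count 1
	_ = t.Get(byName{Name: "a"})                   // Count is ignored by Less
}
```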
const (
|
||||
DefaultFreeListSize = 32
|
||||
)
|
||||
|
||||
var (
|
||||
nilItems = make(items, 16)
|
||||
nilChildren = make(children, 16)
|
||||
)
|
||||
|
||||
// FreeList represents a free list of btree nodes. By default each
|
||||
// BTree has its own FreeList, but multiple BTrees can share the same
|
||||
// FreeList.
|
||||
// Two Btrees using the same freelist are safe for concurrent write access.
|
||||
type FreeList struct {
|
||||
mu sync.Mutex
|
||||
freelist []*node
|
||||
}
|
||||
|
||||
// NewFreeList creates a new free list.
|
||||
// size is the maximum size of the returned free list.
|
||||
func NewFreeList(size int) *FreeList {
|
||||
return &FreeList{freelist: make([]*node, 0, size)}
|
||||
}
|
||||
|
||||
func (f *FreeList) newNode() (n *node) {
|
||||
f.mu.Lock()
|
||||
index := len(f.freelist) - 1
|
||||
if index < 0 {
|
||||
f.mu.Unlock()
|
||||
return new(node)
|
||||
}
|
||||
n = f.freelist[index]
|
||||
f.freelist[index] = nil
|
||||
f.freelist = f.freelist[:index]
|
||||
f.mu.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
// freeNode adds the given node to the list, returning true if it was added
|
||||
// and false if it was discarded.
|
||||
func (f *FreeList) freeNode(n *node) (out bool) {
|
||||
f.mu.Lock()
|
||||
if len(f.freelist) < cap(f.freelist) {
|
||||
f.freelist = append(f.freelist, n)
|
||||
out = true
|
||||
}
|
||||
f.mu.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
// ItemIterator allows callers of Ascend* to iterate in-order over portions of
|
||||
// the tree. When this function returns false, iteration will stop and the
|
||||
// associated Ascend* function will immediately return.
|
||||
type ItemIterator func(i Item) bool
|
||||
|
||||
// New creates a new B-Tree with the given degree.
|
||||
//
|
||||
// New(2), for example, will create a 2-3-4 tree (each node contains 1-3 items
|
||||
// and 2-4 children).
|
||||
func New(degree int) *BTree {
|
||||
return NewWithFreeList(degree, NewFreeList(DefaultFreeListSize))
|
||||
}
|
||||
|
||||
// NewWithFreeList creates a new B-Tree that uses the given node free list.
|
||||
func NewWithFreeList(degree int, f *FreeList) *BTree {
|
||||
if degree <= 1 {
|
||||
panic("bad degree")
|
||||
}
|
||||
return &BTree{
|
||||
degree: degree,
|
||||
cow: ©OnWriteContext{freelist: f},
|
||||
}
|
||||
}
|
||||
|
||||
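As a minimal sketch of the two constructors above (the degree values and the shared-freelist setup are arbitrary choices, not requirements):

```go
package main

import "github.com/google/btree"

func main() {
	// Degree 2 gives a 2-3-4 tree: each node holds 1-3 items and 2-4 children.
	t := btree.New(2)
	t.ReplaceOrInsert(btree.Int(1))

	// Two trees sharing one freelist; per the FreeList comment above, concurrent
	// writes to t1 and t2 stay safe because the shared list is mutex-protected.
	fl := btree.NewFreeList(btree.DefaultFreeListSize)
	t1 := btree.NewWithFreeList(8, fl)
	t2 := btree.NewWithFreeList(8, fl)
	t1.ReplaceOrInsert(btree.Int(2))
	t2.ReplaceOrInsert(btree.Int(3))
}
```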
// items stores items in a node.
|
||||
type items []Item
|
||||
|
||||
// insertAt inserts a value into the given index, pushing all subsequent values
|
||||
// forward.
|
||||
func (s *items) insertAt(index int, item Item) {
|
||||
*s = append(*s, nil)
|
||||
if index < len(*s) {
|
||||
copy((*s)[index+1:], (*s)[index:])
|
||||
}
|
||||
(*s)[index] = item
|
||||
}
|
||||
|
||||
// removeAt removes a value at a given index, pulling all subsequent values
|
||||
// back.
|
||||
func (s *items) removeAt(index int) Item {
|
||||
item := (*s)[index]
|
||||
copy((*s)[index:], (*s)[index+1:])
|
||||
(*s)[len(*s)-1] = nil
|
||||
*s = (*s)[:len(*s)-1]
|
||||
return item
|
||||
}
|
||||
|
||||
// pop removes and returns the last element in the list.
|
||||
func (s *items) pop() (out Item) {
|
||||
index := len(*s) - 1
|
||||
out = (*s)[index]
|
||||
(*s)[index] = nil
|
||||
*s = (*s)[:index]
|
||||
return
|
||||
}
|
||||
|
||||
// truncate truncates this instance at index so that it contains only the
|
||||
// first index items. index must be less than or equal to length.
|
||||
func (s *items) truncate(index int) {
|
||||
var toClear items
|
||||
*s, toClear = (*s)[:index], (*s)[index:]
|
||||
for len(toClear) > 0 {
|
||||
toClear = toClear[copy(toClear, nilItems):]
|
||||
}
|
||||
}
|
||||
|
||||
// find returns the index where the given item should be inserted into this
|
||||
// list. 'found' is true if the item already exists in the list at the given
|
||||
// index.
|
||||
func (s items) find(item Item) (index int, found bool) {
|
||||
i := sort.Search(len(s), func(i int) bool {
|
||||
return item.Less(s[i])
|
||||
})
|
||||
if i > 0 && !s[i-1].Less(item) {
|
||||
return i - 1, true
|
||||
}
|
||||
return i, false
|
||||
}
|
||||
|
||||
// children stores child nodes in a node.
|
||||
type children []*node
|
||||
|
||||
// insertAt inserts a value into the given index, pushing all subsequent values
|
||||
// forward.
|
||||
func (s *children) insertAt(index int, n *node) {
|
||||
*s = append(*s, nil)
|
||||
if index < len(*s) {
|
||||
copy((*s)[index+1:], (*s)[index:])
|
||||
}
|
||||
(*s)[index] = n
|
||||
}
|
||||
|
||||
// removeAt removes a value at a given index, pulling all subsequent values
|
||||
// back.
|
||||
func (s *children) removeAt(index int) *node {
|
||||
n := (*s)[index]
|
||||
copy((*s)[index:], (*s)[index+1:])
|
||||
(*s)[len(*s)-1] = nil
|
||||
*s = (*s)[:len(*s)-1]
|
||||
return n
|
||||
}
|
||||
|
||||
// pop removes and returns the last element in the list.
|
||||
func (s *children) pop() (out *node) {
|
||||
index := len(*s) - 1
|
||||
out = (*s)[index]
|
||||
(*s)[index] = nil
|
||||
*s = (*s)[:index]
|
||||
return
|
||||
}
|
||||
|
||||
// truncate truncates this instance at index so that it contains only the
|
||||
// first index children. index must be less than or equal to length.
|
||||
func (s *children) truncate(index int) {
|
||||
var toClear children
|
||||
*s, toClear = (*s)[:index], (*s)[index:]
|
||||
for len(toClear) > 0 {
|
||||
toClear = toClear[copy(toClear, nilChildren):]
|
||||
}
|
||||
}
|
||||
|
||||
// node is an internal node in a tree.
|
||||
//
|
||||
// It must at all times maintain the invariant that either
|
||||
// * len(children) == 0, len(items) unconstrained
|
||||
// * len(children) == len(items) + 1
|
||||
type node struct {
|
||||
items items
|
||||
children children
|
||||
cow *copyOnWriteContext
|
||||
}
|
||||
|
||||
func (n *node) mutableFor(cow *copyOnWriteContext) *node {
|
||||
if n.cow == cow {
|
||||
return n
|
||||
}
|
||||
out := cow.newNode()
|
||||
if cap(out.items) >= len(n.items) {
|
||||
out.items = out.items[:len(n.items)]
|
||||
} else {
|
||||
out.items = make(items, len(n.items), cap(n.items))
|
||||
}
|
||||
copy(out.items, n.items)
|
||||
// Copy children
|
||||
if cap(out.children) >= len(n.children) {
|
||||
out.children = out.children[:len(n.children)]
|
||||
} else {
|
||||
out.children = make(children, len(n.children), cap(n.children))
|
||||
}
|
||||
copy(out.children, n.children)
|
||||
return out
|
||||
}
|
||||
|
||||
func (n *node) mutableChild(i int) *node {
|
||||
c := n.children[i].mutableFor(n.cow)
|
||||
n.children[i] = c
|
||||
return c
|
||||
}
|
||||
|
||||
// split splits the given node at the given index. The current node shrinks,
|
||||
// and this function returns the item that existed at that index and a new node
|
||||
// containing all items/children after it.
|
||||
func (n *node) split(i int) (Item, *node) {
|
||||
item := n.items[i]
|
||||
next := n.cow.newNode()
|
||||
next.items = append(next.items, n.items[i+1:]...)
|
||||
n.items.truncate(i)
|
||||
if len(n.children) > 0 {
|
||||
next.children = append(next.children, n.children[i+1:]...)
|
||||
n.children.truncate(i + 1)
|
||||
}
|
||||
return item, next
|
||||
}
|
||||
|
||||
// maybeSplitChild checks if a child should be split, and if so splits it.
|
||||
// Returns whether or not a split occurred.
|
||||
func (n *node) maybeSplitChild(i, maxItems int) bool {
|
||||
if len(n.children[i].items) < maxItems {
|
||||
return false
|
||||
}
|
||||
first := n.mutableChild(i)
|
||||
item, second := first.split(maxItems / 2)
|
||||
n.items.insertAt(i, item)
|
||||
n.children.insertAt(i+1, second)
|
||||
return true
|
||||
}
|
||||
|
||||
// insert inserts an item into the subtree rooted at this node, making sure
|
||||
// no nodes in the subtree exceed maxItems items. Should an equivalent item be
|
||||
// found/replaced by insert, it will be returned.
|
||||
func (n *node) insert(item Item, maxItems int) Item {
|
||||
i, found := n.items.find(item)
|
||||
if found {
|
||||
out := n.items[i]
|
||||
n.items[i] = item
|
||||
return out
|
||||
}
|
||||
if len(n.children) == 0 {
|
||||
n.items.insertAt(i, item)
|
||||
return nil
|
||||
}
|
||||
if n.maybeSplitChild(i, maxItems) {
|
||||
inTree := n.items[i]
|
||||
switch {
|
||||
case item.Less(inTree):
|
||||
// no change, we want first split node
|
||||
case inTree.Less(item):
|
||||
i++ // we want second split node
|
||||
default:
|
||||
out := n.items[i]
|
||||
n.items[i] = item
|
||||
return out
|
||||
}
|
||||
}
|
||||
return n.mutableChild(i).insert(item, maxItems)
|
||||
}
|
||||
|
||||
// get finds the given key in the subtree and returns it.
|
||||
func (n *node) get(key Item) Item {
|
||||
i, found := n.items.find(key)
|
||||
if found {
|
||||
return n.items[i]
|
||||
} else if len(n.children) > 0 {
|
||||
return n.children[i].get(key)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// min returns the first item in the subtree.
|
||||
func min(n *node) Item {
|
||||
if n == nil {
|
||||
return nil
|
||||
}
|
||||
for len(n.children) > 0 {
|
||||
n = n.children[0]
|
||||
}
|
||||
if len(n.items) == 0 {
|
||||
return nil
|
||||
}
|
||||
return n.items[0]
|
||||
}
|
||||
|
||||
// max returns the last item in the subtree.
|
||||
func max(n *node) Item {
|
||||
if n == nil {
|
||||
return nil
|
||||
}
|
||||
for len(n.children) > 0 {
|
||||
n = n.children[len(n.children)-1]
|
||||
}
|
||||
if len(n.items) == 0 {
|
||||
return nil
|
||||
}
|
||||
return n.items[len(n.items)-1]
|
||||
}
|
||||
|
||||
// toRemove details what item to remove in a node.remove call.
|
||||
type toRemove int
|
||||
|
||||
const (
|
||||
removeItem toRemove = iota // removes the given item
|
||||
removeMin // removes smallest item in the subtree
|
||||
removeMax // removes largest item in the subtree
|
||||
)
|
||||
|
||||
// remove removes an item from the subtree rooted at this node.
|
||||
func (n *node) remove(item Item, minItems int, typ toRemove) Item {
|
||||
var i int
|
||||
var found bool
|
||||
switch typ {
|
||||
case removeMax:
|
||||
if len(n.children) == 0 {
|
||||
return n.items.pop()
|
||||
}
|
||||
i = len(n.items)
|
||||
case removeMin:
|
||||
if len(n.children) == 0 {
|
||||
return n.items.removeAt(0)
|
||||
}
|
||||
i = 0
|
||||
case removeItem:
|
||||
i, found = n.items.find(item)
|
||||
if len(n.children) == 0 {
|
||||
if found {
|
||||
return n.items.removeAt(i)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
default:
|
||||
panic("invalid type")
|
||||
}
|
||||
// If we get to here, we have children.
|
||||
if len(n.children[i].items) <= minItems {
|
||||
return n.growChildAndRemove(i, item, minItems, typ)
|
||||
}
|
||||
child := n.mutableChild(i)
|
||||
// Either we had enough items to begin with, or we've done some
|
||||
// merging/stealing, because we've got enough now and we're ready to return
|
||||
// stuff.
|
||||
if found {
|
||||
// The item exists at index 'i', and the child we've selected can give us a
|
||||
// predecessor, since if we've gotten here it's got > minItems items in it.
|
||||
out := n.items[i]
|
||||
// We use our special-case 'remove' call with typ=maxItem to pull the
|
||||
// predecessor of item i (the rightmost leaf of our immediate left child)
|
||||
// and set it into where we pulled the item from.
|
||||
n.items[i] = child.remove(nil, minItems, removeMax)
|
||||
return out
|
||||
}
|
||||
// Final recursive call. Once we're here, we know that the item isn't in this
|
||||
// node and that the child is big enough to remove from.
|
||||
return child.remove(item, minItems, typ)
|
||||
}
|
||||
|
||||
// growChildAndRemove grows child 'i' to make sure it's possible to remove an
|
||||
// item from it while keeping it at minItems, then calls remove to actually
|
||||
// remove it.
|
||||
//
|
||||
// Most documentation says we have to do two sets of special casing:
|
||||
// 1) item is in this node
|
||||
// 2) item is in child
|
||||
// In both cases, we need to handle the two subcases:
|
||||
// A) node has enough values that it can spare one
|
||||
// B) node doesn't have enough values
|
||||
// For the latter, we have to check:
|
||||
// a) left sibling has node to spare
|
||||
// b) right sibling has node to spare
|
||||
// c) we must merge
|
||||
// To simplify our code here, we handle cases #1 and #2 the same:
|
||||
// If a node doesn't have enough items, we make sure it does (using a,b,c).
|
||||
// We then simply redo our remove call, and the second time (regardless of
|
||||
// whether we're in case 1 or 2), we'll have enough items and can guarantee
|
||||
// that we hit case A.
|
||||
func (n *node) growChildAndRemove(i int, item Item, minItems int, typ toRemove) Item {
|
||||
if i > 0 && len(n.children[i-1].items) > minItems {
|
||||
// Steal from left child
|
||||
child := n.mutableChild(i)
|
||||
stealFrom := n.mutableChild(i - 1)
|
||||
stolenItem := stealFrom.items.pop()
|
||||
child.items.insertAt(0, n.items[i-1])
|
||||
n.items[i-1] = stolenItem
|
||||
if len(stealFrom.children) > 0 {
|
||||
child.children.insertAt(0, stealFrom.children.pop())
|
||||
}
|
||||
} else if i < len(n.items) && len(n.children[i+1].items) > minItems {
|
||||
// steal from right child
|
||||
child := n.mutableChild(i)
|
||||
stealFrom := n.mutableChild(i + 1)
|
||||
stolenItem := stealFrom.items.removeAt(0)
|
||||
child.items = append(child.items, n.items[i])
|
||||
n.items[i] = stolenItem
|
||||
if len(stealFrom.children) > 0 {
|
||||
child.children = append(child.children, stealFrom.children.removeAt(0))
|
||||
}
|
||||
} else {
|
||||
if i >= len(n.items) {
|
||||
i--
|
||||
}
|
||||
child := n.mutableChild(i)
|
||||
// merge with right child
|
||||
mergeItem := n.items.removeAt(i)
|
||||
mergeChild := n.children.removeAt(i + 1)
|
||||
child.items = append(child.items, mergeItem)
|
||||
child.items = append(child.items, mergeChild.items...)
|
||||
child.children = append(child.children, mergeChild.children...)
|
||||
n.cow.freeNode(mergeChild)
|
||||
}
|
||||
return n.remove(item, minItems, typ)
|
||||
}
|
||||
|
||||
type direction int
|
||||
|
||||
const (
|
||||
descend = direction(-1)
|
||||
ascend = direction(+1)
|
||||
)
|
||||
|
||||
// iterate provides a simple method for iterating over elements in the tree.
|
||||
//
|
||||
// When ascending, the 'start' should be less than 'stop' and when descending,
|
||||
// the 'start' should be greater than 'stop'. Setting 'includeStart' to true
|
||||
// will force the iterator to include the first item when it equals 'start',
|
||||
// thus creating a "greaterOrEqual" or "lessThanEqual" rather than just a
|
||||
// "greaterThan" or "lessThan" queries.
|
||||
func (n *node) iterate(dir direction, start, stop Item, includeStart bool, hit bool, iter ItemIterator) (bool, bool) {
|
||||
var ok, found bool
|
||||
var index int
|
||||
switch dir {
|
||||
case ascend:
|
||||
if start != nil {
|
||||
index, _ = n.items.find(start)
|
||||
}
|
||||
for i := index; i < len(n.items); i++ {
|
||||
if len(n.children) > 0 {
|
||||
if hit, ok = n.children[i].iterate(dir, start, stop, includeStart, hit, iter); !ok {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
if !includeStart && !hit && start != nil && !start.Less(n.items[i]) {
|
||||
hit = true
|
||||
continue
|
||||
}
|
||||
hit = true
|
||||
if stop != nil && !n.items[i].Less(stop) {
|
||||
return hit, false
|
||||
}
|
||||
if !iter(n.items[i]) {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
if len(n.children) > 0 {
|
||||
if hit, ok = n.children[len(n.children)-1].iterate(dir, start, stop, includeStart, hit, iter); !ok {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
case descend:
|
||||
if start != nil {
|
||||
index, found = n.items.find(start)
|
||||
if !found {
|
||||
index = index - 1
|
||||
}
|
||||
} else {
|
||||
index = len(n.items) - 1
|
||||
}
|
||||
for i := index; i >= 0; i-- {
|
||||
if start != nil && !n.items[i].Less(start) {
|
||||
if !includeStart || hit || start.Less(n.items[i]) {
|
||||
continue
|
||||
}
|
||||
}
|
||||
if len(n.children) > 0 {
|
||||
if hit, ok = n.children[i+1].iterate(dir, start, stop, includeStart, hit, iter); !ok {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
if stop != nil && !stop.Less(n.items[i]) {
|
||||
return hit, false // continue
|
||||
}
|
||||
hit = true
|
||||
if !iter(n.items[i]) {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
if len(n.children) > 0 {
|
||||
if hit, ok = n.children[0].iterate(dir, start, stop, includeStart, hit, iter); !ok {
|
||||
return hit, false
|
||||
}
|
||||
}
|
||||
}
|
||||
return hit, true
|
||||
}
|
||||
|
||||
// Used for testing/debugging purposes.
|
||||
func (n *node) print(w io.Writer, level int) {
|
||||
fmt.Fprintf(w, "%sNODE:%v\n", strings.Repeat(" ", level), n.items)
|
||||
for _, c := range n.children {
|
||||
c.print(w, level+1)
|
||||
}
|
||||
}
|
||||
|
||||
// BTree is an implementation of a B-Tree.
|
||||
//
|
||||
// BTree stores Item instances in an ordered structure, allowing easy insertion,
|
||||
// removal, and iteration.
|
||||
//
|
||||
// Write operations are not safe for concurrent mutation by multiple
|
||||
// goroutines, but Read operations are.
|
||||
type BTree struct {
|
||||
degree int
|
||||
length int
|
||||
root *node
|
||||
cow *copyOnWriteContext
|
||||
}
|
||||
|
||||
// copyOnWriteContext pointers determine node ownership... a tree with a write
|
||||
// context equivalent to a node's write context is allowed to modify that node.
|
||||
// A tree whose write context does not match a node's is not allowed to modify
|
||||
// it, and must create a new, writable copy (IE: it's a Clone).
|
||||
//
|
||||
// When doing any write operation, we maintain the invariant that the current
|
||||
// node's context is equal to the context of the tree that requested the write.
|
||||
// We do this by, before we descend into any node, creating a copy with the
|
||||
// correct context if the contexts don't match.
|
||||
//
|
||||
// Since the node we're currently visiting on any write has the requesting
|
||||
// tree's context, that node is modifiable in place. Children of that node may
|
||||
// not share context, but before we descend into them, we'll make a mutable
|
||||
// copy.
|
||||
type copyOnWriteContext struct {
|
||||
freelist *FreeList
|
||||
}
|
||||
|
||||
// Clone clones the btree, lazily. Clone should not be called concurrently,
|
||||
// but the original tree (t) and the new tree (t2) can be used concurrently
|
||||
// once the Clone call completes.
|
||||
//
|
||||
// The internal tree structure of b is marked read-only and shared between t and
|
||||
// t2. Writes to both t and t2 use copy-on-write logic, creating new nodes
|
||||
// whenever one of b's original nodes would have been modified. Read operations
|
||||
// should have no performance degradation. Write operations for both t and t2
|
||||
// will initially experience minor slow-downs caused by additional allocs and
|
||||
// copies due to the aforementioned copy-on-write logic, but should converge to
|
||||
// the original performance characteristics of the original tree.
|
||||
func (t *BTree) Clone() (t2 *BTree) {
|
||||
// Create two entirely new copy-on-write contexts.
|
||||
// This operation effectively creates three trees:
|
||||
// the original, shared nodes (old b.cow)
|
||||
// the new b.cow nodes
|
||||
// the new out.cow nodes
|
||||
cow1, cow2 := *t.cow, *t.cow
|
||||
out := *t
|
||||
t.cow = &cow1
|
||||
out.cow = &cow2
|
||||
return &out
|
||||
}
|
||||
|
||||
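A short usage sketch of the copy-on-write behaviour described above; the values are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	t := btree.New(8)
	for i := 0; i < 5; i++ {
		t.ReplaceOrInsert(btree.Int(i))
	}

	// t2 initially shares every node with t; later writes copy nodes on demand,
	// so a write to t2 is not visible through t (and vice versa).
	t2 := t.Clone()
	t2.ReplaceOrInsert(btree.Int(100))

	fmt.Println(t.Has(btree.Int(100)), t2.Has(btree.Int(100))) // false true
}
```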
// maxItems returns the max number of items to allow per node.
|
||||
func (t *BTree) maxItems() int {
|
||||
return t.degree*2 - 1
|
||||
}
|
||||
|
||||
// minItems returns the min number of items to allow per node (ignored for the
|
||||
// root node).
|
||||
func (t *BTree) minItems() int {
|
||||
return t.degree - 1
|
||||
}
|
||||
|
||||
func (c *copyOnWriteContext) newNode() (n *node) {
|
||||
n = c.freelist.newNode()
|
||||
n.cow = c
|
||||
return
|
||||
}
|
||||
|
||||
type freeType int
|
||||
|
||||
const (
|
||||
ftFreelistFull freeType = iota // node was freed (available for GC, not stored in freelist)
|
||||
ftStored // node was stored in the freelist for later use
|
||||
ftNotOwned // node was ignored by COW, since it's owned by another one
|
||||
)
|
||||
|
||||
// freeNode frees a node within a given COW context, if it's owned by that
|
||||
// context. It returns what happened to the node (see freeType const
|
||||
// documentation).
|
||||
func (c *copyOnWriteContext) freeNode(n *node) freeType {
|
||||
if n.cow == c {
|
||||
// clear to allow GC
|
||||
n.items.truncate(0)
|
||||
n.children.truncate(0)
|
||||
n.cow = nil
|
||||
if c.freelist.freeNode(n) {
|
||||
return ftStored
|
||||
} else {
|
||||
return ftFreelistFull
|
||||
}
|
||||
} else {
|
||||
return ftNotOwned
|
||||
}
|
||||
}
|
||||
|
||||
// ReplaceOrInsert adds the given item to the tree. If an item in the tree
|
||||
// already equals the given one, it is removed from the tree and returned.
|
||||
// Otherwise, nil is returned.
|
||||
//
|
||||
// nil cannot be added to the tree (will panic).
|
||||
func (t *BTree) ReplaceOrInsert(item Item) Item {
|
||||
if item == nil {
|
||||
panic("nil item being added to BTree")
|
||||
}
|
||||
if t.root == nil {
|
||||
t.root = t.cow.newNode()
|
||||
t.root.items = append(t.root.items, item)
|
||||
t.length++
|
||||
return nil
|
||||
} else {
|
||||
t.root = t.root.mutableFor(t.cow)
|
||||
if len(t.root.items) >= t.maxItems() {
|
||||
item2, second := t.root.split(t.maxItems() / 2)
|
||||
oldroot := t.root
|
||||
t.root = t.cow.newNode()
|
||||
t.root.items = append(t.root.items, item2)
|
||||
t.root.children = append(t.root.children, oldroot, second)
|
||||
}
|
||||
}
|
||||
out := t.root.insert(item, t.maxItems())
|
||||
if out == nil {
|
||||
t.length++
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
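A minimal sketch of the return-value behaviour described in the comment above; the values are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	t := btree.New(4)
	fmt.Println(t.ReplaceOrInsert(btree.Int(7))) // <nil>: nothing was displaced
	fmt.Println(t.ReplaceOrInsert(btree.Int(7))) // 7: the earlier equal item is returned
	fmt.Println(t.Len())                         // 1: equal items are never duplicated
}
```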
// Delete removes an item equal to the passed in item from the tree, returning
|
||||
// it. If no such item exists, returns nil.
|
||||
func (t *BTree) Delete(item Item) Item {
|
||||
return t.deleteItem(item, removeItem)
|
||||
}
|
||||
|
||||
// DeleteMin removes the smallest item in the tree and returns it.
|
||||
// If no such item exists, returns nil.
|
||||
func (t *BTree) DeleteMin() Item {
|
||||
return t.deleteItem(nil, removeMin)
|
||||
}
|
||||
|
||||
// DeleteMax removes the largest item in the tree and returns it.
|
||||
// If no such item exists, returns nil.
|
||||
func (t *BTree) DeleteMax() Item {
|
||||
return t.deleteItem(nil, removeMax)
|
||||
}
|
||||
|
||||
func (t *BTree) deleteItem(item Item, typ toRemove) Item {
|
||||
if t.root == nil || len(t.root.items) == 0 {
|
||||
return nil
|
||||
}
|
||||
t.root = t.root.mutableFor(t.cow)
|
||||
out := t.root.remove(item, t.minItems(), typ)
|
||||
if len(t.root.items) == 0 && len(t.root.children) > 0 {
|
||||
oldroot := t.root
|
||||
t.root = t.root.children[0]
|
||||
t.cow.freeNode(oldroot)
|
||||
}
|
||||
if out != nil {
|
||||
t.length--
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// AscendRange calls the iterator for every value in the tree within the range
|
||||
// [greaterOrEqual, lessThan), until iterator returns false.
|
||||
func (t *BTree) AscendRange(greaterOrEqual, lessThan Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(ascend, greaterOrEqual, lessThan, true, false, iterator)
|
||||
}
|
||||
|
||||
// AscendLessThan calls the iterator for every value in the tree within the range
|
||||
// [first, pivot), until iterator returns false.
|
||||
func (t *BTree) AscendLessThan(pivot Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(ascend, nil, pivot, false, false, iterator)
|
||||
}
|
||||
|
||||
// AscendGreaterOrEqual calls the iterator for every value in the tree within
|
||||
// the range [pivot, last], until iterator returns false.
|
||||
func (t *BTree) AscendGreaterOrEqual(pivot Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(ascend, pivot, nil, true, false, iterator)
|
||||
}
|
||||
|
||||
// Ascend calls the iterator for every value in the tree within the range
|
||||
// [first, last], until iterator returns false.
|
||||
func (t *BTree) Ascend(iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(ascend, nil, nil, false, false, iterator)
|
||||
}
|
||||
|
||||
// DescendRange calls the iterator for every value in the tree within the range
|
||||
// [lessOrEqual, greaterThan), until iterator returns false.
|
||||
func (t *BTree) DescendRange(lessOrEqual, greaterThan Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(descend, lessOrEqual, greaterThan, true, false, iterator)
|
||||
}
|
||||
|
||||
// DescendLessOrEqual calls the iterator for every value in the tree within the range
|
||||
// [pivot, first], until iterator returns false.
|
||||
func (t *BTree) DescendLessOrEqual(pivot Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(descend, pivot, nil, true, false, iterator)
|
||||
}
|
||||
|
||||
// DescendGreaterThan calls the iterator for every value in the tree within
|
||||
// the range (pivot, last], until iterator returns false.
|
||||
func (t *BTree) DescendGreaterThan(pivot Item, iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(descend, nil, pivot, false, false, iterator)
|
||||
}
|
||||
|
||||
// Descend calls the iterator for every value in the tree within the range
|
||||
// [last, first], until iterator returns false.
|
||||
func (t *BTree) Descend(iterator ItemIterator) {
|
||||
if t.root == nil {
|
||||
return
|
||||
}
|
||||
t.root.iterate(descend, nil, nil, false, false, iterator)
|
||||
}
|
||||
|
||||
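The Ascend*/Descend* methods above all drive the same ItemIterator callback; a minimal range-iteration sketch (values are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/google/btree"
)

func main() {
	t := btree.New(4)
	for i := 0; i < 10; i++ {
		t.ReplaceOrInsert(btree.Int(i))
	}

	// Visits the half-open range [3, 7), printing 3 4 5 6; returning false from
	// the ItemIterator would stop the walk early.
	t.AscendRange(btree.Int(3), btree.Int(7), func(i btree.Item) bool {
		fmt.Println(i)
		return true
	})
}
```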
// Get looks for the key item in the tree, returning it. It returns nil if
|
||||
// unable to find that item.
|
||||
func (t *BTree) Get(key Item) Item {
|
||||
if t.root == nil {
|
||||
return nil
|
||||
}
|
||||
return t.root.get(key)
|
||||
}
|
||||
|
||||
// Min returns the smallest item in the tree, or nil if the tree is empty.
|
||||
func (t *BTree) Min() Item {
|
||||
return min(t.root)
|
||||
}
|
||||
|
||||
// Max returns the largest item in the tree, or nil if the tree is empty.
|
||||
func (t *BTree) Max() Item {
|
||||
return max(t.root)
|
||||
}
|
||||
|
||||
// Has returns true if the given key is in the tree.
|
||||
func (t *BTree) Has(key Item) bool {
|
||||
return t.Get(key) != nil
|
||||
}
|
||||
|
||||
// Len returns the number of items currently in the tree.
|
||||
func (t *BTree) Len() int {
|
||||
return t.length
|
||||
}
|
||||
|
||||
// Clear removes all items from the btree. If addNodesToFreelist is true,
|
||||
// t's nodes are added to its freelist as part of this call, until the freelist
|
||||
// is full. Otherwise, the root node is simply dereferenced and the subtree
|
||||
// left to Go's normal GC processes.
|
||||
//
|
||||
// This can be much faster
|
||||
// than calling Delete on all elements, because that requires finding/removing
|
||||
// each element in the tree and updating the tree accordingly. It also is
|
||||
// somewhat faster than creating a new tree to replace the old one, because
|
||||
// nodes from the old tree are reclaimed into the freelist for use by the new
|
||||
// one, instead of being lost to the garbage collector.
|
||||
//
|
||||
// This call takes:
|
||||
// O(1): when addNodesToFreelist is false, this is a single operation.
|
||||
// O(1): when the freelist is already full, it breaks out immediately
|
||||
// O(freelist size): when the freelist is empty and the nodes are all owned
|
||||
// by this tree, nodes are added to the freelist until full.
|
||||
// O(tree size): when all nodes are owned by another tree, all nodes are
|
||||
// iterated over looking for nodes to add to the freelist, and due to
|
||||
// ownership, none are.
|
||||
func (t *BTree) Clear(addNodesToFreelist bool) {
|
||||
if t.root != nil && addNodesToFreelist {
|
||||
t.root.reset(t.cow)
|
||||
}
|
||||
t.root, t.length = nil, 0
|
||||
}
|
||||
|
||||
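A short sketch of the two Clear modes discussed above; the element count is arbitrary:

```go
package main

import "github.com/google/btree"

func main() {
	t := btree.New(8)
	for i := 0; i < 1000; i++ {
		t.ReplaceOrInsert(btree.Int(i))
	}

	// true: walk the tree and hand nodes back to t's freelist (until it fills),
	// so later inserts can reuse them; false would simply drop the root and
	// leave the old nodes to the garbage collector.
	t.Clear(true)

	for i := 0; i < 1000; i++ {
		t.ReplaceOrInsert(btree.Int(i)) // reuses reclaimed nodes
	}
}
```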
// reset returns a subtree to the freelist. It breaks out immediately if the
|
||||
// freelist is full, since the only benefit of iterating is to fill that
|
||||
// freelist up. Returns true if parent reset call should continue.
|
||||
func (n *node) reset(c *copyOnWriteContext) bool {
|
||||
for _, child := range n.children {
|
||||
if !child.reset(c) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return c.freeNode(n) != ftFreelistFull
|
||||
}
|
||||
|
||||
// Int implements the Item interface for integers.
|
||||
type Int int
|
||||
|
||||
// Less returns true if int(a) < int(b).
|
||||
func (a Int) Less(b Item) bool {
|
||||
return a < b.(Int)
|
||||
}
|
||||
76
vendor/github.com/google/btree/btree_mem.go
generated
vendored
@@ -1,76 +0,0 @@
|
||||
// Copyright 2014 Google Inc.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build ignore
|
||||
|
||||
// This binary compares memory usage between btree and gollrb.
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"runtime"
|
||||
"time"
|
||||
|
||||
"github.com/google/btree"
|
||||
"github.com/petar/GoLLRB/llrb"
|
||||
)
|
||||
|
||||
var (
|
||||
size = flag.Int("size", 1000000, "size of the tree to build")
|
||||
degree = flag.Int("degree", 8, "degree of btree")
|
||||
gollrb = flag.Bool("llrb", false, "use llrb instead of btree")
|
||||
)
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
vals := rand.Perm(*size)
|
||||
var t, v interface{}
|
||||
v = vals
|
||||
var stats runtime.MemStats
|
||||
for i := 0; i < 10; i++ {
|
||||
runtime.GC()
|
||||
}
|
||||
fmt.Println("-------- BEFORE ----------")
|
||||
runtime.ReadMemStats(&stats)
|
||||
fmt.Printf("%+v\n", stats)
|
||||
start := time.Now()
|
||||
if *gollrb {
|
||||
tr := llrb.New()
|
||||
for _, v := range vals {
|
||||
tr.ReplaceOrInsert(llrb.Int(v))
|
||||
}
|
||||
t = tr // keep it around
|
||||
} else {
|
||||
tr := btree.New(*degree)
|
||||
for _, v := range vals {
|
||||
tr.ReplaceOrInsert(btree.Int(v))
|
||||
}
|
||||
t = tr // keep it around
|
||||
}
|
||||
fmt.Printf("%v inserts in %v\n", *size, time.Since(start))
|
||||
fmt.Println("-------- AFTER ----------")
|
||||
runtime.ReadMemStats(&stats)
|
||||
fmt.Printf("%+v\n", stats)
|
||||
for i := 0; i < 10; i++ {
|
||||
runtime.GC()
|
||||
}
|
||||
fmt.Println("-------- AFTER GC ----------")
|
||||
runtime.ReadMemStats(&stats)
|
||||
fmt.Printf("%+v\n", stats)
|
||||
if t == v {
|
||||
fmt.Println("to make sure vals and tree aren't GC'd")
|
||||
}
|
||||
}
|
||||
62
vendor/github.com/google/go-cmp/cmp/cmpopts/ignore.go
generated
vendored
@@ -11,6 +11,7 @@ import (
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"github.com/google/go-cmp/cmp/internal/function"
|
||||
)
|
||||
|
||||
// IgnoreFields returns an Option that ignores exported fields of the
|
||||
@@ -112,6 +113,10 @@ func (tf ifaceFilter) filter(p cmp.Path) bool {
|
||||
// In particular, unexported fields within the struct's exported fields
|
||||
// of struct types, including anonymous fields, will not be ignored unless the
|
||||
// type of the field itself is also passed to IgnoreUnexported.
|
||||
//
|
||||
// Avoid ignoring unexported fields of a type which you do not control (i.e. a
|
||||
// type from another repository), as changes to the implementation of such types
|
||||
// may change how the comparison behaves. Prefer a custom Comparer instead.
|
||||
func IgnoreUnexported(typs ...interface{}) cmp.Option {
|
||||
ux := newUnexportedFilter(typs...)
|
||||
return cmp.FilterPath(ux.filter, cmp.Ignore())
|
||||
@@ -143,3 +148,60 @@ func isExported(id string) bool {
|
||||
r, _ := utf8.DecodeRuneInString(id)
|
||||
return unicode.IsUpper(r)
|
||||
}
|
||||
|
||||
// IgnoreSliceElements returns an Option that ignores elements of []V.
|
||||
// The discard function must be of the form "func(T) bool" which is used to
|
||||
// ignore slice elements of type V, where V is assignable to T.
|
||||
// Elements are ignored if the function reports true.
|
||||
func IgnoreSliceElements(discardFunc interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(discardFunc)
|
||||
if !function.IsType(vf.Type(), function.ValuePredicate) || vf.IsNil() {
|
||||
panic(fmt.Sprintf("invalid discard function: %T", discardFunc))
|
||||
}
|
||||
return cmp.FilterPath(func(p cmp.Path) bool {
|
||||
si, ok := p.Index(-1).(cmp.SliceIndex)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
if !si.Type().AssignableTo(vf.Type().In(0)) {
|
||||
return false
|
||||
}
|
||||
vx, vy := si.Values()
|
||||
if vx.IsValid() && vf.Call([]reflect.Value{vx})[0].Bool() {
|
||||
return true
|
||||
}
|
||||
if vy.IsValid() && vf.Call([]reflect.Value{vy})[0].Bool() {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}, cmp.Ignore())
|
||||
}
|
||||
|
||||
// IgnoreMapEntries returns an Option that ignores entries of map[K]V.
|
||||
// The discard function must be of the form "func(T, R) bool" which is used to
|
||||
// ignore map entries of type K and V, where K and V are assignable to T and R.
|
||||
// Entries are ignored if the function reports true.
|
||||
func IgnoreMapEntries(discardFunc interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(discardFunc)
|
||||
if !function.IsType(vf.Type(), function.KeyValuePredicate) || vf.IsNil() {
|
||||
panic(fmt.Sprintf("invalid discard function: %T", discardFunc))
|
||||
}
|
||||
return cmp.FilterPath(func(p cmp.Path) bool {
|
||||
mi, ok := p.Index(-1).(cmp.MapIndex)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
if !mi.Key().Type().AssignableTo(vf.Type().In(0)) || !mi.Type().AssignableTo(vf.Type().In(1)) {
|
||||
return false
|
||||
}
|
||||
k := mi.Key()
|
||||
vx, vy := mi.Values()
|
||||
if vx.IsValid() && vf.Call([]reflect.Value{k, vx})[0].Bool() {
|
||||
return true
|
||||
}
|
||||
if vy.IsValid() && vf.Call([]reflect.Value{k, vy})[0].Bool() {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}, cmp.Ignore())
|
||||
}
|
||||
|
||||
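Both options added above take a discard predicate whose true result drops an element or entry from the comparison; a small usage sketch with illustrative values:

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

func main() {
	x := []int{1, 2, 3, -1}
	y := []int{1, 2, 3, -5}

	// Negative sentinel values are ignored, so the slices compare equal.
	fmt.Println(cmp.Equal(x, y, cmpopts.IgnoreSliceElements(func(v int) bool {
		return v < 0
	}))) // true

	mx := map[string]int{"a": 1, "tmp": 9}
	my := map[string]int{"a": 1, "tmp": 3}

	// Entries whose key is "tmp" are ignored, so the maps compare equal.
	fmt.Println(cmp.Equal(mx, my, cmpopts.IgnoreMapEntries(func(k string, v int) bool {
		return k == "tmp"
	}))) // true
}
```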
33
vendor/github.com/google/go-cmp/cmp/cmpopts/sort.go
generated
vendored
@@ -7,6 +7,7 @@ package cmpopts
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"sort"
|
||||
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"github.com/google/go-cmp/cmp/internal/function"
|
||||
@@ -25,13 +26,13 @@ import (
|
||||
// !less(y, x) for two elements x and y, their relative order is maintained.
|
||||
//
|
||||
// SortSlices can be used in conjunction with EquateEmpty.
|
||||
func SortSlices(less interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(less)
|
||||
func SortSlices(lessFunc interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(lessFunc)
|
||||
if !function.IsType(vf.Type(), function.Less) || vf.IsNil() {
|
||||
panic(fmt.Sprintf("invalid less function: %T", less))
|
||||
panic(fmt.Sprintf("invalid less function: %T", lessFunc))
|
||||
}
|
||||
ss := sliceSorter{vf.Type().In(0), vf}
|
||||
return cmp.FilterValues(ss.filter, cmp.Transformer("Sort", ss.sort))
|
||||
return cmp.FilterValues(ss.filter, cmp.Transformer("cmpopts.SortSlices", ss.sort))
|
||||
}
|
||||
|
||||
type sliceSorter struct {
|
||||
@@ -48,8 +49,8 @@ func (ss sliceSorter) filter(x, y interface{}) bool {
|
||||
}
|
||||
// Check whether the slices are already sorted to avoid an infinite
|
||||
// recursion cycle applying the same transform to itself.
|
||||
ok1 := sliceIsSorted(x, func(i, j int) bool { return ss.less(vx, i, j) })
|
||||
ok2 := sliceIsSorted(y, func(i, j int) bool { return ss.less(vy, i, j) })
|
||||
ok1 := sort.SliceIsSorted(x, func(i, j int) bool { return ss.less(vx, i, j) })
|
||||
ok2 := sort.SliceIsSorted(y, func(i, j int) bool { return ss.less(vy, i, j) })
|
||||
return !ok1 || !ok2
|
||||
}
|
||||
func (ss sliceSorter) sort(x interface{}) interface{} {
|
||||
@@ -58,7 +59,7 @@ func (ss sliceSorter) sort(x interface{}) interface{} {
|
||||
for i := 0; i < src.Len(); i++ {
|
||||
dst.Index(i).Set(src.Index(i))
|
||||
}
|
||||
sortSliceStable(dst.Interface(), func(i, j int) bool { return ss.less(dst, i, j) })
|
||||
sort.SliceStable(dst.Interface(), func(i, j int) bool { return ss.less(dst, i, j) })
|
||||
ss.checkSort(dst)
|
||||
return dst.Interface()
|
||||
}
|
||||
@@ -96,13 +97,13 @@ func (ss sliceSorter) less(v reflect.Value, i, j int) bool {
|
||||
// • Total: if x != y, then either less(x, y) or less(y, x)
|
||||
//
|
||||
// SortMaps can be used in conjunction with EquateEmpty.
|
||||
func SortMaps(less interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(less)
|
||||
func SortMaps(lessFunc interface{}) cmp.Option {
|
||||
vf := reflect.ValueOf(lessFunc)
|
||||
if !function.IsType(vf.Type(), function.Less) || vf.IsNil() {
|
||||
panic(fmt.Sprintf("invalid less function: %T", less))
|
||||
panic(fmt.Sprintf("invalid less function: %T", lessFunc))
|
||||
}
|
||||
ms := mapSorter{vf.Type().In(0), vf}
|
||||
return cmp.FilterValues(ms.filter, cmp.Transformer("Sort", ms.sort))
|
||||
return cmp.FilterValues(ms.filter, cmp.Transformer("cmpopts.SortMaps", ms.sort))
|
||||
}
|
||||
|
||||
type mapSorter struct {
|
||||
@@ -118,7 +119,10 @@ func (ms mapSorter) filter(x, y interface{}) bool {
|
||||
}
|
||||
func (ms mapSorter) sort(x interface{}) interface{} {
|
||||
src := reflect.ValueOf(x)
|
||||
outType := mapEntryType(src.Type())
|
||||
outType := reflect.StructOf([]reflect.StructField{
|
||||
{Name: "K", Type: src.Type().Key()},
|
||||
{Name: "V", Type: src.Type().Elem()},
|
||||
})
|
||||
dst := reflect.MakeSlice(reflect.SliceOf(outType), src.Len(), src.Len())
|
||||
for i, k := range src.MapKeys() {
|
||||
v := reflect.New(outType).Elem()
|
||||
@@ -126,7 +130,7 @@ func (ms mapSorter) sort(x interface{}) interface{} {
|
||||
v.Field(1).Set(src.MapIndex(k))
|
||||
dst.Index(i).Set(v)
|
||||
}
|
||||
sortSlice(dst.Interface(), func(i, j int) bool { return ms.less(dst, i, j) })
|
||||
sort.Slice(dst.Interface(), func(i, j int) bool { return ms.less(dst, i, j) })
|
||||
ms.checkSort(dst)
|
||||
return dst.Interface()
|
||||
}
|
||||
@@ -139,8 +143,5 @@ func (ms mapSorter) checkSort(v reflect.Value) {
|
||||
}
|
||||
func (ms mapSorter) less(v reflect.Value, i, j int) bool {
|
||||
vx, vy := v.Index(i).Field(0), v.Index(j).Field(0)
|
||||
if !hasReflectStructOf {
|
||||
vx, vy = vx.Elem(), vy.Elem()
|
||||
}
|
||||
return ms.fnc.Call([]reflect.Value{vx, vy})[0].Bool()
|
||||
}
|
||||
|
||||
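A minimal sketch of how the SortSlices option above is typically used so that element order no longer matters; the values and less function are illustrative:

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

func main() {
	x := []int{3, 1, 2}
	y := []int{1, 2, 3}

	// Without an option the order matters; SortSlices sorts both sides with the
	// supplied less function before they are compared element by element.
	fmt.Println(cmp.Equal(x, y)) // false
	fmt.Println(cmp.Equal(x, y, cmpopts.SortSlices(func(a, b int) bool { return a < b }))) // true
}
```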
46
vendor/github.com/google/go-cmp/cmp/cmpopts/sort_go17.go
generated
vendored
@@ -1,46 +0,0 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build !go1.8
|
||||
|
||||
package cmpopts
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
)
|
||||
|
||||
const hasReflectStructOf = false
|
||||
|
||||
func mapEntryType(reflect.Type) reflect.Type {
|
||||
return reflect.TypeOf(struct{ K, V interface{} }{})
|
||||
}
|
||||
|
||||
func sliceIsSorted(slice interface{}, less func(i, j int) bool) bool {
|
||||
return sort.IsSorted(reflectSliceSorter{reflect.ValueOf(slice), less})
|
||||
}
|
||||
func sortSlice(slice interface{}, less func(i, j int) bool) {
|
||||
sort.Sort(reflectSliceSorter{reflect.ValueOf(slice), less})
|
||||
}
|
||||
func sortSliceStable(slice interface{}, less func(i, j int) bool) {
|
||||
sort.Stable(reflectSliceSorter{reflect.ValueOf(slice), less})
|
||||
}
|
||||
|
||||
type reflectSliceSorter struct {
|
||||
slice reflect.Value
|
||||
less func(i, j int) bool
|
||||
}
|
||||
|
||||
func (ss reflectSliceSorter) Len() int {
|
||||
return ss.slice.Len()
|
||||
}
|
||||
func (ss reflectSliceSorter) Less(i, j int) bool {
|
||||
return ss.less(i, j)
|
||||
}
|
||||
func (ss reflectSliceSorter) Swap(i, j int) {
|
||||
vi := ss.slice.Index(i).Interface()
|
||||
vj := ss.slice.Index(j).Interface()
|
||||
ss.slice.Index(i).Set(reflect.ValueOf(vj))
|
||||
ss.slice.Index(j).Set(reflect.ValueOf(vi))
|
||||
}
|
||||
31
vendor/github.com/google/go-cmp/cmp/cmpopts/sort_go18.go
generated
vendored
@@ -1,31 +0,0 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build go1.8
|
||||
|
||||
package cmpopts
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
)
|
||||
|
||||
const hasReflectStructOf = true
|
||||
|
||||
func mapEntryType(t reflect.Type) reflect.Type {
|
||||
return reflect.StructOf([]reflect.StructField{
|
||||
{Name: "K", Type: t.Key()},
|
||||
{Name: "V", Type: t.Elem()},
|
||||
})
|
||||
}
|
||||
|
||||
func sliceIsSorted(slice interface{}, less func(i, j int) bool) bool {
|
||||
return sort.SliceIsSorted(slice, less)
|
||||
}
|
||||
func sortSlice(slice interface{}, less func(i, j int) bool) {
|
||||
sort.Slice(slice, less)
|
||||
}
|
||||
func sortSliceStable(slice interface{}, less func(i, j int) bool) {
|
||||
sort.SliceStable(slice, less)
|
||||
}
|
||||
35
vendor/github.com/google/go-cmp/cmp/cmpopts/xform.go
generated
vendored
Normal file
@@ -0,0 +1,35 @@
|
||||
// Copyright 2018, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmpopts
|
||||
|
||||
import (
|
||||
"github.com/google/go-cmp/cmp"
|
||||
)
|
||||
|
||||
type xformFilter struct{ xform cmp.Option }
|
||||
|
||||
func (xf xformFilter) filter(p cmp.Path) bool {
|
||||
for _, ps := range p {
|
||||
if t, ok := ps.(cmp.Transform); ok && t.Option() == xf.xform {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// AcyclicTransformer returns a Transformer with a filter applied that ensures
|
||||
// that the transformer cannot be recursively applied upon its own output.
|
||||
//
|
||||
// An example use case is a transformer that splits a string by lines:
|
||||
// AcyclicTransformer("SplitLines", func(s string) []string{
|
||||
// return strings.Split(s, "\n")
|
||||
// })
|
||||
//
|
||||
// Had this been an unfiltered Transformer instead, this would result in an
|
||||
// infinite cycle converting a string to []string to [][]string and so on.
|
||||
func AcyclicTransformer(name string, xformFunc interface{}) cmp.Option {
|
||||
xf := xformFilter{cmp.Transformer(name, xformFunc)}
|
||||
return cmp.FilterPath(xf.filter, xf.xform)
|
||||
}
|
||||
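Extending the SplitLines illustration in the comment above, a short sketch of passing such a transformer to cmp.Diff:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

func main() {
	splitLines := cmpopts.AcyclicTransformer("SplitLines", func(s string) []string {
		return strings.Split(s, "\n")
	})

	// Strings are compared line by line; the filter stops the transformer from
	// being re-applied to the []string it produces.
	fmt.Println(cmp.Diff("a\nb\nc", "a\nB\nc", splitLines))
}
```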
583
vendor/github.com/google/go-cmp/cmp/compare.go
generated
vendored
@@ -29,26 +29,17 @@ package cmp
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/diff"
|
||||
"github.com/google/go-cmp/cmp/internal/flags"
|
||||
"github.com/google/go-cmp/cmp/internal/function"
|
||||
"github.com/google/go-cmp/cmp/internal/value"
|
||||
)
|
||||
|
||||
// BUG(dsnet): Maps with keys containing NaN values cannot be properly compared due to
|
||||
// the reflection package's inability to retrieve such entries. Equal will panic
|
||||
// anytime it comes across a NaN key, but this behavior may change.
|
||||
//
|
||||
// See https://golang.org/issue/11104 for more details.
|
||||
|
||||
var nothing = reflect.Value{}
|
||||
|
||||
// Equal reports whether x and y are equal by recursively applying the
|
||||
// following rules in the given order to x and y and all of their sub-values:
|
||||
//
|
||||
// • If two values are not of the same type, then they are never equal
|
||||
// and the overall result is false.
|
||||
//
|
||||
// • Let S be the set of all Ignore, Transformer, and Comparer options that
|
||||
// remain after applying all path filters, value filters, and type filters.
|
||||
// If at least one Ignore exists in S, then the comparison is ignored.
|
||||
@@ -61,43 +52,79 @@ var nothing = reflect.Value{}
|
||||
//
|
||||
// • If the values have an Equal method of the form "(T) Equal(T) bool" or
|
||||
// "(T) Equal(I) bool" where T is assignable to I, then use the result of
|
||||
// x.Equal(y) even if x or y is nil.
|
||||
// Otherwise, no such method exists and evaluation proceeds to the next rule.
|
||||
// x.Equal(y) even if x or y is nil. Otherwise, no such method exists and
|
||||
// evaluation proceeds to the next rule.
|
||||
//
|
||||
// • Lastly, try to compare x and y based on their basic kinds.
|
||||
// Simple kinds like booleans, integers, floats, complex numbers, strings, and
|
||||
// channels are compared using the equivalent of the == operator in Go.
|
||||
// Functions are only equal if they are both nil, otherwise they are unequal.
|
||||
// Pointers are equal if the underlying values they point to are also equal.
|
||||
// Interfaces are equal if their underlying concrete values are also equal.
|
||||
//
|
||||
// Structs are equal if all of their fields are equal. If a struct contains
|
||||
// unexported fields, Equal panics unless the AllowUnexported option is used or
|
||||
// an Ignore option (e.g., cmpopts.IgnoreUnexported) ignores that field.
|
||||
// Structs are equal if recursively calling Equal on all fields report equal.
|
||||
// If a struct contains unexported fields, Equal panics unless an Ignore option
|
||||
// (e.g., cmpopts.IgnoreUnexported) ignores that field or the AllowUnexported
|
||||
// option explicitly permits comparing the unexported field.
|
||||
//
|
||||
// Arrays, slices, and maps are equal if they are both nil or both non-nil
|
||||
// with the same length and the elements at each index or key are equal.
|
||||
// Note that a non-nil empty slice and a nil slice are not equal.
|
||||
// To equate empty slices and maps, consider using cmpopts.EquateEmpty.
|
||||
// Slices are equal if they are both nil or both non-nil, where recursively
|
||||
// calling Equal on all non-ignored slice or array elements report equal.
|
||||
// Empty non-nil slices and nil slices are not equal; to equate empty slices,
|
||||
// consider using cmpopts.EquateEmpty.
|
||||
//
|
||||
// Maps are equal if they are both nil or both non-nil, where recursively
|
||||
// calling Equal on all non-ignored map entries report equal.
|
||||
// Map keys are equal according to the == operator.
|
||||
// To use custom comparisons for map keys, consider using cmpopts.SortMaps.
|
||||
// Empty non-nil maps and nil maps are not equal; to equate empty maps,
|
||||
// consider using cmpopts.EquateEmpty.
|
||||
//
|
||||
// Pointers and interfaces are equal if they are both nil or both non-nil,
|
||||
// where they have the same underlying concrete type and recursively
|
||||
// calling Equal on the underlying values reports equal.
|
||||
func Equal(x, y interface{}, opts ...Option) bool {
|
||||
vx := reflect.ValueOf(x)
|
||||
vy := reflect.ValueOf(y)
|
||||
|
||||
// If the inputs are different types, auto-wrap them in an empty interface
|
||||
// so that they have the same parent type.
|
||||
var t reflect.Type
|
||||
if !vx.IsValid() || !vy.IsValid() || vx.Type() != vy.Type() {
|
||||
t = reflect.TypeOf((*interface{})(nil)).Elem()
|
||||
if vx.IsValid() {
|
||||
vvx := reflect.New(t).Elem()
|
||||
vvx.Set(vx)
|
||||
vx = vvx
|
||||
}
|
||||
if vy.IsValid() {
|
||||
vvy := reflect.New(t).Elem()
|
||||
vvy.Set(vy)
|
||||
vy = vvy
|
||||
}
|
||||
} else {
|
||||
t = vx.Type()
|
||||
}
|
||||
|
||||
s := newState(opts)
|
||||
s.compareAny(reflect.ValueOf(x), reflect.ValueOf(y))
|
||||
s.compareAny(&pathStep{t, vx, vy})
|
||||
return s.result.Equal()
|
||||
}
|
||||
|
||||
// Diff returns a human-readable report of the differences between two values.
|
||||
// It returns an empty string if and only if Equal returns true for the same
|
||||
// input values and options. The output string will use the "-" symbol to
|
||||
// indicate elements removed from x, and the "+" symbol to indicate elements
|
||||
// added to y.
|
||||
// input values and options.
|
||||
//
|
||||
// Do not depend on this output being stable.
|
||||
// The output is displayed as a literal in pseudo-Go syntax.
|
||||
// At the start of each line, a "-" prefix indicates an element removed from x,
|
||||
// a "+" prefix to indicates an element added to y, and the lack of a prefix
|
||||
// indicates an element common to both x and y. If possible, the output
|
||||
// uses fmt.Stringer.String or error.Error methods to produce more humanly
|
||||
// readable outputs. In such cases, the string is prefixed with either an
|
||||
// 's' or 'e' character, respectively, to indicate that the method was called.
|
||||
//
|
||||
// Do not depend on this output being stable. If you need the ability to
|
||||
// programmatically interpret the difference, consider using a custom Reporter.
|
||||
func Diff(x, y interface{}, opts ...Option) string {
|
||||
r := new(defaultReporter)
|
||||
opts = Options{Options(opts), r}
|
||||
eq := Equal(x, y, opts...)
|
||||
eq := Equal(x, y, Options(opts), Reporter(r))
|
||||
d := r.String()
|
||||
if (d == "") != eq {
|
||||
panic("inconsistent difference and equality results")
|
||||
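A minimal usage sketch of Equal and Diff as documented above; the user struct and the IgnoreFields option are illustrative, not part of this change:

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

type user struct {
	Name  string
	Email string
}

func main() {
	x := user{Name: "alice", Email: "a@example.com"}
	y := user{Name: "alice", Email: "alice@example.com"}

	fmt.Println(cmp.Equal(x, y)) // false: Email differs
	fmt.Println(cmp.Diff(x, y))  // "-" lines come from x, "+" lines from y
	fmt.Println(cmp.Equal(x, y, cmpopts.IgnoreFields(user{}, "Email"))) // true
}
```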
@@ -108,9 +135,13 @@ func Diff(x, y interface{}, opts ...Option) string {
|
||||
type state struct {
|
||||
// These fields represent the "comparison state".
|
||||
// Calling statelessCompare must not result in observable changes to these.
|
||||
result diff.Result // The current result of comparison
|
||||
curPath Path // The current path in the value tree
|
||||
reporter reporter // Optional reporter used for difference formatting
|
||||
result diff.Result // The current result of comparison
|
||||
curPath Path // The current path in the value tree
|
||||
reporters []reporter // Optional reporters
|
||||
|
||||
// recChecker checks for infinite cycles applying the same set of
|
||||
// transformers upon the output of itself.
|
||||
recChecker recChecker
|
||||
|
||||
// dynChecker triggers pseudo-random checks for option correctness.
|
||||
// It is safe for statelessCompare to mutate this value.
|
||||
@@ -122,10 +153,9 @@ type state struct {
|
||||
}
|
||||
|
||||
func newState(opts []Option) *state {
|
||||
s := new(state)
|
||||
for _, opt := range opts {
|
||||
s.processOption(opt)
|
||||
}
|
||||
// Always ensure a validator option exists to validate the inputs.
|
||||
s := &state{opts: Options{validator{}}}
|
||||
s.processOption(Options(opts))
|
||||
return s
|
||||
}
|
||||
|
||||
@@ -152,10 +182,7 @@ func (s *state) processOption(opt Option) {
|
||||
s.exporters[t] = true
|
||||
}
|
||||
case reporter:
|
||||
if s.reporter != nil {
|
||||
panic("difference reporter already registered")
|
||||
}
|
||||
s.reporter = opt
|
||||
s.reporters = append(s.reporters, opt)
|
||||
default:
|
||||
panic(fmt.Sprintf("unknown option %T", opt))
|
||||
}
|
||||
@@ -164,153 +191,88 @@ func (s *state) processOption(opt Option) {
|
||||
// statelessCompare compares two values and returns the result.
|
||||
// This function is stateless in that it does not alter the current result,
|
||||
// or output to any registered reporters.
|
||||
func (s *state) statelessCompare(vx, vy reflect.Value) diff.Result {
|
||||
func (s *state) statelessCompare(step PathStep) diff.Result {
|
||||
// We do not save and restore the curPath because all of the compareX
|
||||
// methods should properly push and pop from the path.
|
||||
// It is an implementation bug if the contents of curPath differs from
|
||||
// when calling this function to when returning from it.
|
||||
|
||||
oldResult, oldReporter := s.result, s.reporter
|
||||
oldResult, oldReporters := s.result, s.reporters
|
||||
s.result = diff.Result{} // Reset result
|
||||
s.reporter = nil // Remove reporter to avoid spurious printouts
|
||||
s.compareAny(vx, vy)
|
||||
s.reporters = nil // Remove reporters to avoid spurious printouts
|
||||
s.compareAny(step)
|
||||
res := s.result
|
||||
s.result, s.reporter = oldResult, oldReporter
|
||||
s.result, s.reporters = oldResult, oldReporters
|
||||
return res
|
||||
}
|
||||
|
||||
func (s *state) compareAny(vx, vy reflect.Value) {
|
||||
// TODO: Support cyclic data structures.
|
||||
func (s *state) compareAny(step PathStep) {
|
||||
// Update the path stack.
|
||||
s.curPath.push(step)
|
||||
defer s.curPath.pop()
|
||||
for _, r := range s.reporters {
|
||||
r.PushStep(step)
|
||||
defer r.PopStep()
|
||||
}
|
||||
s.recChecker.Check(s.curPath)
|
||||
|
||||
// Rule 0: Differing types are never equal.
|
||||
if !vx.IsValid() || !vy.IsValid() {
|
||||
s.report(vx.IsValid() == vy.IsValid(), vx, vy)
|
||||
return
|
||||
}
|
||||
if vx.Type() != vy.Type() {
|
||||
s.report(false, vx, vy) // Possible for path to be empty
|
||||
return
|
||||
}
|
||||
t := vx.Type()
|
||||
if len(s.curPath) == 0 {
|
||||
s.curPath.push(&pathStep{typ: t})
|
||||
defer s.curPath.pop()
|
||||
}
|
||||
vx, vy = s.tryExporting(vx, vy)
|
||||
// Obtain the current type and values.
|
||||
t := step.Type()
|
||||
vx, vy := step.Values()
|
||||
|
||||
// Rule 1: Check whether an option applies on this node in the value tree.
|
||||
if s.tryOptions(vx, vy, t) {
|
||||
if s.tryOptions(t, vx, vy) {
|
||||
return
|
||||
}
|
||||
|
||||
// Rule 2: Check whether the type has a valid Equal method.
|
||||
if s.tryMethod(vx, vy, t) {
|
||||
if s.tryMethod(t, vx, vy) {
|
||||
return
|
||||
}
|
||||
|
||||
// Rule 3: Recursively descend into each value's underlying kind.
|
||||
// Rule 3: Compare based on the underlying kind.
|
||||
switch t.Kind() {
|
||||
case reflect.Bool:
|
||||
s.report(vx.Bool() == vy.Bool(), vx, vy)
|
||||
return
|
||||
s.report(vx.Bool() == vy.Bool(), 0)
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
s.report(vx.Int() == vy.Int(), vx, vy)
|
||||
return
|
||||
s.report(vx.Int() == vy.Int(), 0)
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
s.report(vx.Uint() == vy.Uint(), vx, vy)
|
||||
return
|
||||
s.report(vx.Uint() == vy.Uint(), 0)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
s.report(vx.Float() == vy.Float(), vx, vy)
|
||||
return
|
||||
s.report(vx.Float() == vy.Float(), 0)
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
s.report(vx.Complex() == vy.Complex(), vx, vy)
|
||||
return
|
||||
s.report(vx.Complex() == vy.Complex(), 0)
|
||||
case reflect.String:
|
||||
s.report(vx.String() == vy.String(), vx, vy)
|
||||
return
|
||||
s.report(vx.String() == vy.String(), 0)
|
||||
case reflect.Chan, reflect.UnsafePointer:
|
||||
s.report(vx.Pointer() == vy.Pointer(), vx, vy)
|
||||
return
|
||||
s.report(vx.Pointer() == vy.Pointer(), 0)
|
||||
case reflect.Func:
|
||||
s.report(vx.IsNil() && vy.IsNil(), vx, vy)
|
||||
return
|
||||
case reflect.Ptr:
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), vx, vy)
|
||||
return
|
||||
}
|
||||
s.curPath.push(&indirect{pathStep{t.Elem()}})
|
||||
defer s.curPath.pop()
|
||||
s.compareAny(vx.Elem(), vy.Elem())
|
||||
return
|
||||
case reflect.Interface:
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), vx, vy)
|
||||
return
|
||||
}
|
||||
if vx.Elem().Type() != vy.Elem().Type() {
|
||||
s.report(false, vx.Elem(), vy.Elem())
|
||||
return
|
||||
}
|
||||
s.curPath.push(&typeAssertion{pathStep{vx.Elem().Type()}})
|
||||
defer s.curPath.pop()
|
||||
s.compareAny(vx.Elem(), vy.Elem())
|
||||
return
|
||||
case reflect.Slice:
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), vx, vy)
|
||||
return
|
||||
}
|
||||
fallthrough
|
||||
case reflect.Array:
|
||||
s.compareArray(vx, vy, t)
|
||||
return
|
||||
case reflect.Map:
|
||||
s.compareMap(vx, vy, t)
|
||||
return
|
||||
s.report(vx.IsNil() && vy.IsNil(), 0)
|
||||
case reflect.Struct:
|
||||
s.compareStruct(vx, vy, t)
|
||||
return
|
||||
s.compareStruct(t, vx, vy)
|
||||
case reflect.Slice, reflect.Array:
|
||||
s.compareSlice(t, vx, vy)
|
||||
case reflect.Map:
|
||||
s.compareMap(t, vx, vy)
|
||||
case reflect.Ptr:
|
||||
s.comparePtr(t, vx, vy)
|
||||
case reflect.Interface:
|
||||
s.compareInterface(t, vx, vy)
|
||||
default:
|
||||
panic(fmt.Sprintf("%v kind not handled", t.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) tryExporting(vx, vy reflect.Value) (reflect.Value, reflect.Value) {
|
||||
if sf, ok := s.curPath[len(s.curPath)-1].(*structField); ok && sf.unexported {
|
||||
if sf.force {
|
||||
// Use unsafe pointer arithmetic to get read-write access to an
|
||||
// unexported field in the struct.
|
||||
vx = unsafeRetrieveField(sf.pvx, sf.field)
|
||||
vy = unsafeRetrieveField(sf.pvy, sf.field)
|
||||
} else {
|
||||
// We are not allowed to export the value, so invalidate them
|
||||
// so that tryOptions can panic later if not explicitly ignored.
|
||||
vx = nothing
|
||||
vy = nothing
|
||||
}
|
||||
}
|
||||
return vx, vy
|
||||
}
|
||||
|
||||
func (s *state) tryOptions(vx, vy reflect.Value, t reflect.Type) bool {
|
||||
// If there were no FilterValues, we will not detect invalid inputs,
|
||||
// so manually check for them and append invalid if necessary.
|
||||
// We still evaluate the options since an ignore can override invalid.
|
||||
opts := s.opts
|
||||
if !vx.IsValid() || !vy.IsValid() {
|
||||
opts = Options{opts, invalid{}}
|
||||
}
|
||||
|
||||
func (s *state) tryOptions(t reflect.Type, vx, vy reflect.Value) bool {
|
||||
// Evaluate all filters and apply the remaining options.
|
||||
if opt := opts.filter(s, vx, vy, t); opt != nil {
|
||||
if opt := s.opts.filter(s, t, vx, vy); opt != nil {
|
||||
opt.apply(s, vx, vy)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (s *state) tryMethod(vx, vy reflect.Value, t reflect.Type) bool {
|
||||
func (s *state) tryMethod(t reflect.Type, vx, vy reflect.Value) bool {
|
||||
// Check if this type even has an Equal method.
|
||||
m, ok := t.MethodByName("Equal")
|
||||
if !ok || !function.IsType(m.Type, function.EqualAssignable) {
|
||||
@@ -318,11 +280,11 @@ func (s *state) tryMethod(vx, vy reflect.Value, t reflect.Type) bool {
|
||||
}
|
||||
|
||||
eq := s.callTTBFunc(m.Func, vx, vy)
|
||||
s.report(eq, vx, vy)
|
||||
s.report(eq, reportByMethod)
|
||||
return true
|
||||
}
|
||||
|
||||
func (s *state) callTRFunc(f, v reflect.Value) reflect.Value {
|
||||
func (s *state) callTRFunc(f, v reflect.Value, step Transform) reflect.Value {
|
||||
v = sanitizeValue(v, f.Type().In(0))
|
||||
if !s.dynChecker.Next() {
|
||||
return f.Call([]reflect.Value{v})[0]
|
||||
@@ -333,15 +295,15 @@ func (s *state) callTRFunc(f, v reflect.Value) reflect.Value {
|
||||
// unsafe mutations to the input.
|
||||
c := make(chan reflect.Value)
|
||||
go detectRaces(c, f, v)
|
||||
got := <-c
|
||||
want := f.Call([]reflect.Value{v})[0]
|
||||
if got := <-c; !s.statelessCompare(got, want).Equal() {
|
||||
if step.vx, step.vy = got, want; !s.statelessCompare(step).Equal() {
|
||||
// To avoid false-positives with non-reflexive equality operations,
|
||||
// we sanity check whether a value is equal to itself.
|
||||
if !s.statelessCompare(want, want).Equal() {
|
||||
if step.vx, step.vy = want, want; !s.statelessCompare(step).Equal() {
|
||||
return want
|
||||
}
|
||||
fn := getFuncName(f.Pointer())
|
||||
panic(fmt.Sprintf("non-deterministic function detected: %s", fn))
|
||||
panic(fmt.Sprintf("non-deterministic function detected: %s", function.NameOf(f)))
|
||||
}
|
||||
return want
|
||||
}
|
||||
@@ -359,10 +321,10 @@ func (s *state) callTTBFunc(f, x, y reflect.Value) bool {
|
||||
// unsafe mutations to the input.
|
||||
c := make(chan reflect.Value)
|
||||
go detectRaces(c, f, y, x)
|
||||
got := <-c
|
||||
want := f.Call([]reflect.Value{x, y})[0].Bool()
|
||||
if got := <-c; !got.IsValid() || got.Bool() != want {
|
||||
fn := getFuncName(f.Pointer())
|
||||
panic(fmt.Sprintf("non-deterministic or non-symmetric function detected: %s", fn))
|
||||
if !got.IsValid() || got.Bool() != want {
|
||||
panic(fmt.Sprintf("non-deterministic or non-symmetric function detected: %s", function.NameOf(f)))
|
||||
}
|
||||
return want
|
||||
}
|
||||
@@ -380,140 +342,241 @@ func detectRaces(c chan<- reflect.Value, f reflect.Value, vs ...reflect.Value) {
|
||||
// assuming that T is assignable to R.
|
||||
// Otherwise, it returns the input value as is.
|
||||
func sanitizeValue(v reflect.Value, t reflect.Type) reflect.Value {
|
||||
// TODO(dsnet): Remove this hacky workaround.
|
||||
// See https://golang.org/issue/22143
|
||||
if v.Kind() == reflect.Interface && v.IsNil() && v.Type() != t {
|
||||
return reflect.New(t).Elem()
|
||||
// TODO(dsnet): Workaround for reflect bug (https://golang.org/issue/22143).
|
||||
if !flags.AtLeastGo110 {
|
||||
if v.Kind() == reflect.Interface && v.IsNil() && v.Type() != t {
|
||||
return reflect.New(t).Elem()
|
||||
}
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func (s *state) compareArray(vx, vy reflect.Value, t reflect.Type) {
|
||||
step := &sliceIndex{pathStep{t.Elem()}, 0, 0}
|
||||
s.curPath.push(step)
|
||||
|
||||
// Compute an edit-script for slices vx and vy.
|
||||
es := diff.Difference(vx.Len(), vy.Len(), func(ix, iy int) diff.Result {
|
||||
step.xkey, step.ykey = ix, iy
|
||||
return s.statelessCompare(vx.Index(ix), vy.Index(iy))
|
||||
})
|
||||
|
||||
// Report the entire slice as is if the arrays are of primitive kind,
|
||||
// and the arrays are different enough.
|
||||
isPrimitive := false
|
||||
switch t.Elem().Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
|
||||
reflect.Bool, reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128:
|
||||
isPrimitive = true
|
||||
}
|
||||
if isPrimitive && es.Dist() > (vx.Len()+vy.Len())/4 {
|
||||
s.curPath.pop() // Pop first since we are reporting the whole slice
|
||||
s.report(false, vx, vy)
|
||||
return
|
||||
}
|
||||
|
||||
// Replay the edit-script.
|
||||
var ix, iy int
|
||||
for _, e := range es {
|
||||
switch e {
|
||||
case diff.UniqueX:
|
||||
step.xkey, step.ykey = ix, -1
|
||||
s.report(false, vx.Index(ix), nothing)
|
||||
ix++
|
||||
case diff.UniqueY:
|
||||
step.xkey, step.ykey = -1, iy
|
||||
s.report(false, nothing, vy.Index(iy))
|
||||
iy++
|
||||
default:
|
||||
step.xkey, step.ykey = ix, iy
|
||||
if e == diff.Identity {
|
||||
s.report(true, vx.Index(ix), vy.Index(iy))
|
||||
} else {
|
||||
s.compareAny(vx.Index(ix), vy.Index(iy))
|
||||
}
|
||||
ix++
|
||||
iy++
|
||||
}
|
||||
}
|
||||
s.curPath.pop()
|
||||
return
|
||||
}
|
||||
|
||||
func (s *state) compareMap(vx, vy reflect.Value, t reflect.Type) {
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), vx, vy)
|
||||
return
|
||||
}
|
||||
|
||||
// We combine and sort the two map keys so that we can perform the
|
||||
// comparisons in a deterministic order.
|
||||
step := &mapIndex{pathStep: pathStep{t.Elem()}}
|
||||
s.curPath.push(step)
|
||||
defer s.curPath.pop()
|
||||
for _, k := range value.SortKeys(append(vx.MapKeys(), vy.MapKeys()...)) {
|
||||
step.key = k
|
||||
vvx := vx.MapIndex(k)
|
||||
vvy := vy.MapIndex(k)
|
||||
switch {
|
||||
case vvx.IsValid() && vvy.IsValid():
|
||||
s.compareAny(vvx, vvy)
|
||||
case vvx.IsValid() && !vvy.IsValid():
|
||||
s.report(false, vvx, nothing)
|
||||
case !vvx.IsValid() && vvy.IsValid():
|
||||
s.report(false, nothing, vvy)
|
||||
default:
|
||||
// It is possible for both vvx and vvy to be invalid if the
|
||||
// key contained a NaN value in it. There is no way in
|
||||
// reflection to be able to retrieve these values.
|
||||
// See https://golang.org/issue/11104
|
||||
panic(fmt.Sprintf("%#v has map key with NaNs", s.curPath))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) compareStruct(vx, vy reflect.Value, t reflect.Type) {
|
||||
func (s *state) compareStruct(t reflect.Type, vx, vy reflect.Value) {
|
||||
var vax, vay reflect.Value // Addressable versions of vx and vy
|
||||
|
||||
step := &structField{}
|
||||
s.curPath.push(step)
|
||||
defer s.curPath.pop()
|
||||
step := StructField{&structField{}}
|
||||
for i := 0; i < t.NumField(); i++ {
|
||||
vvx := vx.Field(i)
|
||||
vvy := vy.Field(i)
|
||||
step.typ = t.Field(i).Type
|
||||
step.vx = vx.Field(i)
|
||||
step.vy = vy.Field(i)
|
||||
step.name = t.Field(i).Name
|
||||
step.idx = i
|
||||
step.unexported = !isExported(step.name)
|
||||
if step.unexported {
|
||||
if step.name == "_" {
|
||||
continue
|
||||
}
|
||||
// Defer checking of unexported fields until later to give an
|
||||
// Ignore a chance to ignore the field.
|
||||
if !vax.IsValid() || !vay.IsValid() {
|
||||
// For unsafeRetrieveField to work, the parent struct must
|
||||
// For retrieveUnexportedField to work, the parent struct must
|
||||
// be addressable. Create a new copy of the values if
|
||||
// necessary to make them addressable.
|
||||
vax = makeAddressable(vx)
|
||||
vay = makeAddressable(vy)
|
||||
}
|
||||
step.force = s.exporters[t]
|
||||
step.mayForce = s.exporters[t]
|
||||
step.pvx = vax
|
||||
step.pvy = vay
|
||||
step.field = t.Field(i)
|
||||
}
|
||||
s.compareAny(vvx, vvy)
|
||||
s.compareAny(step)
|
||||
}
|
||||
}
|
||||
|
||||
// report records the result of a single comparison.
|
||||
// It also calls Report if any reporter is registered.
|
||||
func (s *state) report(eq bool, vx, vy reflect.Value) {
|
||||
if eq {
|
||||
s.result.NSame++
|
||||
} else {
|
||||
s.result.NDiff++
|
||||
func (s *state) compareSlice(t reflect.Type, vx, vy reflect.Value) {
|
||||
isSlice := t.Kind() == reflect.Slice
|
||||
if isSlice && (vx.IsNil() || vy.IsNil()) {
|
||||
s.report(vx.IsNil() && vy.IsNil(), 0)
|
||||
return
|
||||
}
|
||||
if s.reporter != nil {
|
||||
s.reporter.Report(vx, vy, eq, s.curPath)
|
||||
|
||||
// TODO: Support cyclic data structures.
|
||||
|
||||
step := SliceIndex{&sliceIndex{pathStep: pathStep{typ: t.Elem()}}}
|
||||
withIndexes := func(ix, iy int) SliceIndex {
|
||||
if ix >= 0 {
|
||||
step.vx, step.xkey = vx.Index(ix), ix
|
||||
} else {
|
||||
step.vx, step.xkey = reflect.Value{}, -1
|
||||
}
|
||||
if iy >= 0 {
|
||||
step.vy, step.ykey = vy.Index(iy), iy
|
||||
} else {
|
||||
step.vy, step.ykey = reflect.Value{}, -1
|
||||
}
|
||||
return step
|
||||
}
|
||||
|
||||
// Ignore options are able to ignore missing elements in a slice.
|
||||
// However, detecting these reliably requires an optimal differencing
|
||||
// algorithm, for which diff.Difference is not.
|
||||
//
|
||||
// Instead, we first iterate through both slices to detect which elements
|
||||
// would be ignored if standing alone. The index of non-discarded elements
|
||||
// are stored in a separate slice, which diffing is then performed on.
|
||||
var indexesX, indexesY []int
|
||||
var ignoredX, ignoredY []bool
|
||||
for ix := 0; ix < vx.Len(); ix++ {
|
||||
ignored := s.statelessCompare(withIndexes(ix, -1)).NumDiff == 0
|
||||
if !ignored {
|
||||
indexesX = append(indexesX, ix)
|
||||
}
|
||||
ignoredX = append(ignoredX, ignored)
|
||||
}
|
||||
for iy := 0; iy < vy.Len(); iy++ {
|
||||
ignored := s.statelessCompare(withIndexes(-1, iy)).NumDiff == 0
|
||||
if !ignored {
|
||||
indexesY = append(indexesY, iy)
|
||||
}
|
||||
ignoredY = append(ignoredY, ignored)
|
||||
}
|
||||
|
||||
// Compute an edit-script for slices vx and vy (excluding ignored elements).
|
||||
edits := diff.Difference(len(indexesX), len(indexesY), func(ix, iy int) diff.Result {
|
||||
return s.statelessCompare(withIndexes(indexesX[ix], indexesY[iy]))
|
||||
})
|
||||
|
||||
// Replay the ignore-scripts and the edit-script.
|
||||
var ix, iy int
|
||||
for ix < vx.Len() || iy < vy.Len() {
|
||||
var e diff.EditType
|
||||
switch {
|
||||
case ix < len(ignoredX) && ignoredX[ix]:
|
||||
e = diff.UniqueX
|
||||
case iy < len(ignoredY) && ignoredY[iy]:
|
||||
e = diff.UniqueY
|
||||
default:
|
||||
e, edits = edits[0], edits[1:]
|
||||
}
|
||||
switch e {
|
||||
case diff.UniqueX:
|
||||
s.compareAny(withIndexes(ix, -1))
|
||||
ix++
|
||||
case diff.UniqueY:
|
||||
s.compareAny(withIndexes(-1, iy))
|
||||
iy++
|
||||
default:
|
||||
s.compareAny(withIndexes(ix, iy))
|
||||
ix++
|
||||
iy++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) compareMap(t reflect.Type, vx, vy reflect.Value) {
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), 0)
|
||||
return
|
||||
}
|
||||
|
||||
// TODO: Support cyclic data structures.
|
||||
|
||||
// We combine and sort the two map keys so that we can perform the
|
||||
// comparisons in a deterministic order.
|
||||
step := MapIndex{&mapIndex{pathStep: pathStep{typ: t.Elem()}}}
|
||||
for _, k := range value.SortKeys(append(vx.MapKeys(), vy.MapKeys()...)) {
|
||||
step.vx = vx.MapIndex(k)
|
||||
step.vy = vy.MapIndex(k)
|
||||
step.key = k
|
||||
if !step.vx.IsValid() && !step.vy.IsValid() {
|
||||
// It is possible for both vx and vy to be invalid if the
|
||||
// key contained a NaN value in it.
|
||||
//
|
||||
// Even with the ability to retrieve NaN keys in Go 1.12,
|
||||
// there still isn't a sensible way to compare the values since
|
||||
// a NaN key may map to multiple unordered values.
|
||||
// The most reasonable way to compare NaNs would be to compare the
|
||||
// set of values. However, this is impossible to do efficiently
|
||||
// since set equality is provably an O(n^2) operation given only
|
||||
// an Equal function. If we had a Less function or Hash function,
|
||||
// this could be done in O(n*log(n)) or O(n), respectively.
|
||||
//
|
||||
// Rather than adding complex logic to deal with NaNs, make it
|
||||
// the user's responsibility to compare such obscure maps.
|
||||
const help = "consider providing a Comparer to compare the map"
|
||||
panic(fmt.Sprintf("%#v has map key with NaNs\n%s", s.curPath, help))
|
||||
}
|
||||
s.compareAny(step)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *state) comparePtr(t reflect.Type, vx, vy reflect.Value) {
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), 0)
|
||||
return
|
||||
}
|
||||
|
||||
// TODO: Support cyclic data structures.
|
||||
|
||||
vx, vy = vx.Elem(), vy.Elem()
|
||||
s.compareAny(Indirect{&indirect{pathStep{t.Elem(), vx, vy}}})
|
||||
}
|
||||
|
||||
func (s *state) compareInterface(t reflect.Type, vx, vy reflect.Value) {
|
||||
if vx.IsNil() || vy.IsNil() {
|
||||
s.report(vx.IsNil() && vy.IsNil(), 0)
|
||||
return
|
||||
}
|
||||
vx, vy = vx.Elem(), vy.Elem()
|
||||
if vx.Type() != vy.Type() {
|
||||
s.report(false, 0)
|
||||
return
|
||||
}
|
||||
s.compareAny(TypeAssertion{&typeAssertion{pathStep{vx.Type(), vx, vy}}})
|
||||
}
|
||||
|
||||
func (s *state) report(eq bool, rf resultFlags) {
|
||||
if rf&reportByIgnore == 0 {
|
||||
if eq {
|
||||
s.result.NumSame++
|
||||
rf |= reportEqual
|
||||
} else {
|
||||
s.result.NumDiff++
|
||||
rf |= reportUnequal
|
||||
}
|
||||
}
|
||||
for _, r := range s.reporters {
|
||||
r.Report(Result{flags: rf})
|
||||
}
|
||||
}
|
||||
|
||||
// recChecker tracks the state needed to periodically perform checks that
|
||||
// user provided transformers are not stuck in an infinitely recursive cycle.
|
||||
type recChecker struct{ next int }
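The Check method below panics once the same Transformer keeps reappearing on one path, and its help text points at cmpopts.AcyclicTransformer. A minimal sketch of that escape hatch, assuming a string-splitting transformer that would otherwise re-apply to its own output:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
)

func main() {
	// A plain Transformer from string to []string is a candidate for
	// re-application on every element of its own output. AcyclicTransformer
	// wraps it with a path filter so it applies at most once per branch of
	// the value tree, which keeps the recursion check from firing.
	opt := cmpopts.AcyclicTransformer("SplitLines", func(s string) []string {
		return strings.Split(s, "\n")
	})

	x := "alpha\nbeta"
	y := "alpha\ngamma"
	fmt.Println(cmp.Diff(x, y, opt))
}
```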
|
||||
|
||||
// Check scans the Path for any recursive transformers and panics when any
|
||||
// recursive transformers are detected. Note that the presence of a
|
||||
// recursive Transformer does not necessarily imply an infinite cycle.
|
||||
// As such, this check only activates after some minimal number of path steps.
|
||||
func (rc *recChecker) Check(p Path) {
|
||||
const minLen = 1 << 16
|
||||
if rc.next == 0 {
|
||||
rc.next = minLen
|
||||
}
|
||||
if len(p) < rc.next {
|
||||
return
|
||||
}
|
||||
rc.next <<= 1
|
||||
|
||||
// Check whether the same transformer has appeared at least twice.
|
||||
var ss []string
|
||||
m := map[Option]int{}
|
||||
for _, ps := range p {
|
||||
if t, ok := ps.(Transform); ok {
|
||||
t := t.Option()
|
||||
if m[t] == 1 { // Transformer was used exactly once before
|
||||
tf := t.(*transformer).fnc.Type()
|
||||
ss = append(ss, fmt.Sprintf("%v: %v => %v", t, tf.In(0), tf.Out(0)))
|
||||
}
|
||||
m[t]++
|
||||
}
|
||||
}
|
||||
if len(ss) > 0 {
|
||||
const warning = "recursive set of Transformers detected"
|
||||
const help = "consider using cmpopts.AcyclicTransformer"
|
||||
set := strings.Join(ss, "\n\t")
|
||||
panic(fmt.Sprintf("%s:\n\t%s\n%s", warning, set, help))
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build purego appengine js
|
||||
// +build purego
|
||||
|
||||
package cmp
|
||||
|
||||
@@ -10,6 +10,6 @@ import "reflect"
|
||||
|
||||
const supportAllowUnexported = false
|
||||
|
||||
func unsafeRetrieveField(reflect.Value, reflect.StructField) reflect.Value {
|
||||
panic("unsafeRetrieveField is not implemented")
|
||||
func retrieveUnexportedField(reflect.Value, reflect.StructField) reflect.Value {
|
||||
panic("retrieveUnexportedField is not implemented")
|
||||
}
|
||||
@@ -2,7 +2,7 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build !purego,!appengine,!js
|
||||
// +build !purego
|
||||
|
||||
package cmp
|
||||
|
||||
@@ -13,11 +13,11 @@ import (
|
||||
|
||||
const supportAllowUnexported = true
|
||||
|
||||
// unsafeRetrieveField uses unsafe to forcibly retrieve any field from a struct
|
||||
// such that the value has read-write permissions.
|
||||
// retrieveUnexportedField uses unsafe to forcibly retrieve any field from
|
||||
// a struct such that the value has read-write permissions.
|
||||
//
|
||||
// The parent struct, v, must be addressable, while f must be a StructField
|
||||
// describing the field to retrieve.
|
||||
func unsafeRetrieveField(v reflect.Value, f reflect.StructField) reflect.Value {
|
||||
func retrieveUnexportedField(v reflect.Value, f reflect.StructField) reflect.Value {
|
||||
return reflect.NewAt(f.Type, unsafe.Pointer(v.UnsafeAddr()+f.Offset)).Elem()
|
||||
}
|
||||
2
vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go
generated
vendored
@@ -2,7 +2,7 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build !debug
|
||||
// +build !cmp_debug
|
||||
|
||||
package diff
|
||||
|
||||
|
||||
4
vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go
generated
vendored
@@ -2,7 +2,7 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build debug
|
||||
// +build cmp_debug
|
||||
|
||||
package diff
|
||||
|
||||
@@ -14,7 +14,7 @@ import (
|
||||
)
|
||||
|
||||
// The algorithm can be seen running in real-time by enabling debugging:
|
||||
// go test -tags=debug -v
|
||||
// go test -tags=cmp_debug -v
|
||||
//
|
||||
// Example output:
|
||||
// === RUN TestDifference/#34
|
||||
|
||||
31
vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go
generated
vendored
@@ -85,22 +85,31 @@ func (es EditScript) LenY() int { return len(es) - es.stats().NX }
|
||||
type EqualFunc func(ix int, iy int) Result
|
||||
|
||||
// Result is the result of comparison.
|
||||
// NSame is the number of sub-elements that are equal.
|
||||
// NDiff is the number of sub-elements that are not equal.
|
||||
type Result struct{ NSame, NDiff int }
|
||||
// NumSame is the number of sub-elements that are equal.
|
||||
// NumDiff is the number of sub-elements that are not equal.
|
||||
type Result struct{ NumSame, NumDiff int }
|
||||
|
||||
// BoolResult returns a Result that is either Equal or not Equal.
|
||||
func BoolResult(b bool) Result {
|
||||
if b {
|
||||
return Result{NumSame: 1} // Equal, Similar
|
||||
} else {
|
||||
return Result{NumDiff: 2} // Not Equal, not Similar
|
||||
}
|
||||
}
|
||||
|
||||
// Equal indicates whether the symbols are equal. Two symbols are equal
|
||||
// if and only if NDiff == 0. If Equal, then they are also Similar.
|
||||
func (r Result) Equal() bool { return r.NDiff == 0 }
|
||||
// if and only if NumDiff == 0. If Equal, then they are also Similar.
|
||||
func (r Result) Equal() bool { return r.NumDiff == 0 }
|
||||
|
||||
// Similar indicates whether two symbols are similar and may be represented
|
||||
// by using the Modified type. As a special case, we consider binary comparisons
|
||||
// (i.e., those that return Result{1, 0} or Result{0, 1}) to be similar.
|
||||
//
|
||||
// The exact ratio of NSame to NDiff to determine similarity may change.
|
||||
// The exact ratio of NumSame to NumDiff to determine similarity may change.
|
||||
func (r Result) Similar() bool {
|
||||
// Use NSame+1 to offset NSame so that binary comparisons are similar.
|
||||
return r.NSame+1 >= r.NDiff
|
||||
// Use NumSame+1 to offset NumSame so that binary comparisons are similar.
|
||||
return r.NumSame+1 >= r.NumDiff
|
||||
}
|
||||
|
||||
// Difference reports whether two lists of lengths nx and ny are equal
|
||||
@@ -191,9 +200,9 @@ func Difference(nx, ny int, f EqualFunc) (es EditScript) {
|
||||
// that two lists commonly differ because elements were added to the front
|
||||
// or end of the other list.
|
||||
//
|
||||
// Running the tests with the "debug" build tag prints a visualization of
|
||||
// the algorithm running in real-time. This is educational for understanding
|
||||
// how the algorithm works. See debug_enable.go.
|
||||
// Running the tests with the "cmp_debug" build tag prints a visualization
|
||||
// of the algorithm running in real-time. This is educational for
|
||||
// understanding how the algorithm works. See debug_enable.go.
|
||||
f = debug.Begin(nx, ny, f, &fwdPath.es, &revPath.es)
|
||||
for {
|
||||
// Forward search from the beginning.
|
||||
|
||||
9
vendor/github.com/google/go-cmp/cmp/internal/flags/flags.go
generated
vendored
Normal file
@@ -0,0 +1,9 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package flags
|
||||
|
||||
// Deterministic controls whether the output of Diff should be deterministic.
|
||||
// This is only used for testing.
|
||||
var Deterministic bool
|
||||
10
vendor/github.com/google/go-cmp/cmp/internal/flags/toolchain_legacy.go
generated
vendored
Normal file
@@ -0,0 +1,10 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build !go1.10
|
||||
|
||||
package flags
|
||||
|
||||
// AtLeastGo110 reports whether the Go toolchain is at least Go 1.10.
|
||||
const AtLeastGo110 = false
|
||||
10
vendor/github.com/google/go-cmp/cmp/internal/flags/toolchain_recent.go
generated
vendored
Normal file
@@ -0,0 +1,10 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build go1.10
|
||||
|
||||
package flags
|
||||
|
||||
// AtLeastGo110 reports whether the Go toolchain is at least Go 1.10.
|
||||
const AtLeastGo110 = true
|
||||
64
vendor/github.com/google/go-cmp/cmp/internal/function/func.go
generated
vendored
@@ -2,25 +2,34 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// Package function identifies function types.
|
||||
// Package function provides functionality for identifying function types.
|
||||
package function
|
||||
|
||||
import "reflect"
|
||||
import (
|
||||
"reflect"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type funcType int
|
||||
|
||||
const (
|
||||
_ funcType = iota
|
||||
|
||||
tbFunc // func(T) bool
|
||||
ttbFunc // func(T, T) bool
|
||||
trbFunc // func(T, R) bool
|
||||
tibFunc // func(T, I) bool
|
||||
trFunc // func(T) R
|
||||
|
||||
Equal = ttbFunc // func(T, T) bool
|
||||
EqualAssignable = tibFunc // func(T, I) bool; encapsulates func(T, T) bool
|
||||
Transformer = trFunc // func(T) R
|
||||
ValueFilter = ttbFunc // func(T, T) bool
|
||||
Less = ttbFunc // func(T, T) bool
|
||||
Equal = ttbFunc // func(T, T) bool
|
||||
EqualAssignable = tibFunc // func(T, I) bool; encapsulates func(T, T) bool
|
||||
Transformer = trFunc // func(T) R
|
||||
ValueFilter = ttbFunc // func(T, T) bool
|
||||
Less = ttbFunc // func(T, T) bool
|
||||
ValuePredicate = tbFunc // func(T) bool
|
||||
KeyValuePredicate = trbFunc // func(T, R) bool
|
||||
)
|
||||
|
||||
var boolType = reflect.TypeOf(true)
|
||||
@@ -32,10 +41,18 @@ func IsType(t reflect.Type, ft funcType) bool {
|
||||
}
|
||||
ni, no := t.NumIn(), t.NumOut()
|
||||
switch ft {
|
||||
case tbFunc: // func(T) bool
|
||||
if ni == 1 && no == 1 && t.Out(0) == boolType {
|
||||
return true
|
||||
}
|
||||
case ttbFunc: // func(T, T) bool
|
||||
if ni == 2 && no == 1 && t.In(0) == t.In(1) && t.Out(0) == boolType {
|
||||
return true
|
||||
}
|
||||
case trbFunc: // func(T, R) bool
|
||||
if ni == 2 && no == 1 && t.Out(0) == boolType {
|
||||
return true
|
||||
}
|
||||
case tibFunc: // func(T, I) bool
|
||||
if ni == 2 && no == 1 && t.In(0).AssignableTo(t.In(1)) && t.Out(0) == boolType {
|
||||
return true
|
||||
@@ -47,3 +64,36 @@ func IsType(t reflect.Type, ft funcType) bool {
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
var lastIdentRx = regexp.MustCompile(`[_\p{L}][_\p{L}\p{N}]*$`)
|
||||
|
||||
// NameOf returns the name of the function value.
|
||||
func NameOf(v reflect.Value) string {
|
||||
fnc := runtime.FuncForPC(v.Pointer())
|
||||
if fnc == nil {
|
||||
return "<unknown>"
|
||||
}
|
||||
fullName := fnc.Name() // e.g., "long/path/name/mypkg.(*MyType).(long/path/name/mypkg.myMethod)-fm"
|
||||
|
||||
// Method closures have a "-fm" suffix.
|
||||
fullName = strings.TrimSuffix(fullName, "-fm")
|
||||
|
||||
var name string
|
||||
for len(fullName) > 0 {
|
||||
inParen := strings.HasSuffix(fullName, ")")
|
||||
fullName = strings.TrimSuffix(fullName, ")")
|
||||
|
||||
s := lastIdentRx.FindString(fullName)
|
||||
if s == "" {
|
||||
break
|
||||
}
|
||||
name = s + "." + name
|
||||
fullName = strings.TrimSuffix(fullName, s)
|
||||
|
||||
if i := strings.LastIndexByte(fullName, '('); inParen && i >= 0 {
|
||||
fullName = fullName[:i]
|
||||
}
|
||||
fullName = strings.TrimSuffix(fullName, ".")
|
||||
}
|
||||
return strings.TrimSuffix(name, ".")
|
||||
}
|
||||
|
||||
277
vendor/github.com/google/go-cmp/cmp/internal/value/format.go
generated
vendored
@@ -1,277 +0,0 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// Package value provides functionality for reflect.Value types.
|
||||
package value
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
var stringerIface = reflect.TypeOf((*fmt.Stringer)(nil)).Elem()
|
||||
|
||||
// Format formats the value v as a string.
|
||||
//
|
||||
// This is similar to fmt.Sprintf("%+v", v) except this:
|
||||
// * Prints the type unless it can be elided
|
||||
// * Avoids printing struct fields that are zero
|
||||
// * Prints a nil-slice as being nil, not empty
|
||||
// * Prints map entries in deterministic order
|
||||
func Format(v reflect.Value, conf FormatConfig) string {
|
||||
conf.printType = true
|
||||
conf.followPointers = true
|
||||
conf.realPointers = true
|
||||
return formatAny(v, conf, nil)
|
||||
}
|
||||
|
||||
type FormatConfig struct {
|
||||
UseStringer bool // Should the String method be used if available?
|
||||
printType bool // Should we print the type before the value?
|
||||
PrintPrimitiveType bool // Should we print the type of primitives?
|
||||
followPointers bool // Should we recursively follow pointers?
|
||||
realPointers bool // Should we print the real address of pointers?
|
||||
}
|
||||
|
||||
func formatAny(v reflect.Value, conf FormatConfig, visited map[uintptr]bool) string {
|
||||
// TODO: Should this be a multi-line printout in certain situations?
|
||||
|
||||
if !v.IsValid() {
|
||||
return "<non-existent>"
|
||||
}
|
||||
if conf.UseStringer && v.Type().Implements(stringerIface) && v.CanInterface() {
|
||||
if (v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface) && v.IsNil() {
|
||||
return "<nil>"
|
||||
}
|
||||
|
||||
const stringerPrefix = "s" // Indicates that the String method was used
|
||||
s := v.Interface().(fmt.Stringer).String()
|
||||
return stringerPrefix + formatString(s)
|
||||
}
|
||||
|
||||
switch v.Kind() {
|
||||
case reflect.Bool:
|
||||
return formatPrimitive(v.Type(), v.Bool(), conf)
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return formatPrimitive(v.Type(), v.Int(), conf)
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
if v.Type().PkgPath() == "" || v.Kind() == reflect.Uintptr {
|
||||
// Unnamed uints are usually bytes or words, so use hexadecimal.
|
||||
return formatPrimitive(v.Type(), formatHex(v.Uint()), conf)
|
||||
}
|
||||
return formatPrimitive(v.Type(), v.Uint(), conf)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return formatPrimitive(v.Type(), v.Float(), conf)
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return formatPrimitive(v.Type(), v.Complex(), conf)
|
||||
case reflect.String:
|
||||
return formatPrimitive(v.Type(), formatString(v.String()), conf)
|
||||
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
|
||||
return formatPointer(v, conf)
|
||||
case reflect.Ptr:
|
||||
if v.IsNil() {
|
||||
if conf.printType {
|
||||
return fmt.Sprintf("(%v)(nil)", v.Type())
|
||||
}
|
||||
return "<nil>"
|
||||
}
|
||||
if visited[v.Pointer()] || !conf.followPointers {
|
||||
return formatPointer(v, conf)
|
||||
}
|
||||
visited = insertPointer(visited, v.Pointer())
|
||||
return "&" + formatAny(v.Elem(), conf, visited)
|
||||
case reflect.Interface:
|
||||
if v.IsNil() {
|
||||
if conf.printType {
|
||||
return fmt.Sprintf("%v(nil)", v.Type())
|
||||
}
|
||||
return "<nil>"
|
||||
}
|
||||
return formatAny(v.Elem(), conf, visited)
|
||||
case reflect.Slice:
|
||||
if v.IsNil() {
|
||||
if conf.printType {
|
||||
return fmt.Sprintf("%v(nil)", v.Type())
|
||||
}
|
||||
return "<nil>"
|
||||
}
|
||||
if visited[v.Pointer()] {
|
||||
return formatPointer(v, conf)
|
||||
}
|
||||
visited = insertPointer(visited, v.Pointer())
|
||||
fallthrough
|
||||
case reflect.Array:
|
||||
var ss []string
|
||||
subConf := conf
|
||||
subConf.printType = v.Type().Elem().Kind() == reflect.Interface
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
s := formatAny(v.Index(i), subConf, visited)
|
||||
ss = append(ss, s)
|
||||
}
|
||||
s := fmt.Sprintf("{%s}", strings.Join(ss, ", "))
|
||||
if conf.printType {
|
||||
return v.Type().String() + s
|
||||
}
|
||||
return s
|
||||
case reflect.Map:
|
||||
if v.IsNil() {
|
||||
if conf.printType {
|
||||
return fmt.Sprintf("%v(nil)", v.Type())
|
||||
}
|
||||
return "<nil>"
|
||||
}
|
||||
if visited[v.Pointer()] {
|
||||
return formatPointer(v, conf)
|
||||
}
|
||||
visited = insertPointer(visited, v.Pointer())
|
||||
|
||||
var ss []string
|
||||
keyConf, valConf := conf, conf
|
||||
keyConf.printType = v.Type().Key().Kind() == reflect.Interface
|
||||
keyConf.followPointers = false
|
||||
valConf.printType = v.Type().Elem().Kind() == reflect.Interface
|
||||
for _, k := range SortKeys(v.MapKeys()) {
|
||||
sk := formatAny(k, keyConf, visited)
|
||||
sv := formatAny(v.MapIndex(k), valConf, visited)
|
||||
ss = append(ss, fmt.Sprintf("%s: %s", sk, sv))
|
||||
}
|
||||
s := fmt.Sprintf("{%s}", strings.Join(ss, ", "))
|
||||
if conf.printType {
|
||||
return v.Type().String() + s
|
||||
}
|
||||
return s
|
||||
case reflect.Struct:
|
||||
var ss []string
|
||||
subConf := conf
|
||||
subConf.printType = true
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
vv := v.Field(i)
|
||||
if isZero(vv) {
|
||||
continue // Elide zero value fields
|
||||
}
|
||||
name := v.Type().Field(i).Name
|
||||
subConf.UseStringer = conf.UseStringer
|
||||
s := formatAny(vv, subConf, visited)
|
||||
ss = append(ss, fmt.Sprintf("%s: %s", name, s))
|
||||
}
|
||||
s := fmt.Sprintf("{%s}", strings.Join(ss, ", "))
|
||||
if conf.printType {
|
||||
return v.Type().String() + s
|
||||
}
|
||||
return s
|
||||
default:
|
||||
panic(fmt.Sprintf("%v kind not handled", v.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
func formatString(s string) string {
|
||||
// Use quoted string if it the same length as a raw string literal.
|
||||
// Otherwise, attempt to use the raw string form.
|
||||
qs := strconv.Quote(s)
|
||||
if len(qs) == 1+len(s)+1 {
|
||||
return qs
|
||||
}
|
||||
|
||||
// Disallow newlines to ensure output is a single line.
|
||||
// Only allow printable runes for readability purposes.
|
||||
rawInvalid := func(r rune) bool {
|
||||
return r == '`' || r == '\n' || !unicode.IsPrint(r)
|
||||
}
|
||||
if strings.IndexFunc(s, rawInvalid) < 0 {
|
||||
return "`" + s + "`"
|
||||
}
|
||||
return qs
|
||||
}
|
||||
|
||||
func formatPrimitive(t reflect.Type, v interface{}, conf FormatConfig) string {
|
||||
if conf.printType && (conf.PrintPrimitiveType || t.PkgPath() != "") {
|
||||
return fmt.Sprintf("%v(%v)", t, v)
|
||||
}
|
||||
return fmt.Sprintf("%v", v)
|
||||
}
|
||||
|
||||
func formatPointer(v reflect.Value, conf FormatConfig) string {
|
||||
p := v.Pointer()
|
||||
if !conf.realPointers {
|
||||
p = 0 // For deterministic printing purposes
|
||||
}
|
||||
s := formatHex(uint64(p))
|
||||
if conf.printType {
|
||||
return fmt.Sprintf("(%v)(%s)", v.Type(), s)
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
func formatHex(u uint64) string {
|
||||
var f string
|
||||
switch {
|
||||
case u <= 0xff:
|
||||
f = "0x%02x"
|
||||
case u <= 0xffff:
|
||||
f = "0x%04x"
|
||||
case u <= 0xffffff:
|
||||
f = "0x%06x"
|
||||
case u <= 0xffffffff:
|
||||
f = "0x%08x"
|
||||
case u <= 0xffffffffff:
|
||||
f = "0x%010x"
|
||||
case u <= 0xffffffffffff:
|
||||
f = "0x%012x"
|
||||
case u <= 0xffffffffffffff:
|
||||
f = "0x%014x"
|
||||
case u <= 0xffffffffffffffff:
|
||||
f = "0x%016x"
|
||||
}
|
||||
return fmt.Sprintf(f, u)
|
||||
}
|
||||
|
||||
// insertPointer insert p into m, allocating m if necessary.
|
||||
func insertPointer(m map[uintptr]bool, p uintptr) map[uintptr]bool {
|
||||
if m == nil {
|
||||
m = make(map[uintptr]bool)
|
||||
}
|
||||
m[p] = true
|
||||
return m
|
||||
}
|
||||
|
||||
// isZero reports whether v is the zero value.
|
||||
// This does not rely on Interface and so can be used on unexported fields.
|
||||
func isZero(v reflect.Value) bool {
|
||||
switch v.Kind() {
|
||||
case reflect.Bool:
|
||||
return v.Bool() == false
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return v.Int() == 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
return v.Uint() == 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return v.Float() == 0
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return v.Complex() == 0
|
||||
case reflect.String:
|
||||
return v.String() == ""
|
||||
case reflect.UnsafePointer:
|
||||
return v.Pointer() == 0
|
||||
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice:
|
||||
return v.IsNil()
|
||||
case reflect.Array:
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
if !isZero(v.Index(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case reflect.Struct:
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
if !isZero(v.Field(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
23
vendor/github.com/google/go-cmp/cmp/internal/value/pointer_purego.go
generated
vendored
Normal file
@@ -0,0 +1,23 @@
|
||||
// Copyright 2018, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build purego
|
||||
|
||||
package value
|
||||
|
||||
import "reflect"
|
||||
|
||||
// Pointer is an opaque typed pointer and is guaranteed to be comparable.
|
||||
type Pointer struct {
|
||||
p uintptr
|
||||
t reflect.Type
|
||||
}
|
||||
|
||||
// PointerOf returns a Pointer from v, which must be a
|
||||
// reflect.Ptr, reflect.Slice, or reflect.Map.
|
||||
func PointerOf(v reflect.Value) Pointer {
|
||||
// NOTE: Storing a pointer as an uintptr is technically incorrect as it
|
||||
// assumes that the GC implementation does not use a moving collector.
|
||||
return Pointer{v.Pointer(), v.Type()}
|
||||
}
|
||||
26
vendor/github.com/google/go-cmp/cmp/internal/value/pointer_unsafe.go
generated
vendored
Normal file
@@ -0,0 +1,26 @@
|
||||
// Copyright 2018, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
// +build !purego
|
||||
|
||||
package value
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// Pointer is an opaque typed pointer and is guaranteed to be comparable.
|
||||
type Pointer struct {
|
||||
p unsafe.Pointer
|
||||
t reflect.Type
|
||||
}
|
||||
|
||||
// PointerOf returns a Pointer from v, which must be a
|
||||
// reflect.Ptr, reflect.Slice, or reflect.Map.
|
||||
func PointerOf(v reflect.Value) Pointer {
|
||||
// The proper representation of a pointer is unsafe.Pointer,
|
||||
// which is necessary if the GC ever uses a moving collector.
|
||||
return Pointer{unsafe.Pointer(v.Pointer()), v.Type()}
|
||||
}
|
||||
9
vendor/github.com/google/go-cmp/cmp/internal/value/sort.go
generated
vendored
@@ -19,7 +19,7 @@ func SortKeys(vs []reflect.Value) []reflect.Value {
|
||||
}
|
||||
|
||||
// Sort the map keys.
|
||||
sort.Sort(valueSorter(vs))
|
||||
sort.Slice(vs, func(i, j int) bool { return isLess(vs[i], vs[j]) })
|
||||
|
||||
// Deduplicate keys (fails for NaNs).
|
||||
vs2 := vs[:1]
|
||||
@@ -31,13 +31,6 @@ func SortKeys(vs []reflect.Value) []reflect.Value {
|
||||
return vs2
|
||||
}
|
||||
|
||||
// TODO: Use sort.Slice once Google AppEngine is on Go1.8 or above.
|
||||
type valueSorter []reflect.Value
|
||||
|
||||
func (vs valueSorter) Len() int { return len(vs) }
|
||||
func (vs valueSorter) Less(i, j int) bool { return isLess(vs[i], vs[j]) }
|
||||
func (vs valueSorter) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] }
|
||||
|
||||
// isLess is a generic function for sorting arbitrary map keys.
|
||||
// The inputs must be of the same type and must be comparable.
|
||||
func isLess(x, y reflect.Value) bool {
|
||||
|
||||
45
vendor/github.com/google/go-cmp/cmp/internal/value/zero.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package value
|
||||
|
||||
import "reflect"
|
||||
|
||||
// IsZero reports whether v is the zero value.
|
||||
// This does not rely on Interface and so can be used on unexported fields.
|
||||
func IsZero(v reflect.Value) bool {
|
||||
switch v.Kind() {
|
||||
case reflect.Bool:
|
||||
return v.Bool() == false
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return v.Int() == 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
return v.Uint() == 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return v.Float() == 0
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return v.Complex() == 0
|
||||
case reflect.String:
|
||||
return v.String() == ""
|
||||
case reflect.UnsafePointer:
|
||||
return v.Pointer() == 0
|
||||
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice:
|
||||
return v.IsNil()
|
||||
case reflect.Array:
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
if !IsZero(v.Index(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case reflect.Struct:
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
if !IsZero(v.Field(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
255
vendor/github.com/google/go-cmp/cmp/options.go
generated
vendored
@@ -7,7 +7,7 @@ package cmp
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/function"
|
||||
@@ -29,11 +29,11 @@ type Option interface {
|
||||
// An Options is returned only if multiple comparers or transformers
|
||||
// can apply simultaneously and will only contain values of those types
|
||||
// or sub-Options containing values of those types.
|
||||
filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption
|
||||
filter(s *state, t reflect.Type, vx, vy reflect.Value) applicableOption
|
||||
}
|
||||
|
||||
// applicableOption represents the following types:
|
||||
// Fundamental: ignore | invalid | *comparer | *transformer
|
||||
// Fundamental: ignore | validator | *comparer | *transformer
|
||||
// Grouping: Options
|
||||
type applicableOption interface {
|
||||
Option
|
||||
@@ -43,7 +43,7 @@ type applicableOption interface {
|
||||
}
|
||||
|
||||
// coreOption represents the following types:
|
||||
// Fundamental: ignore | invalid | *comparer | *transformer
|
||||
// Fundamental: ignore | validator | *comparer | *transformer
|
||||
// Filters: *pathFilter | *valuesFilter
|
||||
type coreOption interface {
|
||||
Option
|
||||
@@ -63,19 +63,19 @@ func (core) isCore() {}
|
||||
// on all individual options held within.
|
||||
type Options []Option
|
||||
|
||||
func (opts Options) filter(s *state, vx, vy reflect.Value, t reflect.Type) (out applicableOption) {
|
||||
func (opts Options) filter(s *state, t reflect.Type, vx, vy reflect.Value) (out applicableOption) {
|
||||
for _, opt := range opts {
|
||||
switch opt := opt.filter(s, vx, vy, t); opt.(type) {
|
||||
switch opt := opt.filter(s, t, vx, vy); opt.(type) {
|
||||
case ignore:
|
||||
return ignore{} // Only ignore can short-circuit evaluation
|
||||
case invalid:
|
||||
out = invalid{} // Takes precedence over comparer or transformer
|
||||
case validator:
|
||||
out = validator{} // Takes precedence over comparer or transformer
|
||||
case *comparer, *transformer, Options:
|
||||
switch out.(type) {
|
||||
case nil:
|
||||
out = opt
|
||||
case invalid:
|
||||
// Keep invalid
|
||||
case validator:
|
||||
// Keep validator
|
||||
case *comparer, *transformer, Options:
|
||||
out = Options{out, opt} // Conflicting comparers or transformers
|
||||
}
|
||||
@@ -106,6 +106,11 @@ func (opts Options) String() string {
|
||||
// FilterPath returns a new Option where opt is only evaluated if filter f
|
||||
// returns true for the current Path in the value tree.
|
||||
//
|
||||
// This filter is called even if a slice element or map entry is missing and
|
||||
// provides an opportunity to ignore such cases. The filter function must be
|
||||
// symmetric such that the filter result is identical regardless of whether the
|
||||
// missing value is from x or y.
|
||||
//
|
||||
// The option passed in may be an Ignore, Transformer, Comparer, Options, or
|
||||
// a previously filtered Option.
|
||||
func FilterPath(f func(Path) bool, opt Option) Option {
|
||||
@@ -124,22 +129,22 @@ type pathFilter struct {
|
||||
opt Option
|
||||
}
|
||||
|
||||
func (f pathFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption {
|
||||
func (f pathFilter) filter(s *state, t reflect.Type, vx, vy reflect.Value) applicableOption {
|
||||
if f.fnc(s.curPath) {
|
||||
return f.opt.filter(s, vx, vy, t)
|
||||
return f.opt.filter(s, t, vx, vy)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f pathFilter) String() string {
|
||||
fn := getFuncName(reflect.ValueOf(f.fnc).Pointer())
|
||||
return fmt.Sprintf("FilterPath(%s, %v)", fn, f.opt)
|
||||
return fmt.Sprintf("FilterPath(%s, %v)", function.NameOf(reflect.ValueOf(f.fnc)), f.opt)
|
||||
}
|
||||
|
||||
// FilterValues returns a new Option where opt is only evaluated if filter f,
|
||||
// which is a function of the form "func(T, T) bool", returns true for the
|
||||
// current pair of values being compared. If the type of the values is not
|
||||
// assignable to T, then this filter implicitly returns false.
|
||||
// current pair of values being compared. If either value is invalid or
|
||||
// the type of the values is not assignable to T, then this filter implicitly
|
||||
// returns false.
|
||||
//
|
||||
// The filter function must be
|
||||
// symmetric (i.e., agnostic to the order of the inputs) and
|
||||
@@ -171,19 +176,18 @@ type valuesFilter struct {
|
||||
opt Option
|
||||
}
|
||||
|
||||
func (f valuesFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption {
|
||||
if !vx.IsValid() || !vy.IsValid() {
|
||||
return invalid{}
|
||||
func (f valuesFilter) filter(s *state, t reflect.Type, vx, vy reflect.Value) applicableOption {
|
||||
if !vx.IsValid() || !vx.CanInterface() || !vy.IsValid() || !vy.CanInterface() {
|
||||
return nil
|
||||
}
|
||||
if (f.typ == nil || t.AssignableTo(f.typ)) && s.callTTBFunc(f.fnc, vx, vy) {
|
||||
return f.opt.filter(s, vx, vy, t)
|
||||
return f.opt.filter(s, t, vx, vy)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f valuesFilter) String() string {
|
||||
fn := getFuncName(f.fnc.Pointer())
|
||||
return fmt.Sprintf("FilterValues(%s, %v)", fn, f.opt)
|
||||
return fmt.Sprintf("FilterValues(%s, %v)", function.NameOf(f.fnc), f.opt)
|
||||
}
|
||||
|
||||
// Ignore is an Option that causes all comparisons to be ignored.
|
||||
@@ -194,19 +198,44 @@ func Ignore() Option { return ignore{} }
|
||||
type ignore struct{ core }
|
||||
|
||||
func (ignore) isFiltered() bool { return false }
|
||||
func (ignore) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return ignore{} }
|
||||
func (ignore) apply(_ *state, _, _ reflect.Value) { return }
|
||||
func (ignore) filter(_ *state, _ reflect.Type, _, _ reflect.Value) applicableOption { return ignore{} }
|
||||
func (ignore) apply(s *state, _, _ reflect.Value) { s.report(true, reportByIgnore) }
|
||||
func (ignore) String() string { return "Ignore()" }
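Ignore() on its own suppresses every comparison, so in practice it is always wrapped in a filter. A minimal sketch using FilterPath to ignore a single field; the `config` type and the `.Updated` field are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/go-cmp/cmp"
)

type config struct {
	Name    string
	Updated string // e.g. a timestamp that should not affect equality
}

func main() {
	// FilterPath restricts Ignore() to path steps ending in ".Updated";
	// every other node is compared normally.
	ignoreUpdated := cmp.FilterPath(func(p cmp.Path) bool {
		return strings.HasSuffix(p.String(), ".Updated")
	}, cmp.Ignore())

	x := config{Name: "canary", Updated: "2019-01-01"}
	y := config{Name: "canary", Updated: "2019-06-01"}
	fmt.Println(cmp.Equal(x, y, ignoreUpdated)) // true
}
```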
|
||||
|
||||
// invalid is a sentinel Option type to indicate that some options could not
|
||||
// be evaluated due to unexported fields.
|
||||
type invalid struct{ core }
|
||||
// validator is a sentinel Option type to indicate that some options could not
|
||||
// be evaluated due to unexported fields, missing slice elements, or
|
||||
// missing map entries. Both values are validator only for unexported fields.
|
||||
type validator struct{ core }
|
||||
|
||||
func (invalid) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return invalid{} }
|
||||
func (invalid) apply(s *state, _, _ reflect.Value) {
|
||||
const help = "consider using AllowUnexported or cmpopts.IgnoreUnexported"
|
||||
panic(fmt.Sprintf("cannot handle unexported field: %#v\n%s", s.curPath, help))
|
||||
func (validator) filter(_ *state, _ reflect.Type, vx, vy reflect.Value) applicableOption {
|
||||
if !vx.IsValid() || !vy.IsValid() {
|
||||
return validator{}
|
||||
}
|
||||
if !vx.CanInterface() || !vy.CanInterface() {
|
||||
return validator{}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (validator) apply(s *state, vx, vy reflect.Value) {
|
||||
// Implies missing slice element or map entry.
|
||||
if !vx.IsValid() || !vy.IsValid() {
|
||||
s.report(vx.IsValid() == vy.IsValid(), 0)
|
||||
return
|
||||
}
|
||||
|
||||
// Unable to Interface implies unexported field without visibility access.
|
||||
if !vx.CanInterface() || !vy.CanInterface() {
|
||||
const help = "consider using a custom Comparer; if you control the implementation of type, you can also consider AllowUnexported or cmpopts.IgnoreUnexported"
|
||||
panic(fmt.Sprintf("cannot handle unexported field: %#v\n%s", s.curPath, help))
|
||||
}
|
||||
|
||||
panic("not reachable")
|
||||
}
|
||||
|
||||
// identRx represents a valid identifier according to the Go specification.
|
||||
const identRx = `[_\p{L}][_\p{L}\p{N}]*`
|
||||
|
||||
var identsRx = regexp.MustCompile(`^` + identRx + `(\.` + identRx + `)*$`)
|
||||
|
||||
// Transformer returns an Option that applies a transformation function that
|
||||
// converts values of a certain type into that of another.
|
||||
@@ -220,18 +249,25 @@ func (invalid) apply(s *state, _, _ reflect.Value) {
|
||||
// input and output types are the same), an implicit filter is added such that
|
||||
// a transformer is applicable only if that exact transformer is not already
|
||||
// in the tail of the Path since the last non-Transform step.
|
||||
// For situations where the implicit filter is still insufficient,
|
||||
// consider using cmpopts.AcyclicTransformer, which adds a filter
|
||||
// to prevent the transformer from being recursively applied upon itself.
|
||||
//
|
||||
// The name is a user provided label that is used as the Transform.Name in the
|
||||
// transformation PathStep. If empty, an arbitrary name is used.
|
||||
// transformation PathStep (and eventually shown in the Diff output).
|
||||
// The name must be a valid identifier or qualified identifier in Go syntax.
|
||||
// If empty, an arbitrary name is used.
|
||||
func Transformer(name string, f interface{}) Option {
|
||||
v := reflect.ValueOf(f)
|
||||
if !function.IsType(v.Type(), function.Transformer) || v.IsNil() {
|
||||
panic(fmt.Sprintf("invalid transformer function: %T", f))
|
||||
}
|
||||
if name == "" {
|
||||
name = "λ" // Lambda-symbol as place-holder for anonymous transformer
|
||||
}
|
||||
if !isValid(name) {
|
||||
name = function.NameOf(v)
|
||||
if !identsRx.MatchString(name) {
|
||||
name = "λ" // Lambda-symbol as placeholder name
|
||||
}
|
||||
} else if !identsRx.MatchString(name) {
|
||||
panic(fmt.Sprintf("invalid name: %q", name))
|
||||
}
|
||||
tr := &transformer{name: name, fnc: reflect.ValueOf(f)}
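A minimal sketch of a user-defined Transformer; the "ToUpper" name and the case-insensitive comparison are illustrative, and the name is what ends up as Transform.Name in the Path and in Diff output:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Both sides are transformed before comparison. The implicit filter
	// described above keeps the same transformer from being re-applied
	// immediately to its own output.
	upper := cmp.Transformer("ToUpper", func(s string) string {
		return strings.ToUpper(s)
	})

	fmt.Println(cmp.Equal("Flagger", "FLAGGER", upper)) // true
}
```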
|
||||
@@ -250,9 +286,9 @@ type transformer struct {
|
||||
|
||||
func (tr *transformer) isFiltered() bool { return tr.typ != nil }
|
||||
|
||||
func (tr *transformer) filter(s *state, _, _ reflect.Value, t reflect.Type) applicableOption {
|
||||
func (tr *transformer) filter(s *state, t reflect.Type, _, _ reflect.Value) applicableOption {
|
||||
for i := len(s.curPath) - 1; i >= 0; i-- {
|
||||
if t, ok := s.curPath[i].(*transform); !ok {
|
||||
if t, ok := s.curPath[i].(Transform); !ok {
|
||||
break // Hit most recent non-Transform step
|
||||
} else if tr == t.trans {
|
||||
return nil // Cannot directly use same Transform
|
||||
@@ -265,18 +301,15 @@ func (tr *transformer) filter(s *state, _, _ reflect.Value, t reflect.Type) appl
|
||||
}
|
||||
|
||||
func (tr *transformer) apply(s *state, vx, vy reflect.Value) {
|
||||
// Update path before calling the Transformer so that dynamic checks
|
||||
// will use the updated path.
|
||||
s.curPath.push(&transform{pathStep{tr.fnc.Type().Out(0)}, tr})
|
||||
defer s.curPath.pop()
|
||||
|
||||
vx = s.callTRFunc(tr.fnc, vx)
|
||||
vy = s.callTRFunc(tr.fnc, vy)
|
||||
s.compareAny(vx, vy)
|
||||
step := Transform{&transform{pathStep{typ: tr.fnc.Type().Out(0)}, tr}}
|
||||
vvx := s.callTRFunc(tr.fnc, vx, step)
|
||||
vvy := s.callTRFunc(tr.fnc, vy, step)
|
||||
step.vx, step.vy = vvx, vvy
|
||||
s.compareAny(step)
|
||||
}
|
||||
|
||||
func (tr transformer) String() string {
|
||||
return fmt.Sprintf("Transformer(%s, %s)", tr.name, getFuncName(tr.fnc.Pointer()))
|
||||
return fmt.Sprintf("Transformer(%s, %s)", tr.name, function.NameOf(tr.fnc))
|
||||
}
|
||||
|
||||
// Comparer returns an Option that determines whether two values are equal
|
||||
@@ -311,7 +344,7 @@ type comparer struct {
|
||||
|
||||
func (cm *comparer) isFiltered() bool { return cm.typ != nil }
|
||||
|
||||
func (cm *comparer) filter(_ *state, _, _ reflect.Value, t reflect.Type) applicableOption {
|
||||
func (cm *comparer) filter(_ *state, t reflect.Type, _, _ reflect.Value) applicableOption {
|
||||
if cm.typ == nil || t.AssignableTo(cm.typ) {
|
||||
return cm
|
||||
}
|
||||
@@ -320,11 +353,11 @@ func (cm *comparer) filter(_ *state, _, _ reflect.Value, t reflect.Type) applica
|
||||
|
||||
func (cm *comparer) apply(s *state, vx, vy reflect.Value) {
|
||||
eq := s.callTTBFunc(cm.fnc, vx, vy)
|
||||
s.report(eq, vx, vy)
|
||||
s.report(eq, reportByFunc)
|
||||
}
|
||||
|
||||
func (cm comparer) String() string {
|
||||
return fmt.Sprintf("Comparer(%s)", getFuncName(cm.fnc.Pointer()))
|
||||
return fmt.Sprintf("Comparer(%s)", function.NameOf(cm.fnc))
|
||||
}
|
||||
|
||||
// AllowUnexported returns an Option that forcibly allows operations on
|
||||
@@ -338,7 +371,7 @@ func (cm comparer) String() string {
|
||||
// defined in an internal package where the semantic meaning of an unexported
|
||||
// field is in the control of the user.
|
||||
//
|
||||
// For some cases, a custom Comparer should be used instead that defines
|
||||
// In many cases, a custom Comparer should be used instead that defines
|
||||
// equality as a function of the public API of a type rather than the underlying
|
||||
// unexported implementation.
|
||||
//
|
||||
@@ -370,27 +403,92 @@ func AllowUnexported(types ...interface{}) Option {
|
||||
|
||||
type visibleStructs map[reflect.Type]bool
|
||||
|
||||
func (visibleStructs) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption {
|
||||
func (visibleStructs) filter(_ *state, _ reflect.Type, _, _ reflect.Value) applicableOption {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// reporter is an Option that configures how differences are reported.
type reporter interface {
	// TODO: Not exported yet.
// Result represents the comparison result for a single node and
// is provided by cmp when calling Result (see Reporter).
type Result struct {
	_     [0]func() // Make Result incomparable
	flags resultFlags
}

// Equal reports whether the node was determined to be equal or not.
// As a special case, ignored nodes are considered equal.
func (r Result) Equal() bool {
	return r.flags&(reportEqual|reportByIgnore) != 0
}

// ByIgnore reports whether the node is equal because it was ignored.
// This never reports true if Equal reports false.
func (r Result) ByIgnore() bool {
	return r.flags&reportByIgnore != 0
}

// ByMethod reports whether the Equal method determined equality.
func (r Result) ByMethod() bool {
	return r.flags&reportByMethod != 0
}

// ByFunc reports whether a Comparer function determined equality.
func (r Result) ByFunc() bool {
	return r.flags&reportByFunc != 0
}

type resultFlags uint

const (
	_ resultFlags = (1 << iota) / 2

	reportEqual
	reportUnequal
	reportByIgnore
	reportByMethod
	reportByFunc
)

// Reporter is an Option that can be passed to Equal. When Equal traverses
// the value trees, it calls PushStep as it descends into each node in the
// tree and PopStep as it ascends out of the node. The leaves of the tree are
// either compared (determined to be equal or not equal) or ignored and reported
// as such by calling the Report method.
func Reporter(r interface {
	// PushStep is called when a tree-traversal operation is performed.
	// The PathStep itself is only valid until the step is popped.
	// The PathStep.Values are valid for the duration of the entire traversal
	// and must not be mutated.
	//
	// Perhaps add PushStep and PopStep and change Report to only accept
	// a PathStep instead of the full-path? Adding a PushStep and PopStep makes
	// it clear that we are traversing the value tree in a depth-first-search
	// manner, which has an effect on how values are printed.
	// Equal always calls PushStep at the start to provide an operation-less
	// PathStep used to report the root values.
	//
	// Within a slice, the exact set of inserted, removed, or modified elements
	// is unspecified and may change in future implementations.
	// The entries of a map are iterated through in an unspecified order.
	PushStep(PathStep)

	Option
	// Report is called exactly once on leaf nodes to report whether the
	// comparison identified the node as equal, unequal, or ignored.
	// A leaf node is one that is immediately preceded by and followed by
	// a pair of PushStep and PopStep calls.
	Report(Result)

	// Report is called for every comparison made and will be provided with
	// the two values being compared, the equality result, and the
	// current path in the value tree. It is possible for x or y to be an
	// invalid reflect.Value if one of the values is non-existent;
	// which is possible with maps and slices.
	Report(x, y reflect.Value, eq bool, p Path)
	// PopStep ascends back up the value tree.
	// There is always a matching pop call for every push call.
	PopStep()
}) Option {
	return reporter{r}
}

type reporter struct{ reporterIface }
type reporterIface interface {
	PushStep(PathStep)
	Report(Result)
	PopStep()
}

func (reporter) filter(_ *state, _ reflect.Type, _, _ reflect.Value) applicableOption {
	panic("not implemented")
}

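// Illustrative sketch (not part of the upstream source): a minimal client-side
// Reporter implementation that records the path of every unequal leaf node.
// The pathRecorder type name is an arbitrary example; PushStep, Report,
// PopStep, Result, PathStep, Path, and cmp.Reporter are the APIs defined in
// this file and in path.go.
//
//	type pathRecorder struct {
//		path  cmp.Path
//		diffs []string
//	}
//
//	func (r *pathRecorder) PushStep(ps cmp.PathStep) { r.path = append(r.path, ps) }
//	func (r *pathRecorder) PopStep()                 { r.path = r.path[:len(r.path)-1] }
//	func (r *pathRecorder) Report(rs cmp.Result) {
//		if !rs.Equal() {
//			r.diffs = append(r.diffs, r.path.String())
//		}
//	}
//
//	// Usage: cmp.Equal(x, y, cmp.Reporter(&pathRecorder{}))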
// normalizeOption normalizes the input options such that all Options groups
|
||||
@@ -424,30 +522,3 @@ func flattenOptions(dst, src Options) Options {
|
||||
}
|
||||
return dst
|
||||
}
|
||||
|
||||
// getFuncName returns a short function name from the pointer.
|
||||
// The string parsing logic works up until Go1.9.
|
||||
func getFuncName(p uintptr) string {
|
||||
fnc := runtime.FuncForPC(p)
|
||||
if fnc == nil {
|
||||
return "<unknown>"
|
||||
}
|
||||
name := fnc.Name() // E.g., "long/path/name/mypkg.(mytype).(long/path/name/mypkg.myfunc)-fm"
|
||||
if strings.HasSuffix(name, ")-fm") || strings.HasSuffix(name, ")·fm") {
|
||||
// Strip the package name from method name.
|
||||
name = strings.TrimSuffix(name, ")-fm")
|
||||
name = strings.TrimSuffix(name, ")·fm")
|
||||
if i := strings.LastIndexByte(name, '('); i >= 0 {
|
||||
methodName := name[i+1:] // E.g., "long/path/name/mypkg.myfunc"
|
||||
if j := strings.LastIndexByte(methodName, '.'); j >= 0 {
|
||||
methodName = methodName[j+1:] // E.g., "myfunc"
|
||||
}
|
||||
name = name[:i] + methodName // E.g., "long/path/name/mypkg.(mytype)." + "myfunc"
|
||||
}
|
||||
}
|
||||
if i := strings.LastIndexByte(name, '/'); i >= 0 {
|
||||
// Strip the package name.
|
||||
name = name[i+1:] // E.g., "mypkg.(mytype).myfunc"
|
||||
}
|
||||
return name
|
||||
}
|
||||
|
||||
337
vendor/github.com/google/go-cmp/cmp/path.go
generated
vendored
@@ -12,80 +12,52 @@ import (
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type (
|
||||
// Path is a list of PathSteps describing the sequence of operations to get
|
||||
// from some root type to the current position in the value tree.
|
||||
// The first Path element is always an operation-less PathStep that exists
|
||||
// simply to identify the initial type.
|
||||
// Path is a list of PathSteps describing the sequence of operations to get
|
||||
// from some root type to the current position in the value tree.
|
||||
// The first Path element is always an operation-less PathStep that exists
|
||||
// simply to identify the initial type.
|
||||
//
|
||||
// When traversing structs with embedded structs, the embedded struct will
|
||||
// always be accessed as a field before traversing the fields of the
|
||||
// embedded struct themselves. That is, an exported field from the
|
||||
// embedded struct will never be accessed directly from the parent struct.
|
||||
type Path []PathStep
|
||||
|
||||
// PathStep is a union-type for specific operations to traverse
|
||||
// a value's tree structure. Users of this package never need to implement
|
||||
// these types as values of this type will be returned by this package.
|
||||
//
|
||||
// Implementations of this interface are
|
||||
// StructField, SliceIndex, MapIndex, Indirect, TypeAssertion, and Transform.
|
||||
type PathStep interface {
|
||||
String() string
|
||||
|
||||
// Type is the resulting type after performing the path step.
|
||||
Type() reflect.Type
|
||||
|
||||
// Values is the resulting values after performing the path step.
|
||||
// The type of each valid value is guaranteed to be identical to Type.
|
||||
//
|
||||
// When traversing structs with embedded structs, the embedded struct will
|
||||
// always be accessed as a field before traversing the fields of the
|
||||
// embedded struct themselves. That is, an exported field from the
|
||||
// embedded struct will never be accessed directly from the parent struct.
|
||||
Path []PathStep
|
||||
// In some cases, one or both may be invalid or have restrictions:
|
||||
// • For StructField, both are not interface-able if the current field
|
||||
// is unexported and the struct type is not explicitly permitted by
|
||||
// AllowUnexported to traverse unexported fields.
|
||||
// • For SliceIndex, one may be invalid if an element is missing from
|
||||
// either the x or y slice.
|
||||
// • For MapIndex, one may be invalid if an entry is missing from
|
||||
// either the x or y map.
|
||||
//
|
||||
// The provided values must not be mutated.
|
||||
Values() (vx, vy reflect.Value)
|
||||
}
|
||||
|
||||
// PathStep is a union-type for specific operations to traverse
|
||||
// a value's tree structure. Users of this package never need to implement
|
||||
// these types as values of this type will be returned by this package.
|
||||
PathStep interface {
|
||||
String() string
|
||||
Type() reflect.Type // Resulting type after performing the path step
|
||||
isPathStep()
|
||||
}
|
||||
|
||||
// SliceIndex is an index operation on a slice or array at some index Key.
|
||||
SliceIndex interface {
|
||||
PathStep
|
||||
Key() int // May return -1 if in a split state
|
||||
|
||||
// SplitKeys returns the indexes for indexing into slices in the
|
||||
// x and y values, respectively. These indexes may differ due to the
|
||||
// insertion or removal of an element in one of the slices, causing
|
||||
// all of the indexes to be shifted. If an index is -1, then that
|
||||
// indicates that the element does not exist in the associated slice.
|
||||
//
|
||||
// Key is guaranteed to return -1 if and only if the indexes returned
|
||||
// by SplitKeys are not the same. SplitKeys will never return -1 for
|
||||
// both indexes.
|
||||
SplitKeys() (x int, y int)
|
||||
|
||||
isSliceIndex()
|
||||
}
|
||||
// MapIndex is an index operation on a map at some index Key.
|
||||
MapIndex interface {
|
||||
PathStep
|
||||
Key() reflect.Value
|
||||
isMapIndex()
|
||||
}
|
||||
// TypeAssertion represents a type assertion on an interface.
|
||||
TypeAssertion interface {
|
||||
PathStep
|
||||
isTypeAssertion()
|
||||
}
|
||||
// StructField represents a struct field access on a field called Name.
|
||||
StructField interface {
|
||||
PathStep
|
||||
Name() string
|
||||
Index() int
|
||||
isStructField()
|
||||
}
|
||||
// Indirect represents pointer indirection on the parent type.
|
||||
Indirect interface {
|
||||
PathStep
|
||||
isIndirect()
|
||||
}
|
||||
// Transform is a transformation from the parent type to the current type.
|
||||
Transform interface {
|
||||
PathStep
|
||||
Name() string
|
||||
Func() reflect.Value
|
||||
|
||||
// Option returns the originally constructed Transformer option.
|
||||
// The == operator can be used to detect the exact option used.
|
||||
Option() Option
|
||||
|
||||
isTransform()
|
||||
}
|
||||
var (
|
||||
_ PathStep = StructField{}
|
||||
_ PathStep = SliceIndex{}
|
||||
_ PathStep = MapIndex{}
|
||||
_ PathStep = Indirect{}
|
||||
_ PathStep = TypeAssertion{}
|
||||
_ PathStep = Transform{}
|
||||
)
|
||||
|
||||
func (pa *Path) push(s PathStep) {
|
||||
@@ -124,7 +96,7 @@ func (pa Path) Index(i int) PathStep {
|
||||
func (pa Path) String() string {
|
||||
var ss []string
|
||||
for _, s := range pa {
|
||||
if _, ok := s.(*structField); ok {
|
||||
if _, ok := s.(StructField); ok {
|
||||
ss = append(ss, s.String())
|
||||
}
|
||||
}
|
||||
@@ -144,13 +116,13 @@ func (pa Path) GoString() string {
|
||||
nextStep = pa[i+1]
|
||||
}
|
||||
switch s := s.(type) {
|
||||
case *indirect:
|
||||
case Indirect:
|
||||
numIndirect++
|
||||
pPre, pPost := "(", ")"
|
||||
switch nextStep.(type) {
|
||||
case *indirect:
|
||||
case Indirect:
|
||||
continue // Next step is indirection, so let them batch up
|
||||
case *structField:
|
||||
case StructField:
|
||||
numIndirect-- // Automatic indirection on struct fields
|
||||
case nil:
|
||||
pPre, pPost = "", "" // Last step; no need for parenthesis
|
||||
@@ -161,19 +133,10 @@ func (pa Path) GoString() string {
|
||||
}
|
||||
numIndirect = 0
|
||||
continue
|
||||
case *transform:
|
||||
case Transform:
|
||||
ssPre = append(ssPre, s.trans.name+"(")
|
||||
ssPost = append(ssPost, ")")
|
||||
continue
|
||||
case *typeAssertion:
|
||||
// As a special-case, elide type assertions on anonymous types
|
||||
// since they are typically generated dynamically and can be very
|
||||
// verbose. For example, some transforms return interface{} because
|
||||
// of Go's lack of generics, but typically take in and return the
|
||||
// exact same concrete type.
|
||||
if s.Type().PkgPath() == "" {
|
||||
continue
|
||||
}
|
||||
}
|
||||
ssPost = append(ssPost, s.String())
|
||||
}
|
||||
@@ -183,44 +146,13 @@ func (pa Path) GoString() string {
|
||||
return strings.Join(ssPre, "") + strings.Join(ssPost, "")
|
||||
}
|
||||
|
||||
type (
|
||||
pathStep struct {
|
||||
typ reflect.Type
|
||||
}
|
||||
type pathStep struct {
|
||||
typ reflect.Type
|
||||
vx, vy reflect.Value
|
||||
}
|
||||
|
||||
sliceIndex struct {
|
||||
pathStep
|
||||
xkey, ykey int
|
||||
}
|
||||
mapIndex struct {
|
||||
pathStep
|
||||
key reflect.Value
|
||||
}
|
||||
typeAssertion struct {
|
||||
pathStep
|
||||
}
|
||||
structField struct {
|
||||
pathStep
|
||||
name string
|
||||
idx int
|
||||
|
||||
// These fields are used for forcibly accessing an unexported field.
|
||||
// pvx, pvy, and field are only valid if unexported is true.
|
||||
unexported bool
|
||||
force bool // Forcibly allow visibility
|
||||
pvx, pvy reflect.Value // Parent values
|
||||
field reflect.StructField // Field information
|
||||
}
|
||||
indirect struct {
|
||||
pathStep
|
||||
}
|
||||
transform struct {
|
||||
pathStep
|
||||
trans *transformer
|
||||
}
|
||||
)
|
||||
|
||||
func (ps pathStep) Type() reflect.Type { return ps.typ }
|
||||
func (ps pathStep) Type() reflect.Type { return ps.typ }
|
||||
func (ps pathStep) Values() (vx, vy reflect.Value) { return ps.vx, ps.vy }
|
||||
func (ps pathStep) String() string {
|
||||
if ps.typ == nil {
|
||||
return "<nil>"
|
||||
@@ -232,7 +164,54 @@ func (ps pathStep) String() string {
|
||||
return fmt.Sprintf("{%s}", s)
|
||||
}
|
||||
|
||||
func (si sliceIndex) String() string {
|
||||
// StructField represents a struct field access on a field called Name.
|
||||
type StructField struct{ *structField }
|
||||
type structField struct {
|
||||
pathStep
|
||||
name string
|
||||
idx int
|
||||
|
||||
// These fields are used for forcibly accessing an unexported field.
|
||||
// pvx, pvy, and field are only valid if unexported is true.
|
||||
unexported bool
|
||||
mayForce bool // Forcibly allow visibility
|
||||
pvx, pvy reflect.Value // Parent values
|
||||
field reflect.StructField // Field information
|
||||
}
|
||||
|
||||
func (sf StructField) Type() reflect.Type { return sf.typ }
|
||||
func (sf StructField) Values() (vx, vy reflect.Value) {
|
||||
if !sf.unexported {
|
||||
return sf.vx, sf.vy // CanInterface reports true
|
||||
}
|
||||
|
||||
// Forcibly obtain read-write access to an unexported struct field.
|
||||
if sf.mayForce {
|
||||
vx = retrieveUnexportedField(sf.pvx, sf.field)
|
||||
vy = retrieveUnexportedField(sf.pvy, sf.field)
|
||||
return vx, vy // CanInterface reports true
|
||||
}
|
||||
return sf.vx, sf.vy // CanInterface reports false
|
||||
}
|
||||
func (sf StructField) String() string { return fmt.Sprintf(".%s", sf.name) }
|
||||
|
||||
// Name is the field name.
|
||||
func (sf StructField) Name() string { return sf.name }
|
||||
|
||||
// Index is the index of the field in the parent struct type.
|
||||
// See reflect.Type.Field.
|
||||
func (sf StructField) Index() int { return sf.idx }
|
||||
|
||||
// SliceIndex is an index operation on a slice or array at some index Key.
|
||||
type SliceIndex struct{ *sliceIndex }
|
||||
type sliceIndex struct {
|
||||
pathStep
|
||||
xkey, ykey int
|
||||
}
|
||||
|
||||
func (si SliceIndex) Type() reflect.Type { return si.typ }
|
||||
func (si SliceIndex) Values() (vx, vy reflect.Value) { return si.vx, si.vy }
|
||||
func (si SliceIndex) String() string {
|
||||
switch {
|
||||
case si.xkey == si.ykey:
|
||||
return fmt.Sprintf("[%d]", si.xkey)
|
||||
@@ -247,63 +226,83 @@ func (si sliceIndex) String() string {
|
||||
return fmt.Sprintf("[%d->%d]", si.xkey, si.ykey)
|
||||
}
|
||||
}
|
||||
func (mi mapIndex) String() string { return fmt.Sprintf("[%#v]", mi.key) }
|
||||
func (ta typeAssertion) String() string { return fmt.Sprintf(".(%v)", ta.typ) }
|
||||
func (sf structField) String() string { return fmt.Sprintf(".%s", sf.name) }
|
||||
func (in indirect) String() string { return "*" }
|
||||
func (tf transform) String() string { return fmt.Sprintf("%s()", tf.trans.name) }
|
||||
|
||||
func (si sliceIndex) Key() int {
|
||||
// Key is the index key; it may return -1 if in a split state
|
||||
func (si SliceIndex) Key() int {
|
||||
if si.xkey != si.ykey {
|
||||
return -1
|
||||
}
|
||||
return si.xkey
|
||||
}
|
||||
func (si sliceIndex) SplitKeys() (x, y int) { return si.xkey, si.ykey }
|
||||
func (mi mapIndex) Key() reflect.Value { return mi.key }
|
||||
func (sf structField) Name() string { return sf.name }
|
||||
func (sf structField) Index() int { return sf.idx }
|
||||
func (tf transform) Name() string { return tf.trans.name }
|
||||
func (tf transform) Func() reflect.Value { return tf.trans.fnc }
|
||||
func (tf transform) Option() Option { return tf.trans }
|
||||
|
||||
func (pathStep) isPathStep() {}
|
||||
func (sliceIndex) isSliceIndex() {}
|
||||
func (mapIndex) isMapIndex() {}
|
||||
func (typeAssertion) isTypeAssertion() {}
|
||||
func (structField) isStructField() {}
|
||||
func (indirect) isIndirect() {}
|
||||
func (transform) isTransform() {}
|
||||
// SplitKeys are the indexes for indexing into slices in the
|
||||
// x and y values, respectively. These indexes may differ due to the
|
||||
// insertion or removal of an element in one of the slices, causing
|
||||
// all of the indexes to be shifted. If an index is -1, then that
|
||||
// indicates that the element does not exist in the associated slice.
|
||||
//
|
||||
// Key is guaranteed to return -1 if and only if the indexes returned
|
||||
// by SplitKeys are not the same. SplitKeys will never return -1 for
|
||||
// both indexes.
|
||||
func (si SliceIndex) SplitKeys() (ix, iy int) { return si.xkey, si.ykey }
|
||||
|
||||
var (
|
||||
_ SliceIndex = sliceIndex{}
|
||||
_ MapIndex = mapIndex{}
|
||||
_ TypeAssertion = typeAssertion{}
|
||||
_ StructField = structField{}
|
||||
_ Indirect = indirect{}
|
||||
_ Transform = transform{}
|
||||
// MapIndex is an index operation on a map at some index Key.
|
||||
type MapIndex struct{ *mapIndex }
|
||||
type mapIndex struct {
|
||||
pathStep
|
||||
key reflect.Value
|
||||
}
|
||||
|
||||
_ PathStep = sliceIndex{}
|
||||
_ PathStep = mapIndex{}
|
||||
_ PathStep = typeAssertion{}
|
||||
_ PathStep = structField{}
|
||||
_ PathStep = indirect{}
|
||||
_ PathStep = transform{}
|
||||
)
|
||||
func (mi MapIndex) Type() reflect.Type { return mi.typ }
|
||||
func (mi MapIndex) Values() (vx, vy reflect.Value) { return mi.vx, mi.vy }
|
||||
func (mi MapIndex) String() string { return fmt.Sprintf("[%#v]", mi.key) }
|
||||
|
||||
// Key is the value of the map key.
|
||||
func (mi MapIndex) Key() reflect.Value { return mi.key }
|
||||
|
||||
// Indirect represents pointer indirection on the parent type.
|
||||
type Indirect struct{ *indirect }
|
||||
type indirect struct {
|
||||
pathStep
|
||||
}
|
||||
|
||||
func (in Indirect) Type() reflect.Type { return in.typ }
|
||||
func (in Indirect) Values() (vx, vy reflect.Value) { return in.vx, in.vy }
|
||||
func (in Indirect) String() string { return "*" }
|
||||
|
||||
// TypeAssertion represents a type assertion on an interface.
|
||||
type TypeAssertion struct{ *typeAssertion }
|
||||
type typeAssertion struct {
|
||||
pathStep
|
||||
}
|
||||
|
||||
func (ta TypeAssertion) Type() reflect.Type { return ta.typ }
|
||||
func (ta TypeAssertion) Values() (vx, vy reflect.Value) { return ta.vx, ta.vy }
|
||||
func (ta TypeAssertion) String() string { return fmt.Sprintf(".(%v)", ta.typ) }
|
||||
|
||||
// Transform is a transformation from the parent type to the current type.
|
||||
type Transform struct{ *transform }
|
||||
type transform struct {
|
||||
pathStep
|
||||
trans *transformer
|
||||
}
|
||||
|
||||
func (tf Transform) Type() reflect.Type { return tf.typ }
|
||||
func (tf Transform) Values() (vx, vy reflect.Value) { return tf.vx, tf.vy }
|
||||
func (tf Transform) String() string { return fmt.Sprintf("%s()", tf.trans.name) }
|
||||
|
||||
// Name is the name of the Transformer.
|
||||
func (tf Transform) Name() string { return tf.trans.name }
|
||||
|
||||
// Func is the function pointer to the transformer function.
|
||||
func (tf Transform) Func() reflect.Value { return tf.trans.fnc }
|
||||
|
||||
// Option returns the originally constructed Transformer option.
|
||||
// The == operator can be used to detect the exact option used.
|
||||
func (tf Transform) Option() Option { return tf.trans }
|
||||
|
||||
// isExported reports whether the identifier is exported.
|
||||
func isExported(id string) bool {
|
||||
r, _ := utf8.DecodeRuneInString(id)
|
||||
return unicode.IsUpper(r)
|
||||
}
|
||||
|
||||
// isValid reports whether the identifier is valid.
|
||||
// Empty and underscore-only strings are not valid.
|
||||
func isValid(id string) bool {
|
||||
ok := id != "" && id != "_"
|
||||
for j, c := range id {
|
||||
ok = ok && (j > 0 || !unicode.IsDigit(c))
|
||||
ok = ok && (c == '_' || unicode.IsLetter(c) || unicode.IsDigit(c))
|
||||
}
|
||||
return ok
|
||||
}
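// Illustrative examples (not part of the upstream source) of the rules above:
// isValid("Name") and isValid("x1") report true, while isValid(""),
// isValid("_"), isValid("1x"), and isValid("a.b") report false, since the
// identifier may not be empty, be an underscore only, start with a digit, or
// contain characters other than letters, digits, and underscores.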
|
||||
|
||||
51
vendor/github.com/google/go-cmp/cmp/report.go
generated
vendored
Normal file
@@ -0,0 +1,51 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
// defaultReporter implements the reporter interface.
|
||||
//
|
||||
// As Equal serially calls the PushStep, Report, and PopStep methods, the
|
||||
// defaultReporter constructs a tree-based representation of the compared value
|
||||
// and the result of each comparison (see valueNode).
|
||||
//
|
||||
// When the String method is called, the FormatDiff method transforms the
|
||||
// valueNode tree into a textNode tree, which is a tree-based representation
|
||||
// of the textual output (see textNode).
|
||||
//
|
||||
// Lastly, the textNode.String method produces the final report as a string.
|
||||
type defaultReporter struct {
|
||||
root *valueNode
|
||||
curr *valueNode
|
||||
}
|
||||
|
||||
func (r *defaultReporter) PushStep(ps PathStep) {
|
||||
r.curr = r.curr.PushStep(ps)
|
||||
if r.root == nil {
|
||||
r.root = r.curr
|
||||
}
|
||||
}
|
||||
func (r *defaultReporter) Report(rs Result) {
|
||||
r.curr.Report(rs)
|
||||
}
|
||||
func (r *defaultReporter) PopStep() {
|
||||
r.curr = r.curr.PopStep()
|
||||
}
|
||||
|
||||
// String provides a full report of the differences detected as a structured
|
||||
// literal in pseudo-Go syntax. String may only be called after the entire tree
|
||||
// has been traversed.
|
||||
func (r *defaultReporter) String() string {
|
||||
assert(r.root != nil && r.curr == nil)
|
||||
if r.root.NumDiff == 0 {
|
||||
return ""
|
||||
}
|
||||
return formatOptions{}.FormatDiff(r.root).String()
|
||||
}
|
||||
|
||||
func assert(ok bool) {
|
||||
if !ok {
|
||||
panic("assertion failure")
|
||||
}
|
||||
}
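// Illustrative sketch (not part of the upstream source): the defaultReporter
// above is what backs Diff, so client code typically treats an empty string as
// "equal" and prints the report otherwise. The testing.T usage is an assumed
// example context.
//
//	if diff := cmp.Diff(want, got); diff != "" {
//		t.Errorf("unexpected result (-want +got):\n%s", diff)
//	}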
|
||||
296
vendor/github.com/google/go-cmp/cmp/report_compare.go
generated
vendored
Normal file
@@ -0,0 +1,296 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/value"
|
||||
)
|
||||
|
||||
// TODO: Enforce limits?
|
||||
// * Enforce maximum number of records to print per node?
|
||||
// * Enforce maximum size in bytes allowed?
|
||||
// * As a heuristic, use less verbosity for equal nodes than unequal nodes.
|
||||
// TODO: Enforce unique outputs?
|
||||
// * Avoid Stringer methods if it results in same output?
|
||||
// * Print pointer address if outputs still equal?
|
||||
|
||||
// numContextRecords is the number of surrounding equal records to print.
|
||||
const numContextRecords = 2
|
||||
|
||||
type diffMode byte
|
||||
|
||||
const (
|
||||
diffUnknown diffMode = 0
|
||||
diffIdentical diffMode = ' '
|
||||
diffRemoved diffMode = '-'
|
||||
diffInserted diffMode = '+'
|
||||
)
|
||||
|
||||
type typeMode int
|
||||
|
||||
const (
|
||||
// emitType always prints the type.
|
||||
emitType typeMode = iota
|
||||
// elideType never prints the type.
|
||||
elideType
|
||||
// autoType prints the type only for composite kinds
|
||||
// (i.e., structs, slices, arrays, and maps).
|
||||
autoType
|
||||
)
|
||||
|
||||
type formatOptions struct {
|
||||
// DiffMode controls the output mode of FormatDiff.
|
||||
//
|
||||
// If diffUnknown, then produce a diff of the x and y values.
|
||||
// If diffIdentical, then emit values as if they were equal.
|
||||
// If diffRemoved, then only emit x values (ignoring y values).
|
||||
// If diffInserted, then only emit y values (ignoring x values).
|
||||
DiffMode diffMode
|
||||
|
||||
// TypeMode controls whether to print the type for the current node.
|
||||
//
|
||||
// As a general rule of thumb, we always print the type of the next node
|
||||
// after an interface, and always elide the type of the next node after
|
||||
// a slice or map node.
|
||||
TypeMode typeMode
|
||||
|
||||
// formatValueOptions are options specific to printing reflect.Values.
|
||||
formatValueOptions
|
||||
}
|
||||
|
||||
func (opts formatOptions) WithDiffMode(d diffMode) formatOptions {
|
||||
opts.DiffMode = d
|
||||
return opts
|
||||
}
|
||||
func (opts formatOptions) WithTypeMode(t typeMode) formatOptions {
|
||||
opts.TypeMode = t
|
||||
return opts
|
||||
}
|
||||
|
||||
// FormatDiff converts a valueNode tree into a textNode tree, where the latter
|
||||
// is a textual representation of the differences detected in the former.
|
||||
func (opts formatOptions) FormatDiff(v *valueNode) textNode {
|
||||
// Check whether we have specialized formatting for this node.
|
||||
// This is not necessary, but helpful for producing more readable outputs.
|
||||
if opts.CanFormatDiffSlice(v) {
|
||||
return opts.FormatDiffSlice(v)
|
||||
}
|
||||
|
||||
// For leaf nodes, format the value based on the reflect.Values alone.
|
||||
if v.MaxDepth == 0 {
|
||||
switch opts.DiffMode {
|
||||
case diffUnknown, diffIdentical:
|
||||
// Format Equal.
|
||||
if v.NumDiff == 0 {
|
||||
outx := opts.FormatValue(v.ValueX, visitedPointers{})
|
||||
outy := opts.FormatValue(v.ValueY, visitedPointers{})
|
||||
if v.NumIgnored > 0 && v.NumSame == 0 {
|
||||
return textEllipsis
|
||||
} else if outx.Len() < outy.Len() {
|
||||
return outx
|
||||
} else {
|
||||
return outy
|
||||
}
|
||||
}
|
||||
|
||||
// Format unequal.
|
||||
assert(opts.DiffMode == diffUnknown)
|
||||
var list textList
|
||||
outx := opts.WithTypeMode(elideType).FormatValue(v.ValueX, visitedPointers{})
|
||||
outy := opts.WithTypeMode(elideType).FormatValue(v.ValueY, visitedPointers{})
|
||||
if outx != nil {
|
||||
list = append(list, textRecord{Diff: '-', Value: outx})
|
||||
}
|
||||
if outy != nil {
|
||||
list = append(list, textRecord{Diff: '+', Value: outy})
|
||||
}
|
||||
return opts.WithTypeMode(emitType).FormatType(v.Type, list)
|
||||
case diffRemoved:
|
||||
return opts.FormatValue(v.ValueX, visitedPointers{})
|
||||
case diffInserted:
|
||||
return opts.FormatValue(v.ValueY, visitedPointers{})
|
||||
default:
|
||||
panic("invalid diff mode")
|
||||
}
|
||||
}
|
||||
|
||||
// Descend into the child value node.
|
||||
if v.TransformerName != "" {
|
||||
out := opts.WithTypeMode(emitType).FormatDiff(v.Value)
|
||||
out = textWrap{"Inverse(" + v.TransformerName + ", ", out, ")"}
|
||||
return opts.FormatType(v.Type, out)
|
||||
} else {
|
||||
switch k := v.Type.Kind(); k {
|
||||
case reflect.Struct, reflect.Array, reflect.Slice, reflect.Map:
|
||||
return opts.FormatType(v.Type, opts.formatDiffList(v.Records, k))
|
||||
case reflect.Ptr:
|
||||
return textWrap{"&", opts.FormatDiff(v.Value), ""}
|
||||
case reflect.Interface:
|
||||
return opts.WithTypeMode(emitType).FormatDiff(v.Value)
|
||||
default:
|
||||
panic(fmt.Sprintf("%v cannot have children", k))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (opts formatOptions) formatDiffList(recs []reportRecord, k reflect.Kind) textNode {
|
||||
// Derive record name based on the data structure kind.
|
||||
var name string
|
||||
var formatKey func(reflect.Value) string
|
||||
switch k {
|
||||
case reflect.Struct:
|
||||
name = "field"
|
||||
opts = opts.WithTypeMode(autoType)
|
||||
formatKey = func(v reflect.Value) string { return v.String() }
|
||||
case reflect.Slice, reflect.Array:
|
||||
name = "element"
|
||||
opts = opts.WithTypeMode(elideType)
|
||||
formatKey = func(reflect.Value) string { return "" }
|
||||
case reflect.Map:
|
||||
name = "entry"
|
||||
opts = opts.WithTypeMode(elideType)
|
||||
formatKey = formatMapKey
|
||||
}
|
||||
|
||||
// Handle unification.
|
||||
switch opts.DiffMode {
|
||||
case diffIdentical, diffRemoved, diffInserted:
|
||||
var list textList
|
||||
var deferredEllipsis bool // Add final "..." to indicate records were dropped
|
||||
for _, r := range recs {
|
||||
// Elide struct fields that are zero value.
|
||||
if k == reflect.Struct {
|
||||
var isZero bool
|
||||
switch opts.DiffMode {
|
||||
case diffIdentical:
|
||||
isZero = value.IsZero(r.Value.ValueX) || value.IsZero(r.Value.ValueY)
|
||||
case diffRemoved:
|
||||
isZero = value.IsZero(r.Value.ValueX)
|
||||
case diffInserted:
|
||||
isZero = value.IsZero(r.Value.ValueY)
|
||||
}
|
||||
if isZero {
|
||||
continue
|
||||
}
|
||||
}
|
||||
// Elide ignored nodes.
|
||||
if r.Value.NumIgnored > 0 && r.Value.NumSame+r.Value.NumDiff == 0 {
|
||||
deferredEllipsis = !(k == reflect.Slice || k == reflect.Array)
|
||||
if !deferredEllipsis {
|
||||
list.AppendEllipsis(diffStats{})
|
||||
}
|
||||
continue
|
||||
}
|
||||
if out := opts.FormatDiff(r.Value); out != nil {
|
||||
list = append(list, textRecord{Key: formatKey(r.Key), Value: out})
|
||||
}
|
||||
}
|
||||
if deferredEllipsis {
|
||||
list.AppendEllipsis(diffStats{})
|
||||
}
|
||||
return textWrap{"{", list, "}"}
|
||||
case diffUnknown:
|
||||
default:
|
||||
panic("invalid diff mode")
|
||||
}
|
||||
|
||||
// Handle differencing.
|
||||
var list textList
|
||||
groups := coalesceAdjacentRecords(name, recs)
|
||||
for i, ds := range groups {
|
||||
// Handle equal records.
|
||||
if ds.NumDiff() == 0 {
|
||||
// Compute the number of leading and trailing records to print.
|
||||
var numLo, numHi int
|
||||
numEqual := ds.NumIgnored + ds.NumIdentical
|
||||
for numLo < numContextRecords && numLo+numHi < numEqual && i != 0 {
|
||||
if r := recs[numLo].Value; r.NumIgnored > 0 && r.NumSame+r.NumDiff == 0 {
|
||||
break
|
||||
}
|
||||
numLo++
|
||||
}
|
||||
for numHi < numContextRecords && numLo+numHi < numEqual && i != len(groups)-1 {
|
||||
if r := recs[numEqual-numHi-1].Value; r.NumIgnored > 0 && r.NumSame+r.NumDiff == 0 {
|
||||
break
|
||||
}
|
||||
numHi++
|
||||
}
|
||||
if numEqual-(numLo+numHi) == 1 && ds.NumIgnored == 0 {
|
||||
numHi++ // Avoid pointless coalescing of a single equal record
|
||||
}
|
||||
|
||||
// Format the equal values.
|
||||
for _, r := range recs[:numLo] {
|
||||
out := opts.WithDiffMode(diffIdentical).FormatDiff(r.Value)
|
||||
list = append(list, textRecord{Key: formatKey(r.Key), Value: out})
|
||||
}
|
||||
if numEqual > numLo+numHi {
|
||||
ds.NumIdentical -= numLo + numHi
|
||||
list.AppendEllipsis(ds)
|
||||
}
|
||||
for _, r := range recs[numEqual-numHi : numEqual] {
|
||||
out := opts.WithDiffMode(diffIdentical).FormatDiff(r.Value)
|
||||
list = append(list, textRecord{Key: formatKey(r.Key), Value: out})
|
||||
}
|
||||
recs = recs[numEqual:]
|
||||
continue
|
||||
}
|
||||
|
||||
// Handle unequal records.
|
||||
for _, r := range recs[:ds.NumDiff()] {
|
||||
switch {
|
||||
case opts.CanFormatDiffSlice(r.Value):
|
||||
out := opts.FormatDiffSlice(r.Value)
|
||||
list = append(list, textRecord{Key: formatKey(r.Key), Value: out})
|
||||
case r.Value.NumChildren == r.Value.MaxDepth:
|
||||
outx := opts.WithDiffMode(diffRemoved).FormatDiff(r.Value)
|
||||
outy := opts.WithDiffMode(diffInserted).FormatDiff(r.Value)
|
||||
if outx != nil {
|
||||
list = append(list, textRecord{Diff: diffRemoved, Key: formatKey(r.Key), Value: outx})
|
||||
}
|
||||
if outy != nil {
|
||||
list = append(list, textRecord{Diff: diffInserted, Key: formatKey(r.Key), Value: outy})
|
||||
}
|
||||
default:
|
||||
out := opts.FormatDiff(r.Value)
|
||||
list = append(list, textRecord{Key: formatKey(r.Key), Value: out})
|
||||
}
|
||||
}
|
||||
recs = recs[ds.NumDiff():]
|
||||
}
|
||||
assert(len(recs) == 0)
|
||||
return textWrap{"{", list, "}"}
|
||||
}
|
||||
|
||||
// coalesceAdjacentRecords coalesces the list of records into groups of
|
||||
// adjacent equal, or unequal counts.
|
||||
func coalesceAdjacentRecords(name string, recs []reportRecord) (groups []diffStats) {
|
||||
var prevCase int // Arbitrary index into which case last occurred
|
||||
lastStats := func(i int) *diffStats {
|
||||
if prevCase != i {
|
||||
groups = append(groups, diffStats{Name: name})
|
||||
prevCase = i
|
||||
}
|
||||
return &groups[len(groups)-1]
|
||||
}
|
||||
for _, r := range recs {
|
||||
switch rv := r.Value; {
|
||||
case rv.NumIgnored > 0 && rv.NumSame+rv.NumDiff == 0:
|
||||
lastStats(1).NumIgnored++
|
||||
case rv.NumDiff == 0:
|
||||
lastStats(1).NumIdentical++
|
||||
case rv.NumDiff > 0 && !rv.ValueY.IsValid():
|
||||
lastStats(2).NumRemoved++
|
||||
case rv.NumDiff > 0 && !rv.ValueX.IsValid():
|
||||
lastStats(2).NumInserted++
|
||||
default:
|
||||
lastStats(2).NumModified++
|
||||
}
|
||||
}
|
||||
return groups
|
||||
}
|
||||
279
vendor/github.com/google/go-cmp/cmp/report_reflect.go
generated
vendored
Normal file
@@ -0,0 +1,279 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/flags"
|
||||
"github.com/google/go-cmp/cmp/internal/value"
|
||||
)
|
||||
|
||||
type formatValueOptions struct {
|
||||
// AvoidStringer controls whether to avoid calling custom stringer
|
||||
// methods like error.Error or fmt.Stringer.String.
|
||||
AvoidStringer bool
|
||||
|
||||
// ShallowPointers controls whether to avoid descending into pointers.
|
||||
// Useful when printing map keys, where pointer comparison is performed
|
||||
// on the pointer address rather than the pointed-at value.
|
||||
ShallowPointers bool
|
||||
|
||||
// PrintAddresses controls whether to print the address of all pointers,
|
||||
// slice elements, and maps.
|
||||
PrintAddresses bool
|
||||
}
|
||||
|
||||
// FormatType prints the type as if it were wrapping s.
|
||||
// This may return s as-is depending on the current type and TypeMode mode.
|
||||
func (opts formatOptions) FormatType(t reflect.Type, s textNode) textNode {
|
||||
// Check whether to emit the type or not.
|
||||
switch opts.TypeMode {
|
||||
case autoType:
|
||||
switch t.Kind() {
|
||||
case reflect.Struct, reflect.Slice, reflect.Array, reflect.Map:
|
||||
if s.Equal(textNil) {
|
||||
return s
|
||||
}
|
||||
default:
|
||||
return s
|
||||
}
|
||||
case elideType:
|
||||
return s
|
||||
}
|
||||
|
||||
// Determine the type label, applying special handling for unnamed types.
|
||||
typeName := t.String()
|
||||
if t.Name() == "" {
|
||||
// According to Go grammar, certain type literals contain symbols that
|
||||
// do not strongly bind to the next lexicographical token (e.g., *T).
|
||||
switch t.Kind() {
|
||||
case reflect.Chan, reflect.Func, reflect.Ptr:
|
||||
typeName = "(" + typeName + ")"
|
||||
}
|
||||
typeName = strings.Replace(typeName, "struct {", "struct{", -1)
|
||||
typeName = strings.Replace(typeName, "interface {", "interface{", -1)
|
||||
}
|
||||
|
||||
// Avoid wrapping the value in parentheses if unnecessary.
|
||||
if s, ok := s.(textWrap); ok {
|
||||
hasParens := strings.HasPrefix(s.Prefix, "(") && strings.HasSuffix(s.Suffix, ")")
|
||||
hasBraces := strings.HasPrefix(s.Prefix, "{") && strings.HasSuffix(s.Suffix, "}")
|
||||
if hasParens || hasBraces {
|
||||
return textWrap{typeName, s, ""}
|
||||
}
|
||||
}
|
||||
return textWrap{typeName + "(", s, ")"}
|
||||
}
|
||||
|
||||
// FormatValue prints the reflect.Value, taking extra care to avoid descending
|
||||
// into pointers already in m. As pointers are visited, m is also updated.
|
||||
func (opts formatOptions) FormatValue(v reflect.Value, m visitedPointers) (out textNode) {
|
||||
if !v.IsValid() {
|
||||
return nil
|
||||
}
|
||||
t := v.Type()
|
||||
|
||||
// Check whether there is an Error or String method to call.
|
||||
if !opts.AvoidStringer && v.CanInterface() {
|
||||
// Avoid calling Error or String methods on nil receivers since many
|
||||
// implementations crash when doing so.
|
||||
if (t.Kind() != reflect.Ptr && t.Kind() != reflect.Interface) || !v.IsNil() {
|
||||
switch v := v.Interface().(type) {
|
||||
case error:
|
||||
return textLine("e" + formatString(v.Error()))
|
||||
case fmt.Stringer:
|
||||
return textLine("s" + formatString(v.String()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check whether to explicitly wrap the result with the type.
|
||||
var skipType bool
|
||||
defer func() {
|
||||
if !skipType {
|
||||
out = opts.FormatType(t, out)
|
||||
}
|
||||
}()
|
||||
|
||||
var ptr string
|
||||
switch t.Kind() {
|
||||
case reflect.Bool:
|
||||
return textLine(fmt.Sprint(v.Bool()))
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return textLine(fmt.Sprint(v.Int()))
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
// Unnamed uints are usually bytes or words, so use hexadecimal.
|
||||
if t.PkgPath() == "" || t.Kind() == reflect.Uintptr {
|
||||
return textLine(formatHex(v.Uint()))
|
||||
}
|
||||
return textLine(fmt.Sprint(v.Uint()))
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return textLine(fmt.Sprint(v.Float()))
|
||||
case reflect.Complex64, reflect.Complex128:
|
||||
return textLine(fmt.Sprint(v.Complex()))
|
||||
case reflect.String:
|
||||
return textLine(formatString(v.String()))
|
||||
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
|
||||
return textLine(formatPointer(v))
|
||||
case reflect.Struct:
|
||||
var list textList
|
||||
for i := 0; i < v.NumField(); i++ {
|
||||
vv := v.Field(i)
|
||||
if value.IsZero(vv) {
|
||||
continue // Elide fields with zero values
|
||||
}
|
||||
s := opts.WithTypeMode(autoType).FormatValue(vv, m)
|
||||
list = append(list, textRecord{Key: t.Field(i).Name, Value: s})
|
||||
}
|
||||
return textWrap{"{", list, "}"}
|
||||
case reflect.Slice:
|
||||
if v.IsNil() {
|
||||
return textNil
|
||||
}
|
||||
if opts.PrintAddresses {
|
||||
ptr = formatPointer(v)
|
||||
}
|
||||
fallthrough
|
||||
case reflect.Array:
|
||||
var list textList
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
vi := v.Index(i)
|
||||
if vi.CanAddr() { // Check for cyclic elements
|
||||
p := vi.Addr()
|
||||
if m.Visit(p) {
|
||||
var out textNode
|
||||
out = textLine(formatPointer(p))
|
||||
out = opts.WithTypeMode(emitType).FormatType(p.Type(), out)
|
||||
out = textWrap{"*", out, ""}
|
||||
list = append(list, textRecord{Value: out})
|
||||
continue
|
||||
}
|
||||
}
|
||||
s := opts.WithTypeMode(elideType).FormatValue(vi, m)
|
||||
list = append(list, textRecord{Value: s})
|
||||
}
|
||||
return textWrap{ptr + "{", list, "}"}
|
||||
case reflect.Map:
|
||||
if v.IsNil() {
|
||||
return textNil
|
||||
}
|
||||
if m.Visit(v) {
|
||||
return textLine(formatPointer(v))
|
||||
}
|
||||
|
||||
var list textList
|
||||
for _, k := range value.SortKeys(v.MapKeys()) {
|
||||
sk := formatMapKey(k)
|
||||
sv := opts.WithTypeMode(elideType).FormatValue(v.MapIndex(k), m)
|
||||
list = append(list, textRecord{Key: sk, Value: sv})
|
||||
}
|
||||
if opts.PrintAddresses {
|
||||
ptr = formatPointer(v)
|
||||
}
|
||||
return textWrap{ptr + "{", list, "}"}
|
||||
case reflect.Ptr:
|
||||
if v.IsNil() {
|
||||
return textNil
|
||||
}
|
||||
if m.Visit(v) || opts.ShallowPointers {
|
||||
return textLine(formatPointer(v))
|
||||
}
|
||||
if opts.PrintAddresses {
|
||||
ptr = formatPointer(v)
|
||||
}
|
||||
skipType = true // Let the underlying value print the type instead
|
||||
return textWrap{"&" + ptr, opts.FormatValue(v.Elem(), m), ""}
|
||||
case reflect.Interface:
|
||||
if v.IsNil() {
|
||||
return textNil
|
||||
}
|
||||
// Interfaces accept different concrete types,
|
||||
// so configure the underlying value to explicitly print the type.
|
||||
skipType = true // Print the concrete type instead
|
||||
return opts.WithTypeMode(emitType).FormatValue(v.Elem(), m)
|
||||
default:
|
||||
panic(fmt.Sprintf("%v kind not handled", v.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// formatMapKey formats v as if it were a map key.
|
||||
// The result is guaranteed to be a single line.
|
||||
func formatMapKey(v reflect.Value) string {
|
||||
var opts formatOptions
|
||||
opts.TypeMode = elideType
|
||||
opts.AvoidStringer = true
|
||||
opts.ShallowPointers = true
|
||||
s := opts.FormatValue(v, visitedPointers{}).String()
|
||||
return strings.TrimSpace(s)
|
||||
}
|
||||
|
||||
// formatString prints s as a double-quoted or backtick-quoted string.
|
||||
func formatString(s string) string {
|
||||
// Use quoted string if it is the same length as a raw string literal.
|
||||
// Otherwise, attempt to use the raw string form.
|
||||
qs := strconv.Quote(s)
|
||||
if len(qs) == 1+len(s)+1 {
|
||||
return qs
|
||||
}
|
||||
|
||||
// Disallow newlines to ensure output is a single line.
|
||||
// Only allow printable runes for readability purposes.
|
||||
rawInvalid := func(r rune) bool {
|
||||
return r == '`' || r == '\n' || !(unicode.IsPrint(r) || r == '\t')
|
||||
}
|
||||
if strings.IndexFunc(s, rawInvalid) < 0 {
|
||||
return "`" + s + "`"
|
||||
}
|
||||
return qs
|
||||
}
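// Illustrative examples (not part of the upstream source): formatString("abc")
// returns the quoted form `"abc"` because quoting adds only the surrounding
// quotes, while a value containing a double quote such as `a"b` quotes to a
// longer escaped form and is therefore emitted as the raw backtick form `a"b`,
// provided it contains no backticks, newlines, or unprintable runes.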
|
||||
|
||||
// formatHex prints u as a hexadecimal integer in Go notation.
|
||||
func formatHex(u uint64) string {
|
||||
var f string
|
||||
switch {
|
||||
case u <= 0xff:
|
||||
f = "0x%02x"
|
||||
case u <= 0xffff:
|
||||
f = "0x%04x"
|
||||
case u <= 0xffffff:
|
||||
f = "0x%06x"
|
||||
case u <= 0xffffffff:
|
||||
f = "0x%08x"
|
||||
case u <= 0xffffffffff:
|
||||
f = "0x%010x"
|
||||
case u <= 0xffffffffffff:
|
||||
f = "0x%012x"
|
||||
case u <= 0xffffffffffffff:
|
||||
f = "0x%014x"
|
||||
case u <= 0xffffffffffffffff:
|
||||
f = "0x%016x"
|
||||
}
|
||||
return fmt.Sprintf(f, u)
|
||||
}
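// Illustrative examples (not part of the upstream source) of the width
// selection above: formatHex(0xab) returns "0xab", formatHex(0x100) returns
// "0x0100", and formatHex(0x10000) returns "0x010000", so each value is padded
// to an even number of hex digits appropriate for its magnitude.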
|
||||
|
||||
// formatPointer prints the address of the pointer.
|
||||
func formatPointer(v reflect.Value) string {
|
||||
p := v.Pointer()
|
||||
if flags.Deterministic {
|
||||
p = 0xdeadf00f // Only used for stable testing purposes
|
||||
}
|
||||
return fmt.Sprintf("⟪0x%x⟫", p)
|
||||
}
|
||||
|
||||
type visitedPointers map[value.Pointer]struct{}
|
||||
|
||||
// Visit inserts pointer v into the visited map and reports whether it had
|
||||
// already been visited before.
|
||||
func (m visitedPointers) Visit(v reflect.Value) bool {
|
||||
p := value.PointerOf(v)
|
||||
_, visited := m[p]
|
||||
m[p] = struct{}{}
|
||||
return visited
|
||||
}
|
||||
333
vendor/github.com/google/go-cmp/cmp/report_slices.go
generated
vendored
Normal file
@@ -0,0 +1,333 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/diff"
|
||||
)
|
||||
|
||||
// CanFormatDiffSlice reports whether we support custom formatting for nodes
|
||||
// that are slices of primitive kinds or strings.
|
||||
func (opts formatOptions) CanFormatDiffSlice(v *valueNode) bool {
|
||||
switch {
|
||||
case opts.DiffMode != diffUnknown:
|
||||
return false // Must be formatting in diff mode
|
||||
case v.NumDiff == 0:
|
||||
return false // No differences detected
|
||||
case v.NumIgnored+v.NumCompared+v.NumTransformed > 0:
|
||||
// TODO: Handle the case where someone uses bytes.Equal on a large slice.
|
||||
return false // Some custom option was used to determine equality
|
||||
case !v.ValueX.IsValid() || !v.ValueY.IsValid():
|
||||
return false // Both values must be valid
|
||||
}
|
||||
|
||||
switch t := v.Type; t.Kind() {
|
||||
case reflect.String:
|
||||
case reflect.Array, reflect.Slice:
|
||||
// Only slices of primitive types have specialized handling.
|
||||
switch t.Elem().Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
|
||||
reflect.Bool, reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128:
|
||||
default:
|
||||
return false
|
||||
}
|
||||
|
||||
// If a sufficient number of elements already differ,
|
||||
// use specialized formatting even if length requirement is not met.
|
||||
if v.NumDiff > v.NumSame {
|
||||
return true
|
||||
}
|
||||
default:
|
||||
return false
|
||||
}
|
||||
|
||||
// Use specialized string diffing for longer slices or strings.
|
||||
const minLength = 64
|
||||
return v.ValueX.Len() >= minLength && v.ValueY.Len() >= minLength
|
||||
}
|
||||
|
||||
// FormatDiffSlice prints a diff for the slices (or strings) represented by v.
|
||||
// This provides custom-tailored logic to make printing of differences in
|
||||
// textual strings and slices of primitive kinds more readable.
|
||||
func (opts formatOptions) FormatDiffSlice(v *valueNode) textNode {
|
||||
assert(opts.DiffMode == diffUnknown)
|
||||
t, vx, vy := v.Type, v.ValueX, v.ValueY
|
||||
|
||||
// Auto-detect the type of the data.
|
||||
var isLinedText, isText, isBinary bool
|
||||
var sx, sy string
|
||||
switch {
|
||||
case t.Kind() == reflect.String:
|
||||
sx, sy = vx.String(), vy.String()
|
||||
isText = true // Initial estimate, verify later
|
||||
case t.Kind() == reflect.Slice && t.Elem() == reflect.TypeOf(byte(0)):
|
||||
sx, sy = string(vx.Bytes()), string(vy.Bytes())
|
||||
isBinary = true // Initial estimate, verify later
|
||||
case t.Kind() == reflect.Array:
|
||||
// Arrays need to be addressable for slice operations to work.
|
||||
vx2, vy2 := reflect.New(t).Elem(), reflect.New(t).Elem()
|
||||
vx2.Set(vx)
|
||||
vy2.Set(vy)
|
||||
vx, vy = vx2, vy2
|
||||
}
|
||||
if isText || isBinary {
|
||||
var numLines, lastLineIdx, maxLineLen int
|
||||
isBinary = false
|
||||
for i, r := range sx + sy {
|
||||
if !(unicode.IsPrint(r) || unicode.IsSpace(r)) || r == utf8.RuneError {
|
||||
isBinary = true
|
||||
break
|
||||
}
|
||||
if r == '\n' {
|
||||
if maxLineLen < i-lastLineIdx {
|
||||
maxLineLen = i - lastLineIdx
|
||||
}
|
||||
lastLineIdx = i + 1
|
||||
numLines++
|
||||
}
|
||||
}
|
||||
isText = !isBinary
|
||||
isLinedText = isText && numLines >= 4 && maxLineLen <= 256
|
||||
}
|
||||
|
||||
// Format the string into printable records.
|
||||
var list textList
|
||||
var delim string
|
||||
switch {
|
||||
// If the text appears to be multi-lined text,
|
||||
// then perform differencing across individual lines.
|
||||
case isLinedText:
|
||||
ssx := strings.Split(sx, "\n")
|
||||
ssy := strings.Split(sy, "\n")
|
||||
list = opts.formatDiffSlice(
|
||||
reflect.ValueOf(ssx), reflect.ValueOf(ssy), 1, "line",
|
||||
func(v reflect.Value, d diffMode) textRecord {
|
||||
s := formatString(v.Index(0).String())
|
||||
return textRecord{Diff: d, Value: textLine(s)}
|
||||
},
|
||||
)
|
||||
delim = "\n"
|
||||
// If the text appears to be single-lined text,
|
||||
// then perform differencing in approximately fixed-sized chunks.
|
||||
// The output is printed as quoted strings.
|
||||
case isText:
|
||||
list = opts.formatDiffSlice(
|
||||
reflect.ValueOf(sx), reflect.ValueOf(sy), 64, "byte",
|
||||
func(v reflect.Value, d diffMode) textRecord {
|
||||
s := formatString(v.String())
|
||||
return textRecord{Diff: d, Value: textLine(s)}
|
||||
},
|
||||
)
|
||||
delim = ""
|
||||
// If the text appears to be binary data,
|
||||
// then perform differencing in approximately fixed-sized chunks.
|
||||
// The output is inspired by hexdump.
|
||||
case isBinary:
|
||||
list = opts.formatDiffSlice(
|
||||
reflect.ValueOf(sx), reflect.ValueOf(sy), 16, "byte",
|
||||
func(v reflect.Value, d diffMode) textRecord {
|
||||
var ss []string
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
ss = append(ss, formatHex(v.Index(i).Uint()))
|
||||
}
|
||||
s := strings.Join(ss, ", ")
|
||||
comment := commentString(fmt.Sprintf("%c|%v|", d, formatASCII(v.String())))
|
||||
return textRecord{Diff: d, Value: textLine(s), Comment: comment}
|
||||
},
|
||||
)
|
||||
// For all other slices of primitive types,
|
||||
// then perform differencing in approximately fixed-sized chunks.
|
||||
// The size of each chunk depends on the width of the element kind.
|
||||
default:
|
||||
var chunkSize int
|
||||
if t.Elem().Kind() == reflect.Bool {
|
||||
chunkSize = 16
|
||||
} else {
|
||||
switch t.Elem().Bits() {
|
||||
case 8:
|
||||
chunkSize = 16
|
||||
case 16:
|
||||
chunkSize = 12
|
||||
case 32:
|
||||
chunkSize = 8
|
||||
default:
|
||||
chunkSize = 8
|
||||
}
|
||||
}
|
||||
list = opts.formatDiffSlice(
|
||||
vx, vy, chunkSize, t.Elem().Kind().String(),
|
||||
func(v reflect.Value, d diffMode) textRecord {
|
||||
var ss []string
|
||||
for i := 0; i < v.Len(); i++ {
|
||||
switch t.Elem().Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
ss = append(ss, fmt.Sprint(v.Index(i).Int()))
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
|
||||
ss = append(ss, formatHex(v.Index(i).Uint()))
|
||||
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128:
|
||||
ss = append(ss, fmt.Sprint(v.Index(i).Interface()))
|
||||
}
|
||||
}
|
||||
s := strings.Join(ss, ", ")
|
||||
return textRecord{Diff: d, Value: textLine(s)}
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
// Wrap the output with appropriate type information.
|
||||
var out textNode = textWrap{"{", list, "}"}
|
||||
if !isText {
|
||||
// The "{...}" byte-sequence literal is not valid Go syntax for strings.
|
||||
// Emit the type for extra clarity (e.g. "string{...}").
|
||||
if t.Kind() == reflect.String {
|
||||
opts = opts.WithTypeMode(emitType)
|
||||
}
|
||||
return opts.FormatType(t, out)
|
||||
}
|
||||
switch t.Kind() {
|
||||
case reflect.String:
|
||||
out = textWrap{"strings.Join(", out, fmt.Sprintf(", %q)", delim)}
|
||||
if t != reflect.TypeOf(string("")) {
|
||||
out = opts.FormatType(t, out)
|
||||
}
|
||||
case reflect.Slice:
|
||||
out = textWrap{"bytes.Join(", out, fmt.Sprintf(", %q)", delim)}
|
||||
if t != reflect.TypeOf([]byte(nil)) {
|
||||
out = opts.FormatType(t, out)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// formatASCII formats s as an ASCII string.
|
||||
// This is useful for printing binary strings in a semi-legible way.
|
||||
func formatASCII(s string) string {
|
||||
b := bytes.Repeat([]byte{'.'}, len(s))
|
||||
for i := 0; i < len(s); i++ {
|
||||
if ' ' <= s[i] && s[i] <= '~' {
|
||||
b[i] = s[i]
|
||||
}
|
||||
}
|
||||
return string(b)
|
||||
}
|
||||
|
||||
func (opts formatOptions) formatDiffSlice(
|
||||
vx, vy reflect.Value, chunkSize int, name string,
|
||||
makeRec func(reflect.Value, diffMode) textRecord,
|
||||
) (list textList) {
|
||||
es := diff.Difference(vx.Len(), vy.Len(), func(ix int, iy int) diff.Result {
|
||||
return diff.BoolResult(vx.Index(ix).Interface() == vy.Index(iy).Interface())
|
||||
})
|
||||
|
||||
appendChunks := func(v reflect.Value, d diffMode) int {
|
||||
n0 := v.Len()
|
||||
for v.Len() > 0 {
|
||||
n := chunkSize
|
||||
if n > v.Len() {
|
||||
n = v.Len()
|
||||
}
|
||||
list = append(list, makeRec(v.Slice(0, n), d))
|
||||
v = v.Slice(n, v.Len())
|
||||
}
|
||||
return n0 - v.Len()
|
||||
}
|
||||
|
||||
groups := coalesceAdjacentEdits(name, es)
|
||||
groups = coalesceInterveningIdentical(groups, chunkSize/4)
|
||||
for i, ds := range groups {
|
||||
// Print equal.
|
||||
if ds.NumDiff() == 0 {
|
||||
// Compute the number of leading and trailing equal bytes to print.
|
||||
var numLo, numHi int
|
||||
numEqual := ds.NumIgnored + ds.NumIdentical
|
||||
for numLo < chunkSize*numContextRecords && numLo+numHi < numEqual && i != 0 {
|
||||
numLo++
|
||||
}
|
||||
for numHi < chunkSize*numContextRecords && numLo+numHi < numEqual && i != len(groups)-1 {
|
||||
numHi++
|
||||
}
|
||||
if numEqual-(numLo+numHi) <= chunkSize && ds.NumIgnored == 0 {
|
||||
numHi = numEqual - numLo // Avoid pointless coalescing of single equal row
|
||||
}
|
||||
|
||||
// Print the equal bytes.
|
||||
appendChunks(vx.Slice(0, numLo), diffIdentical)
|
||||
if numEqual > numLo+numHi {
|
||||
ds.NumIdentical -= numLo + numHi
|
||||
list.AppendEllipsis(ds)
|
||||
}
|
||||
appendChunks(vx.Slice(numEqual-numHi, numEqual), diffIdentical)
|
||||
vx = vx.Slice(numEqual, vx.Len())
|
||||
vy = vy.Slice(numEqual, vy.Len())
|
||||
continue
|
||||
}
|
||||
|
||||
// Print unequal.
|
||||
nx := appendChunks(vx.Slice(0, ds.NumIdentical+ds.NumRemoved+ds.NumModified), diffRemoved)
|
||||
vx = vx.Slice(nx, vx.Len())
|
||||
ny := appendChunks(vy.Slice(0, ds.NumIdentical+ds.NumInserted+ds.NumModified), diffInserted)
|
||||
vy = vy.Slice(ny, vy.Len())
|
||||
}
|
||||
assert(vx.Len() == 0 && vy.Len() == 0)
|
||||
return list
|
||||
}
|
||||
|
||||
// coalesceAdjacentEdits coalesces the list of edits into groups of adjacent
|
||||
// equal or unequal counts.
|
||||
func coalesceAdjacentEdits(name string, es diff.EditScript) (groups []diffStats) {
|
||||
var prevCase int // Arbitrary index into which case last occurred
|
||||
lastStats := func(i int) *diffStats {
|
||||
if prevCase != i {
|
||||
groups = append(groups, diffStats{Name: name})
|
||||
prevCase = i
|
||||
}
|
||||
return &groups[len(groups)-1]
|
||||
}
|
||||
for _, e := range es {
|
||||
switch e {
|
||||
case diff.Identity:
|
||||
lastStats(1).NumIdentical++
|
||||
case diff.UniqueX:
|
||||
lastStats(2).NumRemoved++
|
||||
case diff.UniqueY:
|
||||
lastStats(2).NumInserted++
|
||||
case diff.Modified:
|
||||
lastStats(2).NumModified++
|
||||
}
|
||||
}
|
||||
return groups
|
||||
}
|
||||
|
||||
// coalesceInterveningIdentical coalesces sufficiently short (<= windowSize)
|
||||
// equal groups into adjacent unequal groups that currently result in a
|
||||
// dual inserted/removed printout. This acts as a high-pass filter to smooth
|
||||
// out high-frequency changes within the windowSize.
|
||||
func coalesceInterveningIdentical(groups []diffStats, windowSize int) []diffStats {
|
||||
groups, groupsOrig := groups[:0], groups
|
||||
for i, ds := range groupsOrig {
|
||||
if len(groups) >= 2 && ds.NumDiff() > 0 {
|
||||
prev := &groups[len(groups)-2] // Unequal group
|
||||
curr := &groups[len(groups)-1] // Equal group
|
||||
next := &groupsOrig[i] // Unequal group
|
||||
hadX, hadY := prev.NumRemoved > 0, prev.NumInserted > 0
|
||||
hasX, hasY := next.NumRemoved > 0, next.NumInserted > 0
|
||||
if ((hadX || hasX) && (hadY || hasY)) && curr.NumIdentical <= windowSize {
|
||||
*prev = (*prev).Append(*curr).Append(*next)
|
||||
groups = groups[:len(groups)-1] // Truncate off equal group
|
||||
continue
|
||||
}
|
||||
}
|
||||
groups = append(groups, ds)
|
||||
}
|
||||
return groups
|
||||
}
|
||||
382
vendor/github.com/google/go-cmp/cmp/report_text.go
generated
vendored
Normal file
382
vendor/github.com/google/go-cmp/cmp/report_text.go
generated
vendored
Normal file
@@ -0,0 +1,382 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/flags"
|
||||
)
|
||||
|
||||
var randBool = rand.New(rand.NewSource(time.Now().Unix())).Intn(2) == 0
|
||||
|
||||
type indentMode int
|
||||
|
||||
func (n indentMode) appendIndent(b []byte, d diffMode) []byte {
|
||||
if flags.Deterministic || randBool {
|
||||
// Use regular spaces (U+0020).
|
||||
switch d {
|
||||
case diffUnknown, diffIdentical:
|
||||
b = append(b, " "...)
|
||||
case diffRemoved:
|
||||
b = append(b, "- "...)
|
||||
case diffInserted:
|
||||
b = append(b, "+ "...)
|
||||
}
|
||||
} else {
|
||||
// Use non-breaking spaces (U+00a0).
|
||||
switch d {
|
||||
case diffUnknown, diffIdentical:
|
||||
b = append(b, " "...)
|
||||
case diffRemoved:
|
||||
b = append(b, "- "...)
|
||||
case diffInserted:
|
||||
b = append(b, "+ "...)
|
||||
}
|
||||
}
|
||||
return repeatCount(n).appendChar(b, '\t')
|
||||
}
|
||||
|
||||
type repeatCount int
|
||||
|
||||
func (n repeatCount) appendChar(b []byte, c byte) []byte {
|
||||
for ; n > 0; n-- {
|
||||
b = append(b, c)
|
||||
}
|
||||
return b
|
||||
}
|
||||
|
||||
// textNode is a simplified tree-based representation of structured text.
|
||||
// Possible node types are textWrap, textList, or textLine.
|
||||
type textNode interface {
|
||||
// Len reports the length in bytes of a single-line version of the tree.
|
||||
// Nested textRecord.Diff and textRecord.Comment fields are ignored.
|
||||
Len() int
|
||||
// Equal reports whether the two trees are structurally identical.
|
||||
// Nested textRecord.Diff and textRecord.Comment fields are compared.
|
||||
Equal(textNode) bool
|
||||
// String returns the string representation of the text tree.
|
||||
// It is not guaranteed that len(x.String()) == x.Len(),
|
||||
// nor that x.String() == y.String() implies that x.Equal(y).
|
||||
String() string
|
||||
|
||||
// formatCompactTo formats the contents of the tree as a single-line string
|
||||
// to the provided buffer. Any nested textRecord.Diff and textRecord.Comment
|
||||
// fields are ignored.
|
||||
//
|
||||
// However, not all nodes in the tree should be collapsed as a single-line.
|
||||
// If a node can be collapsed as a single-line, it is replaced by a textLine
|
||||
// node. Since the top-level node cannot replace itself, this also returns
|
||||
// the current node itself.
|
||||
//
|
||||
// This does not mutate the receiver.
|
||||
formatCompactTo([]byte, diffMode) ([]byte, textNode)
|
||||
// formatExpandedTo formats the contents of the tree as a multi-line string
|
||||
// to the provided buffer. In order for column alignment to operate well,
|
||||
// formatCompactTo must be called before calling formatExpandedTo.
|
||||
formatExpandedTo([]byte, diffMode, indentMode) []byte
|
||||
}
|
||||
|
||||
// textWrap is a wrapper that concatenates a prefix and/or a suffix
|
||||
// to the underlying node.
|
||||
type textWrap struct {
|
||||
Prefix string // e.g., "bytes.Buffer{"
|
||||
Value textNode // textWrap | textList | textLine
|
||||
Suffix string // e.g., "}"
|
||||
}
|
||||
|
||||
func (s textWrap) Len() int {
|
||||
return len(s.Prefix) + s.Value.Len() + len(s.Suffix)
|
||||
}
|
||||
func (s1 textWrap) Equal(s2 textNode) bool {
|
||||
if s2, ok := s2.(textWrap); ok {
|
||||
return s1.Prefix == s2.Prefix && s1.Value.Equal(s2.Value) && s1.Suffix == s2.Suffix
|
||||
}
|
||||
return false
|
||||
}
|
||||
func (s textWrap) String() string {
|
||||
var d diffMode
|
||||
var n indentMode
|
||||
_, s2 := s.formatCompactTo(nil, d)
|
||||
b := n.appendIndent(nil, d) // Leading indent
|
||||
b = s2.formatExpandedTo(b, d, n) // Main body
|
||||
b = append(b, '\n') // Trailing newline
|
||||
return string(b)
|
||||
}
|
||||
func (s textWrap) formatCompactTo(b []byte, d diffMode) ([]byte, textNode) {
|
||||
n0 := len(b) // Original buffer length
|
||||
b = append(b, s.Prefix...)
|
||||
b, s.Value = s.Value.formatCompactTo(b, d)
|
||||
b = append(b, s.Suffix...)
|
||||
if _, ok := s.Value.(textLine); ok {
|
||||
return b, textLine(b[n0:])
|
||||
}
|
||||
return b, s
|
||||
}
|
||||
func (s textWrap) formatExpandedTo(b []byte, d diffMode, n indentMode) []byte {
|
||||
b = append(b, s.Prefix...)
|
||||
b = s.Value.formatExpandedTo(b, d, n)
|
||||
b = append(b, s.Suffix...)
|
||||
return b
|
||||
}
|
||||
|
||||
// textList is a comma-separated list of textWrap or textLine nodes.
|
||||
// The list may be formatted as multi-lines or single-line at the discretion
|
||||
// of the textList.formatCompactTo method.
|
||||
type textList []textRecord
|
||||
type textRecord struct {
|
||||
Diff diffMode // e.g., 0 or '-' or '+'
|
||||
Key string // e.g., "MyField"
|
||||
Value textNode // textWrap | textLine
|
||||
Comment fmt.Stringer // e.g., "6 identical fields"
|
||||
}
|
||||
|
||||
// AppendEllipsis appends a new ellipsis node to the list if none already
|
||||
// exists at the end. If cs is non-zero it coalesces the statistics with the
|
||||
// previous diffStats.
|
||||
func (s *textList) AppendEllipsis(ds diffStats) {
|
||||
hasStats := ds != diffStats{}
|
||||
if len(*s) == 0 || !(*s)[len(*s)-1].Value.Equal(textEllipsis) {
|
||||
if hasStats {
|
||||
*s = append(*s, textRecord{Value: textEllipsis, Comment: ds})
|
||||
} else {
|
||||
*s = append(*s, textRecord{Value: textEllipsis})
|
||||
}
|
||||
return
|
||||
}
|
||||
if hasStats {
|
||||
(*s)[len(*s)-1].Comment = (*s)[len(*s)-1].Comment.(diffStats).Append(ds)
|
||||
}
|
||||
}
|
||||
|
||||
func (s textList) Len() (n int) {
|
||||
for i, r := range s {
|
||||
n += len(r.Key)
|
||||
if r.Key != "" {
|
||||
n += len(": ")
|
||||
}
|
||||
n += r.Value.Len()
|
||||
if i < len(s)-1 {
|
||||
n += len(", ")
|
||||
}
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (s1 textList) Equal(s2 textNode) bool {
|
||||
if s2, ok := s2.(textList); ok {
|
||||
if len(s1) != len(s2) {
|
||||
return false
|
||||
}
|
||||
for i := range s1 {
|
||||
r1, r2 := s1[i], s2[i]
|
||||
if !(r1.Diff == r2.Diff && r1.Key == r2.Key && r1.Value.Equal(r2.Value) && r1.Comment == r2.Comment) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (s textList) String() string {
|
||||
return textWrap{"{", s, "}"}.String()
|
||||
}
|
||||
|
||||
func (s textList) formatCompactTo(b []byte, d diffMode) ([]byte, textNode) {
|
||||
s = append(textList(nil), s...) // Avoid mutating original
|
||||
|
||||
// Determine whether we can collapse this list as a single line.
|
||||
n0 := len(b) // Original buffer length
|
||||
var multiLine bool
|
||||
for i, r := range s {
|
||||
if r.Diff == diffInserted || r.Diff == diffRemoved {
|
||||
multiLine = true
|
||||
}
|
||||
b = append(b, r.Key...)
|
||||
if r.Key != "" {
|
||||
b = append(b, ": "...)
|
||||
}
|
||||
b, s[i].Value = r.Value.formatCompactTo(b, d|r.Diff)
|
||||
if _, ok := s[i].Value.(textLine); !ok {
|
||||
multiLine = true
|
||||
}
|
||||
if r.Comment != nil {
|
||||
multiLine = true
|
||||
}
|
||||
if i < len(s)-1 {
|
||||
b = append(b, ", "...)
|
||||
}
|
||||
}
|
||||
// Force multi-lined output when printing a removed/inserted node that
|
||||
// is sufficiently long.
|
||||
if (d == diffInserted || d == diffRemoved) && len(b[n0:]) > 80 {
|
||||
multiLine = true
|
||||
}
|
||||
if !multiLine {
|
||||
return b, textLine(b[n0:])
|
||||
}
|
||||
return b, s
|
||||
}
|
||||
|
||||
func (s textList) formatExpandedTo(b []byte, d diffMode, n indentMode) []byte {
|
||||
alignKeyLens := s.alignLens(
|
||||
func(r textRecord) bool {
|
||||
_, isLine := r.Value.(textLine)
|
||||
return r.Key == "" || !isLine
|
||||
},
|
||||
func(r textRecord) int { return len(r.Key) },
|
||||
)
|
||||
alignValueLens := s.alignLens(
|
||||
func(r textRecord) bool {
|
||||
_, isLine := r.Value.(textLine)
|
||||
return !isLine || r.Value.Equal(textEllipsis) || r.Comment == nil
|
||||
},
|
||||
func(r textRecord) int { return len(r.Value.(textLine)) },
|
||||
)
|
||||
|
||||
// Format the list as a multi-lined output.
|
||||
n++
|
||||
for i, r := range s {
|
||||
b = n.appendIndent(append(b, '\n'), d|r.Diff)
|
||||
if r.Key != "" {
|
||||
b = append(b, r.Key+": "...)
|
||||
}
|
||||
b = alignKeyLens[i].appendChar(b, ' ')
|
||||
|
||||
b = r.Value.formatExpandedTo(b, d|r.Diff, n)
|
||||
if !r.Value.Equal(textEllipsis) {
|
||||
b = append(b, ',')
|
||||
}
|
||||
b = alignValueLens[i].appendChar(b, ' ')
|
||||
|
||||
if r.Comment != nil {
|
||||
b = append(b, " // "+r.Comment.String()...)
|
||||
}
|
||||
}
|
||||
n--
|
||||
|
||||
return n.appendIndent(append(b, '\n'), d)
|
||||
}
|
||||
|
||||
func (s textList) alignLens(
|
||||
skipFunc func(textRecord) bool,
|
||||
lenFunc func(textRecord) int,
|
||||
) []repeatCount {
|
||||
var startIdx, endIdx, maxLen int
|
||||
lens := make([]repeatCount, len(s))
|
||||
for i, r := range s {
|
||||
if skipFunc(r) {
|
||||
for j := startIdx; j < endIdx && j < len(s); j++ {
|
||||
lens[j] = repeatCount(maxLen - lenFunc(s[j]))
|
||||
}
|
||||
startIdx, endIdx, maxLen = i+1, i+1, 0
|
||||
} else {
|
||||
if maxLen < lenFunc(r) {
|
||||
maxLen = lenFunc(r)
|
||||
}
|
||||
endIdx = i + 1
|
||||
}
|
||||
}
|
||||
for j := startIdx; j < endIdx && j < len(s); j++ {
|
||||
lens[j] = repeatCount(maxLen - lenFunc(s[j]))
|
||||
}
|
||||
return lens
|
||||
}
|
||||
|
||||
// textLine is a single-line segment of text and is always a leaf node
|
||||
// in the textNode tree.
|
||||
type textLine []byte
|
||||
|
||||
var (
|
||||
textNil = textLine("nil")
|
||||
textEllipsis = textLine("...")
|
||||
)
|
||||
|
||||
func (s textLine) Len() int {
|
||||
return len(s)
|
||||
}
|
||||
func (s1 textLine) Equal(s2 textNode) bool {
|
||||
if s2, ok := s2.(textLine); ok {
|
||||
return bytes.Equal([]byte(s1), []byte(s2))
|
||||
}
|
||||
return false
|
||||
}
|
||||
func (s textLine) String() string {
|
||||
return string(s)
|
||||
}
|
||||
func (s textLine) formatCompactTo(b []byte, d diffMode) ([]byte, textNode) {
|
||||
return append(b, s...), s
|
||||
}
|
||||
func (s textLine) formatExpandedTo(b []byte, _ diffMode, _ indentMode) []byte {
|
||||
return append(b, s...)
|
||||
}
|
||||
|
||||
type diffStats struct {
|
||||
Name string
|
||||
NumIgnored int
|
||||
NumIdentical int
|
||||
NumRemoved int
|
||||
NumInserted int
|
||||
NumModified int
|
||||
}
|
||||
|
||||
func (s diffStats) NumDiff() int {
|
||||
return s.NumRemoved + s.NumInserted + s.NumModified
|
||||
}
|
||||
|
||||
func (s diffStats) Append(ds diffStats) diffStats {
|
||||
assert(s.Name == ds.Name)
|
||||
s.NumIgnored += ds.NumIgnored
|
||||
s.NumIdentical += ds.NumIdentical
|
||||
s.NumRemoved += ds.NumRemoved
|
||||
s.NumInserted += ds.NumInserted
|
||||
s.NumModified += ds.NumModified
|
||||
return s
|
||||
}
|
||||
|
||||
// String prints a human-readable summary of coalesced records.
|
||||
//
|
||||
// Example:
|
||||
// diffStats{Name: "Field", NumIgnored: 5}.String() => "5 ignored fields"
|
||||
func (s diffStats) String() string {
|
||||
var ss []string
|
||||
var sum int
|
||||
labels := [...]string{"ignored", "identical", "removed", "inserted", "modified"}
|
||||
counts := [...]int{s.NumIgnored, s.NumIdentical, s.NumRemoved, s.NumInserted, s.NumModified}
|
||||
for i, n := range counts {
|
||||
if n > 0 {
|
||||
ss = append(ss, fmt.Sprintf("%d %v", n, labels[i]))
|
||||
}
|
||||
sum += n
|
||||
}
|
||||
|
||||
// Pluralize the name (adjusting for some obscure English grammar rules).
|
||||
name := s.Name
|
||||
if sum > 1 {
|
||||
name = name + "s"
|
||||
if strings.HasSuffix(name, "ys") {
|
||||
name = name[:len(name)-2] + "ies" // e.g., "entrys" => "entries"
|
||||
}
|
||||
}
|
||||
|
||||
// Format the list according to English grammar (with Oxford comma).
|
||||
switch n := len(ss); n {
|
||||
case 0:
|
||||
return ""
|
||||
case 1, 2:
|
||||
return strings.Join(ss, " and ") + " " + name
|
||||
default:
|
||||
return strings.Join(ss[:n-1], ", ") + ", and " + ss[n-1] + " " + name
|
||||
}
|
||||
}
|
||||
|
||||
type commentString string
|
||||
|
||||
func (s commentString) String() string { return string(s) }
|
||||
121
vendor/github.com/google/go-cmp/cmp/report_value.go
generated
vendored
Normal file
121
vendor/github.com/google/go-cmp/cmp/report_value.go
generated
vendored
Normal file
@@ -0,0 +1,121 @@
|
||||
// Copyright 2019, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import "reflect"
|
||||
|
||||
// valueNode represents a single node within a report, which is a
|
||||
// structured representation of the value tree, containing information
|
||||
// regarding which nodes are equal or not.
|
||||
type valueNode struct {
|
||||
parent *valueNode
|
||||
|
||||
Type reflect.Type
|
||||
ValueX reflect.Value
|
||||
ValueY reflect.Value
|
||||
|
||||
// NumSame is the number of leaf nodes that are equal.
|
||||
// All descendants are equal only if NumDiff is 0.
|
||||
NumSame int
|
||||
// NumDiff is the number of leaf nodes that are not equal.
|
||||
NumDiff int
|
||||
// NumIgnored is the number of leaf nodes that are ignored.
|
||||
NumIgnored int
|
||||
// NumCompared is the number of leaf nodes that were compared
|
||||
// using an Equal method or Comparer function.
|
||||
NumCompared int
|
||||
// NumTransformed is the number of non-leaf nodes that were transformed.
|
||||
NumTransformed int
|
||||
// NumChildren is the number of transitive descendants of this node.
|
||||
// This counts from zero; thus, leaf nodes have no descendants.
|
||||
NumChildren int
|
||||
// MaxDepth is the maximum depth of the tree. This counts from zero;
|
||||
// thus, leaf nodes have a depth of zero.
|
||||
MaxDepth int
|
||||
|
||||
// Records is a list of struct fields, slice elements, or map entries.
|
||||
Records []reportRecord // If populated, implies Value is not populated
|
||||
|
||||
// Value is the result of a transformation, pointer indirection, or
|
||||
// type assertion.
|
||||
Value *valueNode // If populated, implies Records is not populated
|
||||
|
||||
// TransformerName is the name of the transformer.
|
||||
TransformerName string // If non-empty, implies Value is populated
|
||||
}
|
||||
type reportRecord struct {
|
||||
Key reflect.Value // Invalid for slice element
|
||||
Value *valueNode
|
||||
}
|
||||
|
||||
func (parent *valueNode) PushStep(ps PathStep) (child *valueNode) {
|
||||
vx, vy := ps.Values()
|
||||
child = &valueNode{parent: parent, Type: ps.Type(), ValueX: vx, ValueY: vy}
|
||||
switch s := ps.(type) {
|
||||
case StructField:
|
||||
assert(parent.Value == nil)
|
||||
parent.Records = append(parent.Records, reportRecord{Key: reflect.ValueOf(s.Name()), Value: child})
|
||||
case SliceIndex:
|
||||
assert(parent.Value == nil)
|
||||
parent.Records = append(parent.Records, reportRecord{Value: child})
|
||||
case MapIndex:
|
||||
assert(parent.Value == nil)
|
||||
parent.Records = append(parent.Records, reportRecord{Key: s.Key(), Value: child})
|
||||
case Indirect:
|
||||
assert(parent.Value == nil && parent.Records == nil)
|
||||
parent.Value = child
|
||||
case TypeAssertion:
|
||||
assert(parent.Value == nil && parent.Records == nil)
|
||||
parent.Value = child
|
||||
case Transform:
|
||||
assert(parent.Value == nil && parent.Records == nil)
|
||||
parent.Value = child
|
||||
parent.TransformerName = s.Name()
|
||||
parent.NumTransformed++
|
||||
default:
|
||||
assert(parent == nil) // Must be the root step
|
||||
}
|
||||
return child
|
||||
}
|
||||
|
||||
func (r *valueNode) Report(rs Result) {
|
||||
assert(r.MaxDepth == 0) // May only be called on leaf nodes
|
||||
|
||||
if rs.ByIgnore() {
|
||||
r.NumIgnored++
|
||||
} else {
|
||||
if rs.Equal() {
|
||||
r.NumSame++
|
||||
} else {
|
||||
r.NumDiff++
|
||||
}
|
||||
}
|
||||
assert(r.NumSame+r.NumDiff+r.NumIgnored == 1)
|
||||
|
||||
if rs.ByMethod() {
|
||||
r.NumCompared++
|
||||
}
|
||||
if rs.ByFunc() {
|
||||
r.NumCompared++
|
||||
}
|
||||
assert(r.NumCompared <= 1)
|
||||
}
|
||||
|
||||
func (child *valueNode) PopStep() (parent *valueNode) {
|
||||
if child.parent == nil {
|
||||
return nil
|
||||
}
|
||||
parent = child.parent
|
||||
parent.NumSame += child.NumSame
|
||||
parent.NumDiff += child.NumDiff
|
||||
parent.NumIgnored += child.NumIgnored
|
||||
parent.NumCompared += child.NumCompared
|
||||
parent.NumTransformed += child.NumTransformed
|
||||
parent.NumChildren += child.NumChildren + 1
|
||||
if parent.MaxDepth < child.MaxDepth+1 {
|
||||
parent.MaxDepth = child.MaxDepth + 1
|
||||
}
|
||||
return parent
|
||||
}
|
||||
53
vendor/github.com/google/go-cmp/cmp/reporter.go
generated
vendored
53
vendor/github.com/google/go-cmp/cmp/reporter.go
generated
vendored
@@ -1,53 +0,0 @@
|
||||
// Copyright 2017, The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE.md file.
|
||||
|
||||
package cmp
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/google/go-cmp/cmp/internal/value"
|
||||
)
|
||||
|
||||
type defaultReporter struct {
|
||||
Option
|
||||
diffs []string // List of differences, possibly truncated
|
||||
ndiffs int // Total number of differences
|
||||
nbytes int // Number of bytes in diffs
|
||||
nlines int // Number of lines in diffs
|
||||
}
|
||||
|
||||
var _ reporter = (*defaultReporter)(nil)
|
||||
|
||||
func (r *defaultReporter) Report(x, y reflect.Value, eq bool, p Path) {
|
||||
if eq {
|
||||
return // Ignore equal results
|
||||
}
|
||||
const maxBytes = 4096
|
||||
const maxLines = 256
|
||||
r.ndiffs++
|
||||
if r.nbytes < maxBytes && r.nlines < maxLines {
|
||||
sx := value.Format(x, value.FormatConfig{UseStringer: true})
|
||||
sy := value.Format(y, value.FormatConfig{UseStringer: true})
|
||||
if sx == sy {
|
||||
// Unhelpful output, so use more exact formatting.
|
||||
sx = value.Format(x, value.FormatConfig{PrintPrimitiveType: true})
|
||||
sy = value.Format(y, value.FormatConfig{PrintPrimitiveType: true})
|
||||
}
|
||||
s := fmt.Sprintf("%#v:\n\t-: %s\n\t+: %s\n", p, sx, sy)
|
||||
r.diffs = append(r.diffs, s)
|
||||
r.nbytes += len(s)
|
||||
r.nlines += strings.Count(s, "\n")
|
||||
}
|
||||
}
|
||||
|
||||
func (r *defaultReporter) String() string {
|
||||
s := strings.Join(r.diffs, "")
|
||||
if r.ndiffs == len(r.diffs) {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("%s... %d more differences ...", s, r.ndiffs-len(r.diffs))
|
||||
}
|
||||
13
vendor/github.com/google/gofuzz/.travis.yml
generated
vendored
Normal file
13
vendor/github.com/google/gofuzz/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,13 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.4
|
||||
- 1.3
|
||||
- 1.2
|
||||
- tip
|
||||
|
||||
install:
|
||||
- if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
|
||||
|
||||
script:
|
||||
- go test -cover
|
||||
67
vendor/github.com/google/gofuzz/CONTRIBUTING.md
generated
vendored
Normal file
67
vendor/github.com/google/gofuzz/CONTRIBUTING.md
generated
vendored
Normal file
@@ -0,0 +1,67 @@
|
||||
# How to contribute #
|
||||
|
||||
We'd love to accept your patches and contributions to this project. There are
|
||||
just a few small guidelines you need to follow.
|
||||
|
||||
|
||||
## Contributor License Agreement ##
|
||||
|
||||
Contributions to any Google project must be accompanied by a Contributor
|
||||
License Agreement. This is not a copyright **assignment**; it simply gives
|
||||
Google permission to use and redistribute your contributions as part of the
|
||||
project.
|
||||
|
||||
* If you are an individual writing original source code and you're sure you
|
||||
own the intellectual property, then you'll need to sign an [individual
|
||||
CLA][].
|
||||
|
||||
* If you work for a company that wants to allow you to contribute your work,
|
||||
then you'll need to sign a [corporate CLA][].
|
||||
|
||||
You generally only need to submit a CLA once, so if you've already submitted
|
||||
one (even if it was for a different project), you probably don't need to do it
|
||||
again.
|
||||
|
||||
[individual CLA]: https://developers.google.com/open-source/cla/individual
|
||||
[corporate CLA]: https://developers.google.com/open-source/cla/corporate
|
||||
|
||||
|
||||
## Submitting a patch ##
|
||||
|
||||
1. It's generally best to start by opening a new issue describing the bug or
|
||||
feature you're intending to fix. Even if you think it's relatively minor,
|
||||
it's helpful to know what people are working on. Mention in the initial
|
||||
issue that you are planning to work on that bug or feature so that it can
|
||||
be assigned to you.
|
||||
|
||||
1. Follow the normal process of [forking][] the project, and set up a new
|
||||
branch to work in. It's important that each group of changes be done in
|
||||
separate branches in order to ensure that a pull request only includes the
|
||||
commits related to that bug or feature.
|
||||
|
||||
1. Go makes it very simple to ensure properly formatted code, so always run
|
||||
`go fmt` on your code before committing it. You should also run
|
||||
[golint][] over your code. As noted in the [golint readme][], it's not
|
||||
strictly necessary that your code be completely "lint-free", but this will
|
||||
help you find common style issues.
|
||||
|
||||
1. Any significant changes should almost always be accompanied by tests. The
|
||||
project already has good test coverage, so look at some of the existing
|
||||
tests if you're unsure how to go about it. [gocov][] and [gocov-html][]
|
||||
are invaluable tools for seeing which parts of your code aren't being
|
||||
exercised by your tests.
|
||||
|
||||
1. Do your best to have [well-formed commit messages][] for each change.
|
||||
This provides consistency throughout the project, and ensures that commit
|
||||
messages can be formatted properly by various git tools.
|
||||
|
||||
1. Finally, push the commits to your fork and submit a [pull request][].
|
||||
|
||||
[forking]: https://help.github.com/articles/fork-a-repo
|
||||
[golint]: https://github.com/golang/lint
|
||||
[golint readme]: https://github.com/golang/lint/blob/master/README
|
||||
[gocov]: https://github.com/axw/gocov
|
||||
[gocov-html]: https://github.com/matm/gocov-html
|
||||
[well-formed commit messages]: http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html
|
||||
[squash]: http://git-scm.com/book/en/Git-Tools-Rewriting-History#Squashing-Commits
|
||||
[pull request]: https://help.github.com/articles/creating-a-pull-request
|
||||
71
vendor/github.com/google/gofuzz/README.md
generated
vendored
Normal file
71
vendor/github.com/google/gofuzz/README.md
generated
vendored
Normal file
@@ -0,0 +1,71 @@
|
||||
gofuzz
|
||||
======
|
||||
|
||||
gofuzz is a library for populating Go objects with random values.
|
||||
|
||||
[](https://godoc.org/github.com/google/gofuzz)
|
||||
[](https://travis-ci.org/google/gofuzz)
|
||||
|
||||
This is useful for testing:
|
||||
|
||||
* Do your project's objects really serialize/unserialize correctly in all cases?
|
||||
* Is there an incorrectly formatted object that will cause your project to panic?
|
||||
|
||||
Import with ```import "github.com/google/gofuzz"```
|
||||
|
||||
You can use it on single variables:
|
||||
```go
|
||||
f := fuzz.New()
|
||||
var myInt int
|
||||
f.Fuzz(&myInt) // myInt gets a random value.
|
||||
```
|
||||
|
||||
You can use it on maps:
|
||||
```go
|
||||
f := fuzz.New().NilChance(0).NumElements(1, 1)
|
||||
var myMap map[ComplexKeyType]string
|
||||
f.Fuzz(&myMap) // myMap will have exactly one element.
|
||||
```
|
||||
|
||||
Customize the chance of getting a nil pointer:
|
||||
```go
|
||||
f := fuzz.New().NilChance(.5)
|
||||
var fancyStruct struct {
|
||||
A, B, C, D *string
|
||||
}
|
||||
f.Fuzz(&fancyStruct) // About half the pointers should be set.
|
||||
```
|
||||
|
||||
You can even customize the randomization completely if needed:
|
||||
```go
|
||||
type MyEnum string
|
||||
const (
|
||||
A MyEnum = "A"
|
||||
B MyEnum = "B"
|
||||
)
|
||||
type MyInfo struct {
|
||||
Type MyEnum
|
||||
AInfo *string
|
||||
BInfo *string
|
||||
}
|
||||
|
||||
f := fuzz.New().NilChance(0).Funcs(
|
||||
func(e *MyInfo, c fuzz.Continue) {
|
||||
switch c.Intn(2) {
|
||||
case 0:
|
||||
e.Type = A
|
||||
c.Fuzz(&e.AInfo)
|
||||
case 1:
|
||||
e.Type = B
|
||||
c.Fuzz(&e.BInfo)
|
||||
}
|
||||
},
|
||||
)
|
||||
|
||||
var myObject MyInfo
|
||||
f.Fuzz(&myObject) // Type will correspond to whether A or B info is set.
|
||||
```
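
To tie the pieces above together, here is one more small sketch that uses only the calls already shown (`New`, `NilChance`, `NumElements`, `Fuzz`); the `Server` struct is a hypothetical example type, not part of this library:
```go
package main

import (
	"fmt"

	fuzz "github.com/google/gofuzz"
)

// Server is a hypothetical struct used only for this illustration.
type Server struct {
	Host string
	Port int
	Tags map[string]string
}

func main() {
	// No nil maps, and every slice/map gets between 2 and 5 elements.
	f := fuzz.New().NilChance(0).NumElements(2, 5)

	var servers []Server
	f.Fuzz(&servers) // servers now holds 2-5 randomly populated entries.
	fmt.Printf("generated %d servers, first: %+v\n", len(servers), servers[0])
}
```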
|
||||
|
||||
See more examples in ```example_test.go```.
|
||||
|
||||
Happy testing!
|
||||
3
vendor/github.com/google/gofuzz/go.mod
generated
vendored
Normal file
3
vendor/github.com/google/gofuzz/go.mod
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
module github.com/google/gofuzz
|
||||
|
||||
go 1.12
|
||||
663
vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.proto
generated
vendored
Normal file
663
vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.proto
generated
vendored
Normal file
@@ -0,0 +1,663 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// THIS FILE IS AUTOMATICALLY GENERATED.
|
||||
|
||||
syntax = "proto3";
|
||||
|
||||
package openapi.v2;
|
||||
|
||||
import "google/protobuf/any.proto";
|
||||
|
||||
// This option lets the proto compiler generate Java code inside the package
|
||||
// name (see below) instead of inside an outer class. It creates a simpler
|
||||
// developer experience by reducing one level of name nesting and being
|
||||
// consistent with most programming languages that don't support outer classes.
|
||||
option java_multiple_files = true;
|
||||
|
||||
// The Java outer classname should be the filename in UpperCamelCase. This
|
||||
// class is only used to hold proto descriptor, so developers don't need to
|
||||
// work with it directly.
|
||||
option java_outer_classname = "OpenAPIProto";
|
||||
|
||||
// The Java package name must be proto package name with proper prefix.
|
||||
option java_package = "org.openapi_v2";
|
||||
|
||||
// A reasonable prefix for the Objective-C symbols generated from the package.
|
||||
// It should at a minimum be 3 characters long, all uppercase, and convention
|
||||
// is to use an abbreviation of the package name. Something short, but
|
||||
// hopefully unique enough to not conflict with things that may come along in
|
||||
// the future. 'GPB' is reserved for the protocol buffer implementation itself.
|
||||
option objc_class_prefix = "OAS";
|
||||
|
||||
message AdditionalPropertiesItem {
|
||||
oneof oneof {
|
||||
Schema schema = 1;
|
||||
bool boolean = 2;
|
||||
}
|
||||
}
|
||||
|
||||
message Any {
|
||||
google.protobuf.Any value = 1;
|
||||
string yaml = 2;
|
||||
}
|
||||
|
||||
message ApiKeySecurity {
|
||||
string type = 1;
|
||||
string name = 2;
|
||||
string in = 3;
|
||||
string description = 4;
|
||||
repeated NamedAny vendor_extension = 5;
|
||||
}
|
||||
|
||||
message BasicAuthenticationSecurity {
|
||||
string type = 1;
|
||||
string description = 2;
|
||||
repeated NamedAny vendor_extension = 3;
|
||||
}
|
||||
|
||||
message BodyParameter {
|
||||
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
|
||||
string description = 1;
|
||||
// The name of the parameter.
|
||||
string name = 2;
|
||||
// Determines the location of the parameter.
|
||||
string in = 3;
|
||||
// Determines whether or not this parameter is required or optional.
|
||||
bool required = 4;
|
||||
Schema schema = 5;
|
||||
repeated NamedAny vendor_extension = 6;
|
||||
}
|
||||
|
||||
// Contact information for the owners of the API.
|
||||
message Contact {
|
||||
// The identifying name of the contact person/organization.
|
||||
string name = 1;
|
||||
// The URL pointing to the contact information.
|
||||
string url = 2;
|
||||
// The email address of the contact person/organization.
|
||||
string email = 3;
|
||||
repeated NamedAny vendor_extension = 4;
|
||||
}
|
||||
|
||||
message Default {
|
||||
repeated NamedAny additional_properties = 1;
|
||||
}
|
||||
|
||||
// One or more JSON objects describing the schemas being consumed and produced by the API.
|
||||
message Definitions {
|
||||
repeated NamedSchema additional_properties = 1;
|
||||
}
|
||||
|
||||
message Document {
|
||||
// The Swagger version of this document.
|
||||
string swagger = 1;
|
||||
Info info = 2;
|
||||
// The host (name or ip) of the API. Example: 'swagger.io'
|
||||
string host = 3;
|
||||
// The base path to the API. Example: '/api'.
|
||||
string base_path = 4;
|
||||
// The transfer protocol of the API.
|
||||
repeated string schemes = 5;
|
||||
// A list of MIME types accepted by the API.
|
||||
repeated string consumes = 6;
|
||||
// A list of MIME types the API can produce.
|
||||
repeated string produces = 7;
|
||||
Paths paths = 8;
|
||||
Definitions definitions = 9;
|
||||
ParameterDefinitions parameters = 10;
|
||||
ResponseDefinitions responses = 11;
|
||||
repeated SecurityRequirement security = 12;
|
||||
SecurityDefinitions security_definitions = 13;
|
||||
repeated Tag tags = 14;
|
||||
ExternalDocs external_docs = 15;
|
||||
repeated NamedAny vendor_extension = 16;
|
||||
}
|
||||
|
||||
message Examples {
|
||||
repeated NamedAny additional_properties = 1;
|
||||
}
|
||||
|
||||
// information about external documentation
|
||||
message ExternalDocs {
|
||||
string description = 1;
|
||||
string url = 2;
|
||||
repeated NamedAny vendor_extension = 3;
|
||||
}
|
||||
|
||||
// A deterministic version of a JSON Schema object.
|
||||
message FileSchema {
|
||||
string format = 1;
|
||||
string title = 2;
|
||||
string description = 3;
|
||||
Any default = 4;
|
||||
repeated string required = 5;
|
||||
string type = 6;
|
||||
bool read_only = 7;
|
||||
ExternalDocs external_docs = 8;
|
||||
Any example = 9;
|
||||
repeated NamedAny vendor_extension = 10;
|
||||
}
|
||||
|
||||
message FormDataParameterSubSchema {
|
||||
// Determines whether or not this parameter is required or optional.
|
||||
bool required = 1;
|
||||
// Determines the location of the parameter.
|
||||
string in = 2;
|
||||
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
// The name of the parameter.
|
||||
string name = 4;
|
||||
// Allows sending a parameter by name only or with an empty value.
|
||||
bool allow_empty_value = 5;
|
||||
string type = 6;
|
||||
string format = 7;
|
||||
PrimitivesItems items = 8;
|
||||
string collection_format = 9;
|
||||
Any default = 10;
|
||||
double maximum = 11;
|
||||
bool exclusive_maximum = 12;
|
||||
double minimum = 13;
|
||||
bool exclusive_minimum = 14;
|
||||
int64 max_length = 15;
|
||||
int64 min_length = 16;
|
||||
string pattern = 17;
|
||||
int64 max_items = 18;
|
||||
int64 min_items = 19;
|
||||
bool unique_items = 20;
|
||||
repeated Any enum = 21;
|
||||
double multiple_of = 22;
|
||||
repeated NamedAny vendor_extension = 23;
|
||||
}
|
||||
|
||||
message Header {
|
||||
string type = 1;
|
||||
string format = 2;
|
||||
PrimitivesItems items = 3;
|
||||
string collection_format = 4;
|
||||
Any default = 5;
|
||||
double maximum = 6;
|
||||
bool exclusive_maximum = 7;
|
||||
double minimum = 8;
|
||||
bool exclusive_minimum = 9;
|
||||
int64 max_length = 10;
|
||||
int64 min_length = 11;
|
||||
string pattern = 12;
|
||||
int64 max_items = 13;
|
||||
int64 min_items = 14;
|
||||
bool unique_items = 15;
|
||||
repeated Any enum = 16;
|
||||
double multiple_of = 17;
|
||||
string description = 18;
|
||||
repeated NamedAny vendor_extension = 19;
|
||||
}
|
||||
|
||||
message HeaderParameterSubSchema {
|
||||
// Determines whether or not this parameter is required or optional.
|
||||
bool required = 1;
|
||||
// Determines the location of the parameter.
|
||||
string in = 2;
|
||||
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
// The name of the parameter.
|
||||
string name = 4;
|
||||
string type = 5;
|
||||
string format = 6;
|
||||
PrimitivesItems items = 7;
|
||||
string collection_format = 8;
|
||||
Any default = 9;
|
||||
double maximum = 10;
|
||||
bool exclusive_maximum = 11;
|
||||
double minimum = 12;
|
||||
bool exclusive_minimum = 13;
|
||||
int64 max_length = 14;
|
||||
int64 min_length = 15;
|
||||
string pattern = 16;
|
||||
int64 max_items = 17;
|
||||
int64 min_items = 18;
|
||||
bool unique_items = 19;
|
||||
repeated Any enum = 20;
|
||||
double multiple_of = 21;
|
||||
repeated NamedAny vendor_extension = 22;
|
||||
}
|
||||
|
||||
message Headers {
|
||||
repeated NamedHeader additional_properties = 1;
|
||||
}
|
||||
|
||||
// General information about the API.
|
||||
message Info {
|
||||
// A unique and precise title of the API.
|
||||
string title = 1;
|
||||
// A semantic version number of the API.
|
||||
string version = 2;
|
||||
// A longer description of the API. Should be different from the title. GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
// The terms of service for the API.
|
||||
string terms_of_service = 4;
|
||||
Contact contact = 5;
|
||||
License license = 6;
|
||||
repeated NamedAny vendor_extension = 7;
|
||||
}
|
||||
|
||||
message ItemsItem {
|
||||
repeated Schema schema = 1;
|
||||
}
|
||||
|
||||
message JsonReference {
|
||||
string _ref = 1;
|
||||
string description = 2;
|
||||
}
|
||||
|
||||
message License {
|
||||
// The name of the license type. It's encouraged to use an OSI compatible license.
|
||||
string name = 1;
|
||||
// The URL pointing to the license.
|
||||
string url = 2;
|
||||
repeated NamedAny vendor_extension = 3;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of Any as ordered (name,value) pairs.
|
||||
message NamedAny {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
Any value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of Header as ordered (name,value) pairs.
|
||||
message NamedHeader {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
Header value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of Parameter as ordered (name,value) pairs.
|
||||
message NamedParameter {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
Parameter value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of PathItem as ordered (name,value) pairs.
|
||||
message NamedPathItem {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
PathItem value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of Response as ordered (name,value) pairs.
|
||||
message NamedResponse {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
Response value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of ResponseValue as ordered (name,value) pairs.
|
||||
message NamedResponseValue {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
ResponseValue value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of Schema as ordered (name,value) pairs.
|
||||
message NamedSchema {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
Schema value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of SecurityDefinitionsItem as ordered (name,value) pairs.
|
||||
message NamedSecurityDefinitionsItem {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
SecurityDefinitionsItem value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of string as ordered (name,value) pairs.
|
||||
message NamedString {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
string value = 2;
|
||||
}
|
||||
|
||||
// Automatically-generated message used to represent maps of StringArray as ordered (name,value) pairs.
|
||||
message NamedStringArray {
|
||||
// Map key
|
||||
string name = 1;
|
||||
// Mapped value
|
||||
StringArray value = 2;
|
||||
}
|
||||
|
||||
message NonBodyParameter {
|
||||
oneof oneof {
|
||||
HeaderParameterSubSchema header_parameter_sub_schema = 1;
|
||||
FormDataParameterSubSchema form_data_parameter_sub_schema = 2;
|
||||
QueryParameterSubSchema query_parameter_sub_schema = 3;
|
||||
PathParameterSubSchema path_parameter_sub_schema = 4;
|
||||
}
|
||||
}
|
||||
|
||||
message Oauth2AccessCodeSecurity {
|
||||
string type = 1;
|
||||
string flow = 2;
|
||||
Oauth2Scopes scopes = 3;
|
||||
string authorization_url = 4;
|
||||
string token_url = 5;
|
||||
string description = 6;
|
||||
repeated NamedAny vendor_extension = 7;
|
||||
}
|
||||
|
||||
message Oauth2ApplicationSecurity {
|
||||
string type = 1;
|
||||
string flow = 2;
|
||||
Oauth2Scopes scopes = 3;
|
||||
string token_url = 4;
|
||||
string description = 5;
|
||||
repeated NamedAny vendor_extension = 6;
|
||||
}
|
||||
|
||||
message Oauth2ImplicitSecurity {
|
||||
string type = 1;
|
||||
string flow = 2;
|
||||
Oauth2Scopes scopes = 3;
|
||||
string authorization_url = 4;
|
||||
string description = 5;
|
||||
repeated NamedAny vendor_extension = 6;
|
||||
}
|
||||
|
||||
message Oauth2PasswordSecurity {
|
||||
string type = 1;
|
||||
string flow = 2;
|
||||
Oauth2Scopes scopes = 3;
|
||||
string token_url = 4;
|
||||
string description = 5;
|
||||
repeated NamedAny vendor_extension = 6;
|
||||
}
|
||||
|
||||
message Oauth2Scopes {
|
||||
repeated NamedString additional_properties = 1;
|
||||
}
|
||||
|
||||
message Operation {
|
||||
repeated string tags = 1;
|
||||
// A brief summary of the operation.
|
||||
string summary = 2;
|
||||
// A longer description of the operation, GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
ExternalDocs external_docs = 4;
|
||||
// A unique identifier of the operation.
|
||||
string operation_id = 5;
|
||||
// A list of MIME types the API can produce.
|
||||
repeated string produces = 6;
|
||||
// A list of MIME types the API can consume.
|
||||
repeated string consumes = 7;
|
||||
// The parameters needed to send a valid API call.
|
||||
repeated ParametersItem parameters = 8;
|
||||
Responses responses = 9;
|
||||
// The transfer protocol of the API.
|
||||
repeated string schemes = 10;
|
||||
bool deprecated = 11;
|
||||
repeated SecurityRequirement security = 12;
|
||||
repeated NamedAny vendor_extension = 13;
|
||||
}
|
||||
|
||||
message Parameter {
|
||||
oneof oneof {
|
||||
BodyParameter body_parameter = 1;
|
||||
NonBodyParameter non_body_parameter = 2;
|
||||
}
|
||||
}
|
||||
|
||||
// One or more JSON representations for parameters
|
||||
message ParameterDefinitions {
|
||||
repeated NamedParameter additional_properties = 1;
|
||||
}
|
||||
|
||||
message ParametersItem {
|
||||
oneof oneof {
|
||||
Parameter parameter = 1;
|
||||
JsonReference json_reference = 2;
|
||||
}
|
||||
}
|
||||
|
||||
message PathItem {
|
||||
string _ref = 1;
|
||||
Operation get = 2;
|
||||
Operation put = 3;
|
||||
Operation post = 4;
|
||||
Operation delete = 5;
|
||||
Operation options = 6;
|
||||
Operation head = 7;
|
||||
Operation patch = 8;
|
||||
// The parameters needed to send a valid API call.
|
||||
repeated ParametersItem parameters = 9;
|
||||
repeated NamedAny vendor_extension = 10;
|
||||
}
|
||||
|
||||
message PathParameterSubSchema {
|
||||
// Determines whether or not this parameter is required or optional.
|
||||
bool required = 1;
|
||||
// Determines the location of the parameter.
|
||||
string in = 2;
|
||||
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
// The name of the parameter.
|
||||
string name = 4;
|
||||
string type = 5;
|
||||
string format = 6;
|
||||
PrimitivesItems items = 7;
|
||||
string collection_format = 8;
|
||||
Any default = 9;
|
||||
double maximum = 10;
|
||||
bool exclusive_maximum = 11;
|
||||
double minimum = 12;
|
||||
bool exclusive_minimum = 13;
|
||||
int64 max_length = 14;
|
||||
int64 min_length = 15;
|
||||
string pattern = 16;
|
||||
int64 max_items = 17;
|
||||
int64 min_items = 18;
|
||||
bool unique_items = 19;
|
||||
repeated Any enum = 20;
|
||||
double multiple_of = 21;
|
||||
repeated NamedAny vendor_extension = 22;
|
||||
}
|
||||
|
||||
// Relative paths to the individual endpoints. They must be relative to the 'basePath'.
|
||||
message Paths {
|
||||
repeated NamedAny vendor_extension = 1;
|
||||
repeated NamedPathItem path = 2;
|
||||
}
|
||||
|
||||
message PrimitivesItems {
|
||||
string type = 1;
|
||||
string format = 2;
|
||||
PrimitivesItems items = 3;
|
||||
string collection_format = 4;
|
||||
Any default = 5;
|
||||
double maximum = 6;
|
||||
bool exclusive_maximum = 7;
|
||||
double minimum = 8;
|
||||
bool exclusive_minimum = 9;
|
||||
int64 max_length = 10;
|
||||
int64 min_length = 11;
|
||||
string pattern = 12;
|
||||
int64 max_items = 13;
|
||||
int64 min_items = 14;
|
||||
bool unique_items = 15;
|
||||
repeated Any enum = 16;
|
||||
double multiple_of = 17;
|
||||
repeated NamedAny vendor_extension = 18;
|
||||
}
|
||||
|
||||
message Properties {
|
||||
repeated NamedSchema additional_properties = 1;
|
||||
}
|
||||
|
||||
message QueryParameterSubSchema {
|
||||
// Determines whether or not this parameter is required or optional.
|
||||
bool required = 1;
|
||||
// Determines the location of the parameter.
|
||||
string in = 2;
|
||||
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
|
||||
string description = 3;
|
||||
// The name of the parameter.
|
||||
string name = 4;
|
||||
// Allows sending a parameter by name only or with an empty value.
|
||||
bool allow_empty_value = 5;
|
||||
string type = 6;
|
||||
string format = 7;
|
||||
PrimitivesItems items = 8;
|
||||
string collection_format = 9;
|
||||
Any default = 10;
|
||||
double maximum = 11;
|
||||
bool exclusive_maximum = 12;
|
||||
double minimum = 13;
|
||||
bool exclusive_minimum = 14;
|
||||
int64 max_length = 15;
|
||||
int64 min_length = 16;
|
||||
string pattern = 17;
|
||||
int64 max_items = 18;
|
||||
int64 min_items = 19;
|
||||
bool unique_items = 20;
|
||||
repeated Any enum = 21;
|
||||
double multiple_of = 22;
|
||||
repeated NamedAny vendor_extension = 23;
|
||||
}
|
||||
|
||||
message Response {
|
||||
string description = 1;
|
||||
SchemaItem schema = 2;
|
||||
Headers headers = 3;
|
||||
Examples examples = 4;
|
||||
repeated NamedAny vendor_extension = 5;
|
||||
}
|
||||
|
||||
// One or more JSON representations for responses
|
||||
message ResponseDefinitions {
|
||||
repeated NamedResponse additional_properties = 1;
|
||||
}
|
||||
|
||||
message ResponseValue {
|
||||
oneof oneof {
|
||||
Response response = 1;
|
||||
JsonReference json_reference = 2;
|
||||
}
|
||||
}
|
||||
|
||||
// Response objects names can either be any valid HTTP status code or 'default'.
|
||||
message Responses {
|
||||
repeated NamedResponseValue response_code = 1;
|
||||
repeated NamedAny vendor_extension = 2;
|
||||
}
|
||||
|
||||
// A deterministic version of a JSON Schema object.
|
||||
message Schema {
|
||||
string _ref = 1;
|
||||
string format = 2;
|
||||
string title = 3;
|
||||
string description = 4;
|
||||
Any default = 5;
|
||||
double multiple_of = 6;
|
||||
double maximum = 7;
|
||||
bool exclusive_maximum = 8;
|
||||
double minimum = 9;
|
||||
bool exclusive_minimum = 10;
|
||||
int64 max_length = 11;
|
||||
int64 min_length = 12;
|
||||
string pattern = 13;
|
||||
int64 max_items = 14;
|
||||
int64 min_items = 15;
|
||||
bool unique_items = 16;
|
||||
int64 max_properties = 17;
|
||||
int64 min_properties = 18;
|
||||
repeated string required = 19;
|
||||
repeated Any enum = 20;
|
||||
AdditionalPropertiesItem additional_properties = 21;
|
||||
TypeItem type = 22;
|
||||
ItemsItem items = 23;
|
||||
repeated Schema all_of = 24;
|
||||
Properties properties = 25;
|
||||
string discriminator = 26;
|
||||
bool read_only = 27;
|
||||
Xml xml = 28;
|
||||
ExternalDocs external_docs = 29;
|
||||
Any example = 30;
|
||||
repeated NamedAny vendor_extension = 31;
|
||||
}
|
||||
|
||||
message SchemaItem {
|
||||
oneof oneof {
|
||||
Schema schema = 1;
|
||||
FileSchema file_schema = 2;
|
||||
}
|
||||
}
|
||||
|
||||
message SecurityDefinitions {
|
||||
repeated NamedSecurityDefinitionsItem additional_properties = 1;
|
||||
}
|
||||
|
||||
message SecurityDefinitionsItem {
|
||||
oneof oneof {
|
||||
BasicAuthenticationSecurity basic_authentication_security = 1;
|
||||
ApiKeySecurity api_key_security = 2;
|
||||
Oauth2ImplicitSecurity oauth2_implicit_security = 3;
|
||||
Oauth2PasswordSecurity oauth2_password_security = 4;
|
||||
Oauth2ApplicationSecurity oauth2_application_security = 5;
|
||||
Oauth2AccessCodeSecurity oauth2_access_code_security = 6;
|
||||
}
|
||||
}
|
||||
|
||||
message SecurityRequirement {
|
||||
repeated NamedStringArray additional_properties = 1;
|
||||
}
|
||||
|
||||
message StringArray {
|
||||
repeated string value = 1;
|
||||
}
|
||||
|
||||
message Tag {
|
||||
string name = 1;
|
||||
string description = 2;
|
||||
ExternalDocs external_docs = 3;
|
||||
repeated NamedAny vendor_extension = 4;
|
||||
}
|
||||
|
||||
message TypeItem {
|
||||
repeated string value = 1;
|
||||
}
|
||||
|
||||
// Any property starting with x- is valid.
|
||||
message VendorExtension {
|
||||
repeated NamedAny additional_properties = 1;
|
||||
}
|
||||
|
||||
message Xml {
|
||||
string name = 1;
|
||||
string namespace = 2;
|
||||
string prefix = 3;
|
||||
bool attribute = 4;
|
||||
bool wrapped = 5;
|
||||
repeated NamedAny vendor_extension = 6;
|
||||
}
|
||||
|
||||
16
vendor/github.com/googleapis/gnostic/OpenAPIv2/README.md
generated
vendored
Normal file
16
vendor/github.com/googleapis/gnostic/OpenAPIv2/README.md
generated
vendored
Normal file
@@ -0,0 +1,16 @@
|
||||
# OpenAPI v2 Protocol Buffer Models
|
||||
|
||||
This directory contains a Protocol Buffer-language model
|
||||
and related code for supporting OpenAPI v2.
|
||||
|
||||
Gnostic applications and plugins can use OpenAPIv2.proto
|
||||
to generate Protocol Buffer support code for their preferred languages.
|
||||
|
||||
OpenAPIv2.go is used by Gnostic to read JSON and YAML OpenAPI
|
||||
descriptions into the Protocol Buffer-based datastructures
|
||||
generated from OpenAPIv2.proto.
|
||||
|
||||
OpenAPIv2.proto and OpenAPIv2.go are generated by the Gnostic
|
||||
compiler generator, and OpenAPIv2.pb.go is generated by
|
||||
protoc, the Protocol Buffer compiler, and protoc-gen-go, the
|
||||
Protocol Buffer Go code generation plugin.
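
As a rough illustration of how the generated Go datastructures are used, the sketch below builds a minimal `Document` by hand. The `openapi_v2` package name and the CamelCase field names are assumptions based on the standard protoc-gen-go mapping of OpenAPIv2.proto, not something this README documents:
```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2"
)

func main() {
	// A minimal OpenAPI v2 description built directly from the generated types.
	doc := &openapi_v2.Document{
		Swagger:  "2.0",
		Info:     &openapi_v2.Info{Title: "Example API", Version: "1.0.0"},
		BasePath: "/api",
		Paths:    &openapi_v2.Paths{},
	}
	fmt.Println(doc.Swagger, doc.Info.Title)
}
```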
|
||||
1610
vendor/github.com/googleapis/gnostic/OpenAPIv2/openapi-2.0.json
generated
vendored
Normal file
1610
vendor/github.com/googleapis/gnostic/OpenAPIv2/openapi-2.0.json
generated
vendored
Normal file
File diff suppressed because it is too large
3
vendor/github.com/googleapis/gnostic/compiler/README.md
generated
vendored
Normal file
3
vendor/github.com/googleapis/gnostic/compiler/README.md
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
# Compiler support code
|
||||
|
||||
This directory contains compiler support code used by Gnostic and Gnostic extensions.
|
||||
5
vendor/github.com/googleapis/gnostic/extensions/COMPILE-EXTENSION.sh
generated
vendored
Normal file
5
vendor/github.com/googleapis/gnostic/extensions/COMPILE-EXTENSION.sh
generated
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
go get github.com/golang/protobuf/protoc-gen-go
|
||||
|
||||
protoc \
|
||||
--go_out=Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any:. *.proto
|
||||
|
||||
5
vendor/github.com/googleapis/gnostic/extensions/README.md
generated
vendored
Normal file
5
vendor/github.com/googleapis/gnostic/extensions/README.md
generated
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
# Extensions
|
||||
|
||||
This directory contains support code for building Gnostic extensions and associated examples.
|
||||
|
||||
Extensions are used to compile vendor or specification extensions into protocol buffer structures.
|
||||
93
vendor/github.com/googleapis/gnostic/extensions/extension.proto
generated
vendored
Normal file
93
vendor/github.com/googleapis/gnostic/extensions/extension.proto
generated
vendored
Normal file
@@ -0,0 +1,93 @@
|
||||
// Copyright 2017 Google Inc. All Rights Reserved.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
syntax = "proto3";
|
||||
|
||||
import "google/protobuf/any.proto";
|
||||
package openapiextension.v1;
|
||||
|
||||
// This option lets the proto compiler generate Java code inside the package
|
||||
// name (see below) instead of inside an outer class. It creates a simpler
|
||||
// developer experience by reducing one level of name nesting and being
|
||||
// consistent with most programming languages that don't support outer classes.
|
||||
option java_multiple_files = true;
|
||||
|
||||
// The Java outer classname should be the filename in UpperCamelCase. This
|
||||
// class is only used to hold proto descriptor, so developers don't need to
|
||||
// work with it directly.
|
||||
option java_outer_classname = "OpenAPIExtensionV1";
|
||||
|
||||
// The Java package name must be proto package name with proper prefix.
|
||||
option java_package = "org.gnostic.v1";
|
||||
|
||||
// A reasonable prefix for the Objective-C symbols generated from the package.
|
||||
// It should at a minimum be 3 characters long, all uppercase, and convention
|
||||
// is to use an abbreviation of the package name. Something short, but
|
||||
// hopefully unique enough to not conflict with things that may come along in
|
||||
// the future. 'GPB' is reserved for the protocol buffer implementation itself.
|
||||
//
|
||||
option objc_class_prefix = "OAE"; // "OpenAPI Extension"
|
||||
|
||||
// The version number of OpenAPI compiler.
|
||||
message Version {
|
||||
int32 major = 1;
|
||||
int32 minor = 2;
|
||||
int32 patch = 3;
|
||||
// A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It should
|
||||
// be empty for mainline stable releases.
|
||||
string suffix = 4;
|
||||
}
|
||||
|
||||
// An encoded Request is written to the ExtensionHandler's stdin.
|
||||
message ExtensionHandlerRequest {
|
||||
|
||||
// The OpenAPI descriptions that were explicitly listed on the command line.
|
||||
// The specifications will appear in the order they are specified to gnostic.
|
||||
Wrapper wrapper = 1;
|
||||
|
||||
// The version number of openapi compiler.
|
||||
Version compiler_version = 3;
|
||||
}
|
||||
|
||||
// The extension writes an encoded ExtensionHandlerResponse to stdout.
|
||||
message ExtensionHandlerResponse {
|
||||
|
||||
// true if the extension is handled by the extension handler; false otherwise
|
||||
bool handled = 1;
|
||||
|
||||
// Error message. If non-empty, the extension handling failed.
|
||||
// The extension handler process should exit with status code zero
|
||||
// even if it reports an error in this way.
|
||||
//
|
||||
// This should be used to indicate errors which prevent the extension from
|
||||
// operating as intended. Errors which indicate a problem in gnostic
|
||||
// itself -- such as the input Document being unparseable -- should be
|
||||
// reported by writing a message to stderr and exiting with a non-zero
|
||||
// status code.
|
||||
repeated string error = 2;
|
||||
|
||||
// text output
|
||||
google.protobuf.Any value = 3;
|
||||
}
|
||||
|
||||
message Wrapper {
|
||||
// version of the OpenAPI specification in which this extension was written.
|
||||
string version = 1;
|
||||
|
||||
// Name of the extension
|
||||
string extension_name = 2;
|
||||
|
||||
// Must be a valid yaml for the proto
|
||||
string yaml = 3;
|
||||
}
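
The comments above describe the whole extension-handler protocol: an encoded ExtensionHandlerRequest arrives on stdin, and an encoded ExtensionHandlerResponse goes back on stdout. A minimal handler skeleton in Go might look like the sketch below; the `openapiextension_v1` package name and the Go field names are assumptions based on the usual protoc-gen-go mapping of this file, not something the proto itself specifies:
```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/golang/protobuf/proto"
	openapiextension_v1 "github.com/googleapis/gnostic/extensions"
)

func main() {
	// An encoded ExtensionHandlerRequest arrives on stdin.
	data, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	request := &openapiextension_v1.ExtensionHandlerRequest{}
	if err := proto.Unmarshal(data, request); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// This skeleton handles nothing; a real handler would inspect
	// request.Wrapper.ExtensionName and request.Wrapper.Yaml here.
	response := &openapiextension_v1.ExtensionHandlerResponse{Handled: false}

	// The encoded ExtensionHandlerResponse goes back on stdout.
	out, err := proto.Marshal(response)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}
```
A real handler would also populate the error field described above when it fails in an expected way, while still exiting with status code zero.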
|
||||
7
vendor/github.com/gregjones/httpcache/LICENSE.txt
generated
vendored
7
vendor/github.com/gregjones/httpcache/LICENSE.txt
generated
vendored
@@ -1,7 +0,0 @@
|
||||
Copyright © 2012 Greg Jones (greg.jones@gmail.com)
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
61 vendor/github.com/gregjones/httpcache/diskcache/diskcache.go generated vendored
@@ -1,61 +0,0 @@
|
||||
// Package diskcache provides an implementation of httpcache.Cache that uses the diskv package
|
||||
// to supplement an in-memory map with persistent storage
|
||||
//
|
||||
package diskcache
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/md5"
|
||||
"encoding/hex"
|
||||
"github.com/peterbourgon/diskv"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Cache is an implementation of httpcache.Cache that supplements the in-memory map with persistent storage
|
||||
type Cache struct {
|
||||
d *diskv.Diskv
|
||||
}
|
||||
|
||||
// Get returns the response corresponding to key if present
|
||||
func (c *Cache) Get(key string) (resp []byte, ok bool) {
|
||||
key = keyToFilename(key)
|
||||
resp, err := c.d.Read(key)
|
||||
if err != nil {
|
||||
return []byte{}, false
|
||||
}
|
||||
return resp, true
|
||||
}
|
||||
|
||||
// Set saves a response to the cache as key
|
||||
func (c *Cache) Set(key string, resp []byte) {
|
||||
key = keyToFilename(key)
|
||||
c.d.WriteStream(key, bytes.NewReader(resp), true)
|
||||
}
|
||||
|
||||
// Delete removes the response with key from the cache
|
||||
func (c *Cache) Delete(key string) {
|
||||
key = keyToFilename(key)
|
||||
c.d.Erase(key)
|
||||
}
|
||||
|
||||
func keyToFilename(key string) string {
|
||||
h := md5.New()
|
||||
io.WriteString(h, key)
|
||||
return hex.EncodeToString(h.Sum(nil))
|
||||
}
|
||||
|
||||
// New returns a new Cache that will store files in basePath
|
||||
func New(basePath string) *Cache {
|
||||
return &Cache{
|
||||
d: diskv.New(diskv.Options{
|
||||
BasePath: basePath,
|
||||
CacheSizeMax: 100 * 1024 * 1024, // 100MB
|
||||
}),
|
||||
}
|
||||
}
|
||||
|
||||
// NewWithDiskv returns a new Cache using the provided Diskv as underlying
|
||||
// storage.
|
||||
func NewWithDiskv(d *diskv.Diskv) *Cache {
|
||||
return &Cache{d}
|
||||
}
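For context, a Cache built this way is normally handed to httpcache.NewTransport (defined in httpcache.go below). A rough sketch, with a placeholder cache directory:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gregjones/httpcache"
	"github.com/gregjones/httpcache/diskcache"
)

func main() {
	// Back the HTTP cache with on-disk storage instead of the in-memory map.
	cache := diskcache.New("/tmp/httpcache") // placeholder path
	client := httpcache.NewTransport(cache).Client()

	resp, err := client.Get("https://example.com/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Responses served from the cache carry the X-From-Cache header.
	fmt.Println(resp.Header.Get(httpcache.XFromCache))
}
```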
|
||||
551 vendor/github.com/gregjones/httpcache/httpcache.go generated vendored
@@ -1,551 +0,0 @@
|
||||
// Package httpcache provides a http.RoundTripper implementation that works as a
|
||||
// mostly RFC-compliant cache for http responses.
|
||||
//
|
||||
// It is only suitable for use as a 'private' cache (i.e. for a web-browser or an API-client
|
||||
// and not for a shared proxy).
|
||||
//
|
||||
package httpcache
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"errors"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"net/http/httputil"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
stale = iota
|
||||
fresh
|
||||
transparent
|
||||
// XFromCache is the header added to responses that are returned from the cache
|
||||
XFromCache = "X-From-Cache"
|
||||
)
|
||||
|
||||
// A Cache interface is used by the Transport to store and retrieve responses.
|
||||
type Cache interface {
|
||||
// Get returns the []byte representation of a cached response and a bool
|
||||
// set to true if the value isn't empty
|
||||
Get(key string) (responseBytes []byte, ok bool)
|
||||
// Set stores the []byte representation of a response against a key
|
||||
Set(key string, responseBytes []byte)
|
||||
// Delete removes the value associated with the key
|
||||
Delete(key string)
|
||||
}
|
||||
|
||||
// cacheKey returns the cache key for req.
|
||||
func cacheKey(req *http.Request) string {
|
||||
if req.Method == http.MethodGet {
|
||||
return req.URL.String()
|
||||
} else {
|
||||
return req.Method + " " + req.URL.String()
|
||||
}
|
||||
}
|
||||
|
||||
// CachedResponse returns the cached http.Response for req if present, and nil
|
||||
// otherwise.
|
||||
func CachedResponse(c Cache, req *http.Request) (resp *http.Response, err error) {
|
||||
cachedVal, ok := c.Get(cacheKey(req))
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
|
||||
b := bytes.NewBuffer(cachedVal)
|
||||
return http.ReadResponse(bufio.NewReader(b), req)
|
||||
}
|
||||
|
||||
// MemoryCache is an implementation of Cache that stores responses in an in-memory map.
|
||||
type MemoryCache struct {
|
||||
mu sync.RWMutex
|
||||
items map[string][]byte
|
||||
}
|
||||
|
||||
// Get returns the []byte representation of the response and true if present, false if not
|
||||
func (c *MemoryCache) Get(key string) (resp []byte, ok bool) {
|
||||
c.mu.RLock()
|
||||
resp, ok = c.items[key]
|
||||
c.mu.RUnlock()
|
||||
return resp, ok
|
||||
}
|
||||
|
||||
// Set saves response resp to the cache with key
|
||||
func (c *MemoryCache) Set(key string, resp []byte) {
|
||||
c.mu.Lock()
|
||||
c.items[key] = resp
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// Delete removes key from the cache
|
||||
func (c *MemoryCache) Delete(key string) {
|
||||
c.mu.Lock()
|
||||
delete(c.items, key)
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// NewMemoryCache returns a new Cache that will store items in an in-memory map
|
||||
func NewMemoryCache() *MemoryCache {
|
||||
c := &MemoryCache{items: map[string][]byte{}}
|
||||
return c
|
||||
}
|
||||
|
||||
// Transport is an implementation of http.RoundTripper that will return values from a cache
|
||||
// where possible (avoiding a network request) and will additionally add validators (etag/if-modified-since)
|
||||
// to repeated requests allowing servers to return 304 / Not Modified
|
||||
type Transport struct {
|
||||
// The RoundTripper interface actually used to make requests
|
||||
// If nil, http.DefaultTransport is used
|
||||
Transport http.RoundTripper
|
||||
Cache Cache
|
||||
// If true, responses returned from the cache will be given an extra header, X-From-Cache
|
||||
MarkCachedResponses bool
|
||||
}
|
||||
|
||||
// NewTransport returns a new Transport with the
|
||||
// provided Cache implementation and MarkCachedResponses set to true
|
||||
func NewTransport(c Cache) *Transport {
|
||||
return &Transport{Cache: c, MarkCachedResponses: true}
|
||||
}
|
||||
|
||||
// Client returns an *http.Client that caches responses.
|
||||
func (t *Transport) Client() *http.Client {
|
||||
return &http.Client{Transport: t}
|
||||
}
|
||||
|
||||
// varyMatches will return false unless all of the cached values for the headers listed in Vary
|
||||
// match the new request
|
||||
func varyMatches(cachedResp *http.Response, req *http.Request) bool {
|
||||
for _, header := range headerAllCommaSepValues(cachedResp.Header, "vary") {
|
||||
header = http.CanonicalHeaderKey(header)
|
||||
if header != "" && req.Header.Get(header) != cachedResp.Header.Get("X-Varied-"+header) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// RoundTrip takes a Request and returns a Response
|
||||
//
|
||||
// If there is a fresh Response already in cache, then it will be returned without connecting to
|
||||
// the server.
|
||||
//
|
||||
// If there is a stale Response, then any validators it contains will be set on the new request
|
||||
// to give the server a chance to respond with NotModified. If this happens, then the cached Response
|
||||
// will be returned.
|
||||
func (t *Transport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
|
||||
cacheKey := cacheKey(req)
|
||||
cacheable := (req.Method == "GET" || req.Method == "HEAD") && req.Header.Get("range") == ""
|
||||
var cachedResp *http.Response
|
||||
if cacheable {
|
||||
cachedResp, err = CachedResponse(t.Cache, req)
|
||||
} else {
|
||||
// Need to invalidate an existing value
|
||||
t.Cache.Delete(cacheKey)
|
||||
}
|
||||
|
||||
transport := t.Transport
|
||||
if transport == nil {
|
||||
transport = http.DefaultTransport
|
||||
}
|
||||
|
||||
if cacheable && cachedResp != nil && err == nil {
|
||||
if t.MarkCachedResponses {
|
||||
cachedResp.Header.Set(XFromCache, "1")
|
||||
}
|
||||
|
||||
if varyMatches(cachedResp, req) {
|
||||
// Can only use cached value if the new request doesn't Vary significantly
|
||||
freshness := getFreshness(cachedResp.Header, req.Header)
|
||||
if freshness == fresh {
|
||||
return cachedResp, nil
|
||||
}
|
||||
|
||||
if freshness == stale {
|
||||
var req2 *http.Request
|
||||
// Add validators if caller hasn't already done so
|
||||
etag := cachedResp.Header.Get("etag")
|
||||
if etag != "" && req.Header.Get("etag") == "" {
|
||||
req2 = cloneRequest(req)
|
||||
req2.Header.Set("if-none-match", etag)
|
||||
}
|
||||
lastModified := cachedResp.Header.Get("last-modified")
|
||||
if lastModified != "" && req.Header.Get("last-modified") == "" {
|
||||
if req2 == nil {
|
||||
req2 = cloneRequest(req)
|
||||
}
|
||||
req2.Header.Set("if-modified-since", lastModified)
|
||||
}
|
||||
if req2 != nil {
|
||||
req = req2
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
resp, err = transport.RoundTrip(req)
|
||||
if err == nil && req.Method == "GET" && resp.StatusCode == http.StatusNotModified {
|
||||
// Replace the 304 response with the one from cache, but update with some new headers
|
||||
endToEndHeaders := getEndToEndHeaders(resp.Header)
|
||||
for _, header := range endToEndHeaders {
|
||||
cachedResp.Header[header] = resp.Header[header]
|
||||
}
|
||||
resp = cachedResp
|
||||
} else if (err != nil || (cachedResp != nil && resp.StatusCode >= 500)) &&
|
||||
req.Method == "GET" && canStaleOnError(cachedResp.Header, req.Header) {
|
||||
// In case of transport failure and stale-if-error activated, returns cached content
|
||||
// when available
|
||||
return cachedResp, nil
|
||||
} else {
|
||||
if err != nil || resp.StatusCode != http.StatusOK {
|
||||
t.Cache.Delete(cacheKey)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
reqCacheControl := parseCacheControl(req.Header)
|
||||
if _, ok := reqCacheControl["only-if-cached"]; ok {
|
||||
resp = newGatewayTimeoutResponse(req)
|
||||
} else {
|
||||
resp, err = transport.RoundTrip(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if cacheable && canStore(parseCacheControl(req.Header), parseCacheControl(resp.Header)) {
|
||||
for _, varyKey := range headerAllCommaSepValues(resp.Header, "vary") {
|
||||
varyKey = http.CanonicalHeaderKey(varyKey)
|
||||
fakeHeader := "X-Varied-" + varyKey
|
||||
reqValue := req.Header.Get(varyKey)
|
||||
if reqValue != "" {
|
||||
resp.Header.Set(fakeHeader, reqValue)
|
||||
}
|
||||
}
|
||||
switch req.Method {
|
||||
case "GET":
|
||||
// Delay caching until EOF is reached.
|
||||
resp.Body = &cachingReadCloser{
|
||||
R: resp.Body,
|
||||
OnEOF: func(r io.Reader) {
|
||||
resp := *resp
|
||||
resp.Body = ioutil.NopCloser(r)
|
||||
respBytes, err := httputil.DumpResponse(&resp, true)
|
||||
if err == nil {
|
||||
t.Cache.Set(cacheKey, respBytes)
|
||||
}
|
||||
},
|
||||
}
|
||||
default:
|
||||
respBytes, err := httputil.DumpResponse(resp, true)
|
||||
if err == nil {
|
||||
t.Cache.Set(cacheKey, respBytes)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
t.Cache.Delete(cacheKey)
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
// ErrNoDateHeader indicates that the HTTP headers contained no Date header.
|
||||
var ErrNoDateHeader = errors.New("no Date header")
|
||||
|
||||
// Date parses and returns the value of the Date header.
|
||||
func Date(respHeaders http.Header) (date time.Time, err error) {
|
||||
dateHeader := respHeaders.Get("date")
|
||||
if dateHeader == "" {
|
||||
err = ErrNoDateHeader
|
||||
return
|
||||
}
|
||||
|
||||
return time.Parse(time.RFC1123, dateHeader)
|
||||
}
|
||||
|
||||
type realClock struct{}
|
||||
|
||||
func (c *realClock) since(d time.Time) time.Duration {
|
||||
return time.Since(d)
|
||||
}
|
||||
|
||||
type timer interface {
|
||||
since(d time.Time) time.Duration
|
||||
}
|
||||
|
||||
var clock timer = &realClock{}
|
||||
|
||||
// getFreshness will return one of fresh/stale/transparent based on the cache-control
|
||||
// values of the request and the response
|
||||
//
|
||||
// fresh indicates the response can be returned
|
||||
// stale indicates that the response needs validating before it is returned
|
||||
// transparent indicates the response should not be used to fulfil the request
|
||||
//
|
||||
// Because this is only a private cache, 'public' and 'private' in cache-control aren't
|
||||
// significant. Similarly, s-maxage isn't used.
|
||||
func getFreshness(respHeaders, reqHeaders http.Header) (freshness int) {
|
||||
respCacheControl := parseCacheControl(respHeaders)
|
||||
reqCacheControl := parseCacheControl(reqHeaders)
|
||||
if _, ok := reqCacheControl["no-cache"]; ok {
|
||||
return transparent
|
||||
}
|
||||
if _, ok := respCacheControl["no-cache"]; ok {
|
||||
return stale
|
||||
}
|
||||
if _, ok := reqCacheControl["only-if-cached"]; ok {
|
||||
return fresh
|
||||
}
|
||||
|
||||
date, err := Date(respHeaders)
|
||||
if err != nil {
|
||||
return stale
|
||||
}
|
||||
currentAge := clock.since(date)
|
||||
|
||||
var lifetime time.Duration
|
||||
var zeroDuration time.Duration
|
||||
|
||||
// If a response includes both an Expires header and a max-age directive,
|
||||
// the max-age directive overrides the Expires header, even if the Expires header is more restrictive.
|
||||
if maxAge, ok := respCacheControl["max-age"]; ok {
|
||||
lifetime, err = time.ParseDuration(maxAge + "s")
|
||||
if err != nil {
|
||||
lifetime = zeroDuration
|
||||
}
|
||||
} else {
|
||||
expiresHeader := respHeaders.Get("Expires")
|
||||
if expiresHeader != "" {
|
||||
expires, err := time.Parse(time.RFC1123, expiresHeader)
|
||||
if err != nil {
|
||||
lifetime = zeroDuration
|
||||
} else {
|
||||
lifetime = expires.Sub(date)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if maxAge, ok := reqCacheControl["max-age"]; ok {
|
||||
// the client is willing to accept a response whose age is no greater than the specified time in seconds
|
||||
lifetime, err = time.ParseDuration(maxAge + "s")
|
||||
if err != nil {
|
||||
lifetime = zeroDuration
|
||||
}
|
||||
}
|
||||
if minfresh, ok := reqCacheControl["min-fresh"]; ok {
|
||||
// the client wants a response that will still be fresh for at least the specified number of seconds.
|
||||
minfreshDuration, err := time.ParseDuration(minfresh + "s")
|
||||
if err == nil {
|
||||
currentAge = time.Duration(currentAge + minfreshDuration)
|
||||
}
|
||||
}
|
||||
|
||||
if maxstale, ok := reqCacheControl["max-stale"]; ok {
|
||||
// Indicates that the client is willing to accept a response that has exceeded its expiration time.
|
||||
// If max-stale is assigned a value, then the client is willing to accept a response that has exceeded
|
||||
// its expiration time by no more than the specified number of seconds.
|
||||
// If no value is assigned to max-stale, then the client is willing to accept a stale response of any age.
|
||||
//
|
||||
// Responses served only because of a max-stale value are supposed to have a Warning header added to them,
|
||||
// but that seems like a hassle, and is it actually useful? If so, then there needs to be a different
|
||||
// return-value available here.
|
||||
if maxstale == "" {
|
||||
return fresh
|
||||
}
|
||||
maxstaleDuration, err := time.ParseDuration(maxstale + "s")
|
||||
if err == nil {
|
||||
currentAge = time.Duration(currentAge - maxstaleDuration)
|
||||
}
|
||||
}
|
||||
|
||||
if lifetime > currentAge {
|
||||
return fresh
|
||||
}
|
||||
|
||||
return stale
|
||||
}
|
||||
|
||||
// Returns true if either the request or the response includes the stale-if-error
|
||||
// cache control extension: https://tools.ietf.org/html/rfc5861
|
||||
func canStaleOnError(respHeaders, reqHeaders http.Header) bool {
|
||||
respCacheControl := parseCacheControl(respHeaders)
|
||||
reqCacheControl := parseCacheControl(reqHeaders)
|
||||
|
||||
var err error
|
||||
lifetime := time.Duration(-1)
|
||||
|
||||
if staleMaxAge, ok := respCacheControl["stale-if-error"]; ok {
|
||||
if staleMaxAge != "" {
|
||||
lifetime, err = time.ParseDuration(staleMaxAge + "s")
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
} else {
|
||||
return true
|
||||
}
|
||||
}
|
||||
if staleMaxAge, ok := reqCacheControl["stale-if-error"]; ok {
|
||||
if staleMaxAge != "" {
|
||||
lifetime, err = time.ParseDuration(staleMaxAge + "s")
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
} else {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
if lifetime >= 0 {
|
||||
date, err := Date(respHeaders)
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
currentAge := clock.since(date)
|
||||
if lifetime > currentAge {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
func getEndToEndHeaders(respHeaders http.Header) []string {
|
||||
// These headers are always hop-by-hop
|
||||
hopByHopHeaders := map[string]struct{}{
|
||||
"Connection": {},
|
||||
"Keep-Alive": {},
|
||||
"Proxy-Authenticate": {},
|
||||
"Proxy-Authorization": {},
|
||||
"Te": {},
|
||||
"Trailers": {},
|
||||
"Transfer-Encoding": {},
|
||||
"Upgrade": {},
|
||||
}
|
||||
|
||||
for _, extra := range strings.Split(respHeaders.Get("connection"), ",") {
|
||||
// any header listed in connection, if present, is also considered hop-by-hop
|
||||
if strings.Trim(extra, " ") != "" {
|
||||
hopByHopHeaders[http.CanonicalHeaderKey(extra)] = struct{}{}
|
||||
}
|
||||
}
|
||||
endToEndHeaders := []string{}
|
||||
for respHeader := range respHeaders {
|
||||
if _, ok := hopByHopHeaders[respHeader]; !ok {
|
||||
endToEndHeaders = append(endToEndHeaders, respHeader)
|
||||
}
|
||||
}
|
||||
return endToEndHeaders
|
||||
}
|
||||
|
||||
func canStore(reqCacheControl, respCacheControl cacheControl) (canStore bool) {
|
||||
if _, ok := respCacheControl["no-store"]; ok {
|
||||
return false
|
||||
}
|
||||
if _, ok := reqCacheControl["no-store"]; ok {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func newGatewayTimeoutResponse(req *http.Request) *http.Response {
|
||||
var braw bytes.Buffer
|
||||
braw.WriteString("HTTP/1.1 504 Gateway Timeout\r\n\r\n")
|
||||
resp, err := http.ReadResponse(bufio.NewReader(&braw), req)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return resp
|
||||
}
|
||||
|
||||
// cloneRequest returns a clone of the provided *http.Request.
|
||||
// The clone is a shallow copy of the struct and its Header map.
|
||||
// (This function copyright goauth2 authors: https://code.google.com/p/goauth2)
|
||||
func cloneRequest(r *http.Request) *http.Request {
|
||||
// shallow copy of the struct
|
||||
r2 := new(http.Request)
|
||||
*r2 = *r
|
||||
// deep copy of the Header
|
||||
r2.Header = make(http.Header)
|
||||
for k, s := range r.Header {
|
||||
r2.Header[k] = s
|
||||
}
|
||||
return r2
|
||||
}
|
||||
|
||||
type cacheControl map[string]string
|
||||
|
||||
func parseCacheControl(headers http.Header) cacheControl {
|
||||
cc := cacheControl{}
|
||||
ccHeader := headers.Get("Cache-Control")
|
||||
for _, part := range strings.Split(ccHeader, ",") {
|
||||
part = strings.Trim(part, " ")
|
||||
if part == "" {
|
||||
continue
|
||||
}
|
||||
if strings.ContainsRune(part, '=') {
|
||||
keyval := strings.Split(part, "=")
|
||||
cc[strings.Trim(keyval[0], " ")] = strings.Trim(keyval[1], ",")
|
||||
} else {
|
||||
cc[part] = ""
|
||||
}
|
||||
}
|
||||
return cc
|
||||
}
|
||||
|
||||
// headerAllCommaSepValues returns all comma-separated values (each
|
||||
// with whitespace trimmed) for header name in headers. According to
|
||||
// Section 4.2 of the HTTP/1.1 spec
|
||||
// (http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2),
|
||||
// values from multiple occurrences of a header should be concatenated, if
|
||||
// the header's value is a comma-separated list.
|
||||
func headerAllCommaSepValues(headers http.Header, name string) []string {
|
||||
var vals []string
|
||||
for _, val := range headers[http.CanonicalHeaderKey(name)] {
|
||||
fields := strings.Split(val, ",")
|
||||
for i, f := range fields {
|
||||
fields[i] = strings.TrimSpace(f)
|
||||
}
|
||||
vals = append(vals, fields...)
|
||||
}
|
||||
return vals
|
||||
}
|
||||
|
||||
// cachingReadCloser is a wrapper around ReadCloser R that calls OnEOF
|
||||
// handler with a full copy of the content read from R when EOF is
|
||||
// reached.
|
||||
type cachingReadCloser struct {
|
||||
// Underlying ReadCloser.
|
||||
R io.ReadCloser
|
||||
// OnEOF is called with a copy of the content of R when EOF is reached.
|
||||
OnEOF func(io.Reader)
|
||||
|
||||
buf bytes.Buffer // buf stores a copy of the content of R.
|
||||
}
|
||||
|
||||
// Read reads the next len(p) bytes from R or until R is drained. The
|
||||
// return value n is the number of bytes read. If R has no data to
|
||||
// return, err is io.EOF and OnEOF is called with a full copy of what
|
||||
// has been read so far.
|
||||
func (r *cachingReadCloser) Read(p []byte) (n int, err error) {
|
||||
n, err = r.R.Read(p)
|
||||
r.buf.Write(p[:n])
|
||||
if err == io.EOF {
|
||||
r.OnEOF(bytes.NewReader(r.buf.Bytes()))
|
||||
}
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (r *cachingReadCloser) Close() error {
|
||||
return r.R.Close()
|
||||
}
|
||||
|
||||
// NewMemoryCacheTransport returns a new Transport using the in-memory cache implementation
|
||||
func NewMemoryCacheTransport() *Transport {
|
||||
c := NewMemoryCache()
|
||||
t := NewTransport(c)
|
||||
return t
|
||||
}
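Putting the pieces of this file together, a caching HTTP client is obtained from a Transport. The sketch below sticks to the constructors defined above and uses a placeholder URL.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/gregjones/httpcache"
)

func main() {
	// NewMemoryCacheTransport wires a MemoryCache into a Transport with
	// MarkCachedResponses enabled, so cache hits carry X-From-Cache: 1.
	client := httpcache.NewMemoryCacheTransport().Client()

	for i := 0; i < 2; i++ {
		resp, err := client.Get("https://example.com/") // placeholder URL
		if err != nil {
			log.Fatal(err)
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("request %d: %d bytes, from cache: %q\n",
			i+1, len(body), resp.Header.Get(httpcache.XFromCache))
	}
}
```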
|
||||
196 vendor/github.com/h2non/parth/README.md generated vendored Normal file
@@ -0,0 +1,196 @@
|
||||
# parth
|
||||
|
||||
go get -u github.com/h2non/parth
|
||||
|
||||
Package parth provides path parsing for segment unmarshaling and slicing. In
|
||||
other words, parth provides simple and flexible access to (URL) path parameters.
|
||||
|
||||
Along with string, all basic non-alias types are supported. An interface is
|
||||
available for implementation by user-defined types. When handling an int, uint,
|
||||
or float of any size, the first valid value within the specified segment will be
|
||||
used.
|
||||
|
||||
## Usage
|
||||
|
||||
```go
|
||||
Variables
|
||||
func Segment(path string, i int, v interface{}) error
|
||||
func Sequent(path, key string, v interface{}) error
|
||||
func Span(path string, i, j int) (string, error)
|
||||
func SubSeg(path, key string, i int, v interface{}) error
|
||||
func SubSpan(path, key string, i, j int) (string, error)
|
||||
type Parth
|
||||
func New(path string) *Parth
|
||||
func NewBySpan(path string, i, j int) *Parth
|
||||
func NewBySubSpan(path, key string, i, j int) *Parth
|
||||
func (p *Parth) Err() error
|
||||
func (p *Parth) Segment(i int, v interface{})
|
||||
func (p *Parth) Sequent(key string, v interface{})
|
||||
func (p *Parth) Span(i, j int) string
|
||||
func (p *Parth) SubSeg(key string, i int, v interface{})
|
||||
func (p *Parth) SubSpan(key string, i, j int) string
|
||||
type Unmarshaler
|
||||
```
|
||||
|
||||
### Setup ("By Index")
|
||||
|
||||
```go
|
||||
import (
	"fmt"
	"net/http"
	"os"

	"github.com/codemodus/parth"
)
|
||||
|
||||
func handler(w http.ResponseWriter, r *http.Request) {
|
||||
var s string
|
||||
if err := parth.Segment(r.URL.Path, 4, &s); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
|
||||
fmt.Println(r.URL.Path)
|
||||
fmt.Printf("%v (%T)\n", s, s)
|
||||
|
||||
// Output:
|
||||
// /some/path/things/42/others/3
|
||||
// others (string)
|
||||
}
|
||||
```
|
||||
|
||||
### Setup ("By Key")
|
||||
|
||||
```go
|
||||
import (
	"fmt"
	"net/http"
	"os"

	"github.com/codemodus/parth"
)
|
||||
|
||||
func handler(w http.ResponseWriter, r *http.Request) {
|
||||
var i int64
|
||||
if err := parth.Sequent(r.URL.Path, "things", &i); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
|
||||
fmt.Println(r.URL.Path)
|
||||
fmt.Printf("%v (%T)\n", i, i)
|
||||
|
||||
// Output:
|
||||
// /some/path/things/42/others/3
|
||||
// 42 (int64)
|
||||
}
|
||||
```
|
||||
|
||||
### Setup (Parth Type)
|
||||
|
||||
```go
|
||||
import (
	"fmt"
	"net/http"
	"os"

	"github.com/codemodus/parth"
)
|
||||
|
||||
func handler(w http.ResponseWriter, r *http.Request) {
|
||||
var s string
|
||||
var f float32
|
||||
|
||||
p := parth.New(r.URL.Path)
|
||||
p.Segment(2, &s)
|
||||
p.SubSeg("key", 1, &f)
|
||||
if err := p.Err(); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
|
||||
fmt.Println(r.URL.Path)
|
||||
fmt.Printf("%v (%T)\n", s, s)
|
||||
fmt.Printf("%v (%T)\n", f, f)
|
||||
|
||||
// Output:
|
||||
// /zero/one/two/key/four/5.5/six
|
||||
// two (string)
|
||||
// 5.5 (float32)
|
||||
}
|
||||
```
|
||||
|
||||
### Setup (Unmarshaler)
|
||||
|
||||
```go
|
||||
import (
	"fmt"
	"net/http"
	"os"

	"github.com/codemodus/parth"
)
|
||||
|
||||
func handler(w http.ResponseWriter, r *http.Request) {
|
||||
/*
|
||||
type mytype []byte
|
||||
|
||||
func (m *mytype) UnmarshalSegment(seg string) error {
	*m = []byte(seg)
	return nil
}
|
||||
*/
|
||||
|
||||
var m mytype
|
||||
if err := parth.Segment(r.URL.Path, 4, &m); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
|
||||
fmt.Println(r.URL.Path)
|
||||
fmt.Printf("%v == %q (%T)\n", m, m, m)
|
||||
|
||||
// Output:
|
||||
// /zero/one/two/key/four/5.5/six
|
||||
// [102 111 117 114] == "four" (mypkg.mytype)
|
||||
}
|
||||
```
|
||||
|
||||
## More Info
|
||||
|
||||
### Keep Using http.HandlerFunc And Minimize context.Context Usage
|
||||
|
||||
The most obvious use case for parth is when working with any URL path such as
|
||||
the one found at http.Request.URL.Path. parth is fast enough that it can be used
|
||||
multiple times in place of a single use of similar router-parameter schemes or
|
||||
even context.Context. There is no need to use an alternate http handler function
|
||||
definition in order to pass data that is already being passed. The http.Request
|
||||
type already holds URL data and parth is great at handling it. Additionally,
|
||||
parth takes care of parsing selected path segments into the types actually
|
||||
needed. Parth not only does more, it's usually faster and less intrusive than
|
||||
the alternatives.
|
||||
|
||||
### Indexes
|
||||
|
||||
If an index is negative, the negative count begins with the last segment.
|
||||
Providing a 0 for the second index is a special case which acts as an alias for
|
||||
the end of the path. An error is returned if: 1. Any index is out of range of
|
||||
the path; 2. When there are two indexes, the first index does not precede the
|
||||
second index.
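A short sketch of these index rules using Span; the import path follows the go get line above, and the exact segment boundaries are governed by the rules just described.

```go
package main

import (
	"fmt"
	"log"

	"github.com/h2non/parth"
)

func main() {
	path := "/zero/one/two/three/four"

	// A negative first index counts back from the last segment;
	// a second index of 0 is an alias for the end of the path.
	last, err := parth.Span(path, -1, 0)
	if err != nil {
		log.Fatal(err)
	}

	tail, err := parth.Span(path, 2, 0)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(last)
	fmt.Println(tail)
}
```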
|
||||
|
||||
### Keys
|
||||
|
||||
If a key is involved, functions will only handle the portion of the path
|
||||
subsequent to the provided key. An error is returned if the key cannot be found
|
||||
in the path.
|
||||
|
||||
### First Whole, First Decimal (Restated - Important!)
|
||||
|
||||
When handling an int, uint, or float of any size, the first valid value within
|
||||
the specified segment will be used.
|
||||
|
||||
## Documentation
|
||||
|
||||
View the [GoDoc](http://godoc.org/github.com/codemodus/parth)
|
||||
|
||||
## Benchmarks
|
||||
|
||||
Go 1.11

| benchmark                      | iter      | time/iter     | bytes alloc | allocs      |
| ------------------------------ | --------- | ------------- | ----------- | ----------- |
| BenchmarkSegmentString-8       | 30000000  | 39.60 ns/op   | 0 B/op      | 0 allocs/op |
| BenchmarkSegmentInt-8          | 20000000  | 65.60 ns/op   | 0 B/op      | 0 allocs/op |
| BenchmarkSegmentIntNegIndex-8  | 20000000  | 86.60 ns/op   | 0 B/op      | 0 allocs/op |
| BenchmarkSpan-8                | 100000000 | 18.20 ns/op   | 0 B/op      | 0 allocs/op |
| BenchmarkStdlibSegmentString-8 | 5000000   | 454.00 ns/op  | 50 B/op     | 2 allocs/op |
| BenchmarkStdlibSegmentInt-8    | 3000000   | 526.00 ns/op  | 50 B/op     | 2 allocs/op |
| BenchmarkStdlibSpan-8          | 3000000   | 518.00 ns/op  | 69 B/op     | 2 allocs/op |
| BenchmarkContextLookupSetGet-8 | 1000000   | 1984.00 ns/op | 480 B/op    | 6 allocs/op |
|
||||
|
||||
1 vendor/github.com/h2non/parth/go.mod generated vendored Normal file
@@ -0,0 +1 @@
|
||||
module github.com/h2non/parth
|
||||
67 vendor/github.com/hashicorp/consul/api/README.md generated vendored Normal file
@@ -0,0 +1,67 @@
|
||||
Consul API client
|
||||
=================
|
||||
|
||||
This package provides the `api` package which attempts to
|
||||
provide programmatic access to the full Consul API.
|
||||
|
||||
Currently, all of the Consul APIs included in version 0.6.0 are supported.
|
||||
|
||||
Documentation
|
||||
=============
|
||||
|
||||
The full documentation is available on [Godoc](https://godoc.org/github.com/hashicorp/consul/api)
|
||||
|
||||
Usage
|
||||
=====
|
||||
|
||||
Below is an example of using the Consul client:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import "github.com/hashicorp/consul/api"
|
||||
import "fmt"
|
||||
|
||||
func main() {
|
||||
// Get a new client
|
||||
client, err := api.NewClient(api.DefaultConfig())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Get a handle to the KV API
|
||||
kv := client.KV()
|
||||
|
||||
// PUT a new KV pair
|
||||
p := &api.KVPair{Key: "REDIS_MAXCLIENTS", Value: []byte("1000")}
|
||||
_, err = kv.Put(p, nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Lookup the pair
|
||||
pair, _, err := kv.Get("REDIS_MAXCLIENTS", nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
fmt.Printf("KV: %v %s\n", pair.Key, pair.Value)
|
||||
}
|
||||
```
|
||||
|
||||
To run this example, start a Consul server:
|
||||
|
||||
```bash
|
||||
consul agent -dev
|
||||
```
|
||||
|
||||
Copy the code above into a file such as `main.go`.
|
||||
|
||||
Install and run. You'll see a key (`REDIS_MAXCLIENTS`) and value (`1000`) printed.
|
||||
|
||||
```bash
|
||||
$ go get
|
||||
$ go run main.go
|
||||
KV: REDIS_MAXCLIENTS 1000
|
||||
```
|
||||
|
||||
After running the code, you can also view the values in the Consul UI on your local machine at http://localhost:8500/ui/dc1/kv
|
||||
10 vendor/github.com/hashicorp/consul/ui-v2/app/styles/components/notice.scss generated vendored
@@ -1,10 +0,0 @@
|
||||
@import './notice/index';
|
||||
.notice.warning {
|
||||
@extend %notice-warning;
|
||||
}
|
||||
.notice.info {
|
||||
@extend %notice-info;
|
||||
}
|
||||
.notice.policy-management {
|
||||
@extend %notice-highlight;
|
||||
}
|
||||
21 vendor/github.com/hashicorp/consul/ui-v2/lib/block-slots/LICENSE.md generated vendored
@@ -1,21 +0,0 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2016 Ciena Corporation.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
10 vendor/github.com/hashicorp/consul/website/LICENSE.md generated vendored
@@ -1,10 +0,0 @@
|
||||
# Proprietary License
|
||||
|
||||
This license is temporary while a more official one is drafted. However,
|
||||
this should make it clear:
|
||||
|
||||
The text contents of this website are MPL 2.0 licensed.
|
||||
|
||||
The design contents of this website are proprietary and may not be reproduced
|
||||
or reused in any way other than to run the website locally. The license for
|
||||
the design is owned solely by HashiCorp, Inc.
|
||||
145 vendor/github.com/hashicorp/consul/website/source/api/operator/license.html.md generated vendored
@@ -1,145 +0,0 @@
|
||||
---
|
||||
layout: api
|
||||
page_title: License - Operator - HTTP API
|
||||
sidebar_current: api-operator-license
|
||||
description: |-
|
||||
The /operator/license endpoints allow for setting and retrieving the Consul
|
||||
Enterprise License.
|
||||
---
|
||||
|
||||
# License - Operator HTTP API
|
||||
|
||||
~> **Enterprise Only!** This API endpoint and functionality only exists in
|
||||
Consul Enterprise. This is not present in the open source version of Consul.
|
||||
|
||||
The licensing functionality described here is available only in
|
||||
[Consul Enterprise](https://www.hashicorp.com/products/consul/) version 1.1.0 and later.
|
||||
|
||||
## Getting the Consul License
|
||||
|
||||
This endpoint gets information about the current license.
|
||||
|
||||
| Method | Path | Produces |
|
||||
| ------ | ---------------------------- | -------------------------- |
|
||||
| `GET` | `/operator/license` | `application/json` |
|
||||
|
||||
The table below shows this endpoint's support for
|
||||
[blocking queries](/api/index.html#blocking-queries),
|
||||
[consistency modes](/api/index.html#consistency-modes),
|
||||
[agent caching](/api/index.html#agent-caching), and
|
||||
[required ACLs](/api/index.html#acls).
|
||||
|
||||
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
|
||||
| ---------------- | ----------------- | ------------- | ---------------- |
|
||||
| `NO` | `all` | `none` | `none` |
|
||||
|
||||
### Parameters
|
||||
|
||||
- `dc` `(string: "")` - Specifies the datacenter whose license should be retrieved.
|
||||
This will default to the datacenter of the agent serving the HTTP request.
|
||||
This is specified as a URL query parameter.
|
||||
|
||||
### Sample Request
|
||||
|
||||
```text
|
||||
$ curl \
|
||||
http://127.0.0.1:8500/v1/operator/license
|
||||
```
|
||||
|
||||
### Sample Response
|
||||
|
||||
```json
|
||||
{
|
||||
"Valid": true,
|
||||
"License": {
|
||||
"license_id": "2afbf681-0d1a-0649-cb6c-333ec9f0989c",
|
||||
"customer_id": "0259271d-8ffc-e85e-0830-c0822c1f5f2b",
|
||||
"installation_id": "*",
|
||||
"issue_time": "2018-05-21T20:03:35.911567355Z",
|
||||
"start_time": "2018-05-21T04:00:00Z",
|
||||
"expiration_time": "2019-05-22T03:59:59.999Z",
|
||||
"product": "consul",
|
||||
"flags": {
|
||||
"package": "premium"
|
||||
},
|
||||
"features": [
|
||||
"Automated Backups",
|
||||
"Automated Upgrades",
|
||||
"Enhanced Read Scalability",
|
||||
"Network Segments",
|
||||
"Redundancy Zone",
|
||||
"Advanced Network Federation"
|
||||
],
|
||||
"temporary": false
|
||||
},
|
||||
"Warnings": []
|
||||
}
|
||||
```
|
||||
|
||||
## Updating the Consul License
|
||||
|
||||
This endpoint updates the Consul license and returns some of the
|
||||
license contents as well as any warning messages regarding its validity.
|
||||
|
||||
| Method | Path | Produces |
|
||||
| ------ | ---------------------------- | -------------------------- |
|
||||
| `PUT` | `/operator/license` | `application/json` |
|
||||
|
||||
The table below shows this endpoint's support for
|
||||
[blocking queries](/api/index.html#blocking-queries),
|
||||
[consistency modes](/api/index.html#consistency-modes),
|
||||
[agent caching](/api/index.html#agent-caching), and
|
||||
[required ACLs](/api/index.html#acls).
|
||||
|
||||
| Blocking Queries | Consistency Modes | Agent Caching | ACL Required |
|
||||
| ---------------- | ----------------- | ------------- | ---------------- |
|
||||
| `NO` | `none` | `none` | `operator:write` |
|
||||
|
||||
### Parameters
|
||||
|
||||
- `dc` `(string: "")` - Specifies the datacenter whose license should be updated.
|
||||
This will default to the datacenter of the agent serving the HTTP request.
|
||||
This is specified as a URL query parameter.
|
||||
|
||||
### Sample Payload
|
||||
|
||||
The payload is the raw license blob.
|
||||
|
||||
### Sample Request
|
||||
|
||||
```text
|
||||
$ curl \
|
||||
--request PUT \
|
||||
--data @consul.license \
|
||||
http://127.0.0.1:8500/v1/operator/license
|
||||
```
|
||||
|
||||
### Sample Response
|
||||
|
||||
```json
|
||||
{
|
||||
"Valid": true,
|
||||
"License": {
|
||||
"license_id": "2afbf681-0d1a-0649-cb6c-333ec9f0989c",
|
||||
"customer_id": "0259271d-8ffc-e85e-0830-c0822c1f5f2b",
|
||||
"installation_id": "*",
|
||||
"issue_time": "2018-05-21T20:03:35.911567355Z",
|
||||
"start_time": "2018-05-21T04:00:00Z",
|
||||
"expiration_time": "2019-05-22T03:59:59.999Z",
|
||||
"product": "consul",
|
||||
"flags": {
|
||||
"package": "premium"
|
||||
},
|
||||
"features": [
|
||||
"Automated Backups",
|
||||
"Automated Upgrades",
|
||||
"Enhanced Read Scalability",
|
||||
"Network Segments",
|
||||
"Redundancy Zone",
|
||||
"Advanced Network Federation"
|
||||
],
|
||||
"temporary": false
|
||||
},
|
||||
"Warnings": []
|
||||
}
|
||||
```
|
||||
109 vendor/github.com/hashicorp/consul/website/source/docs/commands/license.html.markdown.erb generated vendored
@@ -1,109 +0,0 @@
|
||||
---
|
||||
layout: "docs"
|
||||
page_title: "Commands: License"
|
||||
sidebar_current: "docs-commands-license"
|
||||
description: >
|
||||
The license command provides datacenter-level management of the Consul Enterprise license.
|
||||
---
|
||||
|
||||
# Consul License
|
||||
|
||||
Command: `consul license`
|
||||
|
||||
<%= enterprise_alert :consul %>
|
||||
|
||||
The `license` command provides datacenter-level management of the Consul Enterprise license. This was added in Consul 1.1.0.
|
||||
|
||||
If ACLs are enabled then a token with operator privileges may be required in
|
||||
order to use this command. Requests are forwarded internally to the leader
|
||||
if required, so this can be run from any Consul node in a cluster. See the
|
||||
[ACL Guide](/docs/guides/acl.html#operator) for more information.
|
||||
|
||||
|
||||
```text
|
||||
Usage: consul license <subcommand> [options] [args]
|
||||
|
||||
This command has subcommands for managing the Consul Enterprise license
|
||||
Here are some simple examples, and more detailed examples are
|
||||
available in the subcommands or the documentation.
|
||||
|
||||
Install a new license from a file:
|
||||
|
||||
$ consul license put @consul.license
|
||||
|
||||
Install a new license from stdin:
|
||||
|
||||
$ consul license put -
|
||||
|
||||
Install a new license from a string:
|
||||
|
||||
$ consul license put "<license blob>"
|
||||
|
||||
Retrieve the current license:
|
||||
|
||||
$ consul license get
|
||||
|
||||
For more examples, ask for subcommand help or view the documentation.
|
||||
|
||||
Subcommands:
|
||||
get Get the current license
|
||||
put Puts a new license in the datacenter
|
||||
```
|
||||
|
||||
## put
|
||||
|
||||
This command sets the Consul Enterprise license.
|
||||
|
||||
Usage: `consul license put [options] LICENSE`
|
||||
|
||||
#### API Options
|
||||
|
||||
<%= partial "docs/commands/http_api_options_client" %>
|
||||
<%= partial "docs/commands/http_api_options_server" %>
|
||||
|
||||
The output looks like this:
|
||||
|
||||
```
|
||||
License is valid
|
||||
License ID: 2afbf681-0d1a-0649-cb6c-333ec9f0989c
|
||||
Customer ID: 0259271d-8ffc-e85e-0830-c0822c1f5f2b
|
||||
Expires At: 2019-05-22 03:59:59.999 +0000 UTC
|
||||
Datacenter: *
|
||||
Package: premium
|
||||
Licensed Features:
|
||||
Automated Backups
|
||||
Automated Upgrades
|
||||
Enhanced Read Scalability
|
||||
Network Segments
|
||||
Redundancy Zone
|
||||
Advanced Network Federation
|
||||
```
|
||||
|
||||
## get
|
||||
|
||||
This command gets the Consul Enterprise license.
|
||||
|
||||
Usage: `consul license get [options]`
|
||||
|
||||
#### API Options
|
||||
|
||||
<%= partial "docs/commands/http_api_options_client" %>
|
||||
<%= partial "docs/commands/http_api_options_server" %>
|
||||
|
||||
The output looks like this:
|
||||
|
||||
```
|
||||
License is valid
|
||||
License ID: 2afbf681-0d1a-0649-cb6c-333ec9f0989c
|
||||
Customer ID: 0259271d-8ffc-e85e-0830-c0822c1f5f2b
|
||||
Expires At: 2019-05-22 03:59:59.999 +0000 UTC
|
||||
Datacenter: *
|
||||
Package: premium
|
||||
Licensed Features:
|
||||
Automated Backups
|
||||
Automated Upgrades
|
||||
Enhanced Read Scalability
|
||||
Network Segments
|
||||
Redundancy Zone
|
||||
Advanced Network Federation
|
||||
```
|
||||
89 vendor/github.com/hashicorp/errwrap/README.md generated vendored Normal file
@@ -0,0 +1,89 @@
|
||||
# errwrap
|
||||
|
||||
`errwrap` is a package for Go that formalizes the pattern of wrapping errors
|
||||
and checking if an error contains another error.
|
||||
|
||||
There is a common pattern in Go of taking a returned `error` value and
|
||||
then wrapping it (such as with `fmt.Errorf`) before returning it. The problem
|
||||
with this pattern is that you completely lose the original `error` structure.
|
||||
|
||||
Arguably the _correct_ approach is that you should make a custom structure
|
||||
implementing the `error` interface, and have the original error as a field
|
||||
on that structure, such [as this example](http://golang.org/pkg/os/#PathError).
|
||||
This is a good approach, but you have to know the entire chain of possible
|
||||
rewrapping that happens, when you might just care about one.
|
||||
|
||||
`errwrap` formalizes this pattern (it doesn't matter what approach you use
|
||||
above) by giving a single interface for wrapping errors, checking if a specific
|
||||
error is wrapped, and extracting that error.
|
||||
|
||||
## Installation and Docs
|
||||
|
||||
Install using `go get github.com/hashicorp/errwrap`.
|
||||
|
||||
Full documentation is available at
|
||||
http://godoc.org/github.com/hashicorp/errwrap
|
||||
|
||||
## Usage
|
||||
|
||||
#### Basic Usage
|
||||
|
||||
Below is a very basic example of its usage:
|
||||
|
||||
```go
|
||||
// A function that always returns an error, but wraps it, like a real
|
||||
// function might.
|
||||
func tryOpen() error {
|
||||
_, err := os.Open("/i/dont/exist")
|
||||
if err != nil {
|
||||
return errwrap.Wrapf("Doesn't exist: {{err}}", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func main() {
|
||||
err := tryOpen()
|
||||
|
||||
// We can use the Contains helpers to check if an error contains
|
||||
// another error. It is safe to do this with a nil error, or with
|
||||
// an error that doesn't even use the errwrap package.
|
||||
if errwrap.Contains(err, "does not exist") {
|
||||
// Do something
|
||||
}
|
||||
if errwrap.ContainsType(err, new(os.PathError)) {
|
||||
// Do something
|
||||
}
|
||||
|
||||
// Or we can use the associated `Get` functions to just extract
|
||||
// a specific error. This would return nil if that specific error doesn't
|
||||
// exist.
|
||||
perr := errwrap.GetType(err, new(os.PathError))
|
||||
}
|
||||
```
|
||||
|
||||
#### Custom Types
|
||||
|
||||
If you're already making custom types that properly wrap errors, then
|
||||
you can get all the functionality of `errwraps.Contains` and such by
|
||||
implementing the `Wrapper` interface with just one function. Example:
|
||||
|
||||
```go
|
||||
type AppError struct {
	Code ErrorCode
	Err  error
}
|
||||
|
||||
func (e *AppError) WrappedErrors() []error {
|
||||
return []error{e.Err}
|
||||
}
|
||||
```
|
||||
|
||||
Now this works:
|
||||
|
||||
```go
|
||||
err := &AppError{Err: fmt.Errorf("an error")}
|
||||
if errwrap.ContainsType(err, fmt.Errorf("")) {
|
||||
// This will work!
|
||||
}
|
||||
```
|
||||
1 vendor/github.com/hashicorp/errwrap/go.mod generated vendored Normal file
@@ -0,0 +1 @@
|
||||
module github.com/hashicorp/errwrap
|
||||
30 vendor/github.com/hashicorp/go-cleanhttp/README.md generated vendored Normal file
@@ -0,0 +1,30 @@
|
||||
# cleanhttp
|
||||
|
||||
Functions for accessing "clean" Go http.Client values
|
||||
|
||||
-------------
|
||||
|
||||
The Go standard library contains a default `http.Client` called
|
||||
`http.DefaultClient`. It is a common idiom in Go code to start with
|
||||
`http.DefaultClient` and tweak it as necessary, and in fact, this is
|
||||
encouraged; from the `http` package documentation:
|
||||
|
||||
> The Client's Transport typically has internal state (cached TCP connections),
|
||||
so Clients should be reused instead of created as needed. Clients are safe for
|
||||
concurrent use by multiple goroutines.
|
||||
|
||||
Unfortunately, this is a shared value, and it is not uncommon for libraries to
|
||||
assume that they are free to modify it at will. With enough dependencies, it
|
||||
can be very easy to encounter strange problems and race conditions due to
|
||||
manipulation of this shared value across libraries and goroutines (clients are
|
||||
safe for concurrent use, but writing values to the client struct itself is not
|
||||
protected).
|
||||
|
||||
Making things worse is the fact that a bare `http.Client` will use a default
|
||||
`http.Transport` called `http.DefaultTransport`, which is another global value
|
||||
that behaves the same way. So it is not simply enough to replace
|
||||
`http.DefaultClient` with `&http.Client{}`.
|
||||
|
||||
This repository provides some simple functions to get a "clean" `http.Client`
|
||||
-- one that uses the same default values as the Go standard library, but
|
||||
returns a client that does not share any state with other clients.
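A brief sketch of the intended usage; DefaultClient is one of the package's constructors, and the URL is a placeholder.

```go
package main

import (
	"log"

	cleanhttp "github.com/hashicorp/go-cleanhttp"
)

func main() {
	// A fresh client with stdlib-equivalent defaults and no shared state.
	client := cleanhttp.DefaultClient()

	resp, err := client.Get("https://example.com/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	log.Println("status:", resp.Status)
}
```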
|
||||
1 vendor/github.com/hashicorp/go-cleanhttp/go.mod generated vendored Normal file
@@ -0,0 +1 @@
|
||||
module github.com/hashicorp/go-cleanhttp
|
||||
24 vendor/github.com/hashicorp/go-immutable-radix/.gitignore generated vendored Normal file
@@ -0,0 +1,24 @@
|
||||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
|
||||
*.o
|
||||
*.a
|
||||
*.so
|
||||
|
||||
# Folders
|
||||
_obj
|
||||
_test
|
||||
|
||||
# Architecture specific extensions/prefixes
|
||||
*.[568vq]
|
||||
[568vq].out
|
||||
|
||||
*.cgo1.go
|
||||
*.cgo2.c
|
||||
_cgo_defun.c
|
||||
_cgo_gotypes.go
|
||||
_cgo_export.*
|
||||
|
||||
_testmain.go
|
||||
|
||||
*.exe
|
||||
*.test
|
||||
*.prof
|
||||
3 vendor/github.com/hashicorp/go-immutable-radix/.travis.yml generated vendored Normal file
@@ -0,0 +1,3 @@
|
||||
language: go
|
||||
go:
|
||||
- tip
|
||||
41 vendor/github.com/hashicorp/go-immutable-radix/README.md generated vendored Normal file
@@ -0,0 +1,41 @@
|
||||
go-immutable-radix [](https://travis-ci.org/hashicorp/go-immutable-radix)
|
||||
=========
|
||||
|
||||
Provides the `iradix` package that implements an immutable [radix tree](http://en.wikipedia.org/wiki/Radix_tree).
|
||||
The package only provides a single `Tree` implementation, optimized for sparse nodes.
|
||||
|
||||
As a radix tree, it provides the following:
|
||||
* O(k) operations. In many cases, this can be faster than a hash table since
|
||||
the hash function is an O(k) operation, and hash tables have very poor cache locality.
|
||||
* Minimum / Maximum value lookups
|
||||
* Ordered iteration
|
||||
|
||||
A tree supports using a transaction to batch multiple updates (insert, delete)
|
||||
in a more efficient manner than performing each operation one at a time.
|
||||
|
||||
For a mutable variant, see [go-radix](https://github.com/armon/go-radix).
|
||||
|
||||
Documentation
|
||||
=============
|
||||
|
||||
The full documentation is available on [Godoc](http://godoc.org/github.com/hashicorp/go-immutable-radix).
|
||||
|
||||
Example
|
||||
=======
|
||||
|
||||
Below is a simple example of usage
|
||||
|
||||
```go
|
||||
// Create a tree
|
||||
r := iradix.New()
|
||||
r, _, _ = r.Insert([]byte("foo"), 1)
|
||||
r, _, _ = r.Insert([]byte("bar"), 2)
|
||||
r, _, _ = r.Insert([]byte("foobar"), 2)
|
||||
|
||||
// Find the longest prefix match
|
||||
m, _, _ := r.Root().LongestPrefix([]byte("foozip"))
|
||||
if string(m) != "foo" {
|
||||
panic("should be foo")
|
||||
}
|
||||
```
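The transaction support mentioned above is not shown in the example; here is a hedged sketch using the Txn API (method names assumed from the package, so treat it as illustrative).

```go
package main

import (
	"fmt"

	iradix "github.com/hashicorp/go-immutable-radix"
)

func main() {
	r := iradix.New()

	// Batch several updates in one transaction instead of producing an
	// intermediate tree for every single operation.
	txn := r.Txn()
	txn.Insert([]byte("foo"), 1)
	txn.Insert([]byte("bar"), 2)
	txn.Delete([]byte("bar"))
	r = txn.Commit()

	v, ok := r.Get([]byte("foo"))
	fmt.Println(v, ok) // 1 true
}
```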
|
||||
|
||||
6 vendor/github.com/hashicorp/go-immutable-radix/go.mod generated vendored Normal file
@@ -0,0 +1,6 @@
|
||||
module github.com/hashicorp/go-immutable-radix
|
||||
|
||||
require (
|
||||
github.com/hashicorp/go-uuid v1.0.0
|
||||
github.com/hashicorp/golang-lru v0.5.0
|
||||
)
|
||||
4 vendor/github.com/hashicorp/go-immutable-radix/go.sum generated vendored Normal file
@@ -0,0 +1,4 @@
|
||||
github.com/hashicorp/go-uuid v1.0.0 h1:RS8zrF7PhGwyNPOtxSClXXj9HA8feRnJzgnI1RJCSnM=
|
||||
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
|
||||
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
12 vendor/github.com/hashicorp/go-multierror/.travis.yml generated vendored Normal file
@@ -0,0 +1,12 @@
|
||||
sudo: false
|
||||
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.x
|
||||
|
||||
branches:
|
||||
only:
|
||||
- master
|
||||
|
||||
script: make test testrace
|
||||
31 vendor/github.com/hashicorp/go-multierror/Makefile generated vendored Normal file
@@ -0,0 +1,31 @@
|
||||
TEST?=./...
|
||||
|
||||
default: test
|
||||
|
||||
# test runs the test suite and vets the code.
|
||||
test: generate
|
||||
@echo "==> Running tests..."
|
||||
@go list $(TEST) \
|
||||
| grep -v "/vendor/" \
|
||||
| xargs -n1 go test -timeout=60s -parallel=10 ${TESTARGS}
|
||||
|
||||
# testrace runs the race checker
|
||||
testrace: generate
|
||||
@echo "==> Running tests (race)..."
|
||||
@go list $(TEST) \
|
||||
| grep -v "/vendor/" \
|
||||
| xargs -n1 go test -timeout=60s -race ${TESTARGS}
|
||||
|
||||
# updatedeps installs all the dependencies needed to run and build.
|
||||
updatedeps:
|
||||
@sh -c "'${CURDIR}/scripts/deps.sh' '${NAME}'"
|
||||
|
||||
# generate runs `go generate` to build the dynamically generated source files.
|
||||
generate:
|
||||
@echo "==> Generating..."
|
||||
@find . -type f -name '.DS_Store' -delete
|
||||
@go list ./... \
|
||||
| grep -v "/vendor/" \
|
||||
| xargs -n1 go generate
|
||||
|
||||
.PHONY: default test testrace updatedeps generate
|
||||
97 vendor/github.com/hashicorp/go-multierror/README.md generated vendored Normal file
@@ -0,0 +1,97 @@
|
||||
# go-multierror
|
||||
|
||||
[][travis]
|
||||
[][godocs]
|
||||
|
||||
[travis]: https://travis-ci.org/hashicorp/go-multierror
|
||||
[godocs]: https://godoc.org/github.com/hashicorp/go-multierror
|
||||
|
||||
`go-multierror` is a package for Go that provides a mechanism for
|
||||
representing a list of `error` values as a single `error`.
|
||||
|
||||
This allows a function in Go to return an `error` that might actually
|
||||
be a list of errors. If the caller knows this, they can unwrap the
|
||||
list and access the errors. If the caller doesn't know, the error
|
||||
formats to a nice human-readable format.
|
||||
|
||||
`go-multierror` implements the
|
||||
[errwrap](https://github.com/hashicorp/errwrap) interface so that it can
|
||||
be used with that library, as well.
|
||||
|
||||
## Installation and Docs
|
||||
|
||||
Install using `go get github.com/hashicorp/go-multierror`.
|
||||
|
||||
Full documentation is available at
|
||||
http://godoc.org/github.com/hashicorp/go-multierror
|
||||
|
||||
## Usage
|
||||
|
||||
go-multierror is easy to use and purposely built to be unobtrusive in
|
||||
existing Go applications/libraries that may not be aware of it.
|
||||
|
||||
**Building a list of errors**
|
||||
|
||||
The `Append` function is used to create a list of errors. This function
|
||||
behaves a lot like the Go built-in `append` function: it doesn't matter
|
||||
if the first argument is nil, a `multierror.Error`, or any other `error`,
|
||||
the function behaves as you would expect.
|
||||
|
||||
```go
|
||||
var result error
|
||||
|
||||
if err := step1(); err != nil {
|
||||
result = multierror.Append(result, err)
|
||||
}
|
||||
if err := step2(); err != nil {
|
||||
result = multierror.Append(result, err)
|
||||
}
|
||||
|
||||
return result
|
||||
```
|
||||
|
||||
**Customizing the formatting of the errors**
|
||||
|
||||
By specifying a custom `ErrorFormat`, you can customize the format
|
||||
of the `Error() string` function:
|
||||
|
||||
```go
|
||||
var result *multierror.Error
|
||||
|
||||
// ... accumulate errors here, maybe using Append
|
||||
|
||||
if result != nil {
|
||||
result.ErrorFormat = func([]error) string {
|
||||
return "errors!"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Accessing the list of errors**
|
||||
|
||||
`multierror.Error` implements `error` so if the caller doesn't know about
|
||||
multierror, it will work just fine. But if you're aware a multierror might
|
||||
be returned, you can use type switches to access the list of errors:
|
||||
|
||||
```go
|
||||
if err := something(); err != nil {
|
||||
if merr, ok := err.(*multierror.Error); ok {
|
||||
// Use merr.Errors
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Returning a multierror only if there are errors**
|
||||
|
||||
If you build a `multierror.Error`, you can use the `ErrorOrNil` function
|
||||
to return an `error` implementation only if there are errors to return:
|
||||
|
||||
```go
|
||||
var result *multierror.Error
|
||||
|
||||
// ... accumulate errors here
|
||||
|
||||
// Return the `error` only if errors were added to the multierror, otherwise
|
||||
// return nil since there are no errors.
|
||||
return result.ErrorOrNil()
|
||||
```
|
||||
3 vendor/github.com/hashicorp/go-multierror/go.mod generated vendored Normal file
@@ -0,0 +1,3 @@
|
||||
module github.com/hashicorp/go-multierror
|
||||
|
||||
require github.com/hashicorp/errwrap v1.0.0
|
||||
4 vendor/github.com/hashicorp/go-multierror/go.sum generated vendored Normal file
@@ -0,0 +1,4 @@
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce h1:prjrVgOk2Yg6w+PflHoszQNLTUh4kaByUcEWM/9uin4=
|
||||
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
3 vendor/github.com/hashicorp/go-retryablehttp/.gitignore generated vendored Normal file
@@ -0,0 +1,3 @@
|
||||
.idea/
|
||||
*.iml
|
||||
*.test
|
||||
12 vendor/github.com/hashicorp/go-retryablehttp/.travis.yml generated vendored Normal file
@@ -0,0 +1,12 @@
|
||||
sudo: false
|
||||
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.8.1
|
||||
|
||||
branches:
|
||||
only:
|
||||
- master
|
||||
|
||||
script: make updatedeps test
|
||||
11 vendor/github.com/hashicorp/go-retryablehttp/Makefile generated vendored Normal file
@@ -0,0 +1,11 @@
|
||||
default: test
|
||||
|
||||
test:
|
||||
go vet ./...
|
||||
go test -race ./...
|
||||
|
||||
updatedeps:
|
||||
go get -f -t -u ./...
|
||||
go get -f -u ./...
|
||||
|
||||
.PHONY: default test updatedeps
|
||||
46 vendor/github.com/hashicorp/go-retryablehttp/README.md generated vendored Normal file
@@ -0,0 +1,46 @@
|
||||
go-retryablehttp
|
||||
================
|
||||
|
||||
[][travis]
|
||||
[][godocs]
|
||||
|
||||
[travis]: http://travis-ci.org/hashicorp/go-retryablehttp
|
||||
[godocs]: http://godoc.org/github.com/hashicorp/go-retryablehttp
|
||||
|
||||
The `retryablehttp` package provides a familiar HTTP client interface with
|
||||
automatic retries and exponential backoff. It is a thin wrapper over the
|
||||
standard `net/http` client library and exposes nearly the same public API. This
|
||||
makes `retryablehttp` very easy to drop into existing programs.
|
||||
|
||||
`retryablehttp` performs automatic retries under certain conditions. Mainly, if
|
||||
an error is returned by the client (connection errors, etc.), or if a 500-range
|
||||
response code is received (except 501), then a retry is invoked after a wait
|
||||
period. Otherwise, the response is returned and left to the caller to
|
||||
interpret.
|
||||
|
||||
The main difference from `net/http` is that requests which take a request body
|
||||
(POST/PUT et al.) can have the body provided in a number of ways (some more or
|
||||
less efficient) that allow "rewinding" the request body if the initial request
|
||||
fails so that the full request can be attempted again. See the
|
||||
[godoc](http://godoc.org/github.com/hashicorp/go-retryablehttp) for more
|
||||
details.
|
||||
|
||||
Example Use
|
||||
===========
|
||||
|
||||
Using this library should look almost identical to what you would do with
|
||||
`net/http`. The most simple example of a GET request is shown below:
|
||||
|
||||
```go
|
||||
resp, err := retryablehttp.Get("/foo")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
```
|
||||
|
||||
The returned response object is an `*http.Response`, the same thing you would
|
||||
usually get from `net/http`. Had the request failed one or more times, the above
|
||||
call would block and retry with exponential backoff.
|
||||
|
||||
For more usage and examples see the
|
||||
[godoc](http://godoc.org/github.com/hashicorp/go-retryablehttp).
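A small sketch of tuning the retry policy described above; the client fields are taken from the package's godoc, and the URL is a placeholder.

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

func main() {
	// Tune the retry policy instead of using the package-level Get helper.
	client := retryablehttp.NewClient()
	client.RetryMax = 5
	client.RetryWaitMin = 500 * time.Millisecond
	client.RetryWaitMax = 10 * time.Second

	resp, err := client.Get("https://example.com/foo") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	log.Println("status:", resp.Status)
}
```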
|
||||
Some files were not shown because too many files have changed in this diff.