76 Commits

Author SHA1 Message Date
dependabot[bot]
6037c98a77 Bump github.com/prometheus/client_golang from 1.19.0 to 1.19.1 (#181)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.19.0 to 1.19.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.19.0...v1.19.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-03 16:40:23 +01:00
dependabot[bot]
ad237243c4 Bump golang.org/x/crypto from 0.22.0 to 0.25.0 (#188)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.22.0 to 0.25.0.
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-03 16:39:44 +01:00
Rob Best
1b8a0c3b93 Use custom User-Agent header (#178) 2024-04-28 18:54:55 +01:00
Rob Best
dd2a9a2e71 Add test for TRUSTED CERTIFICATE block (#177)
Not the best test in the world but at least it verifies that we read
this block into a certificate.
2024-04-28 18:16:04 +01:00
MisterVVP
1ec0cd6dc7 fix: support parsing of openssl specific cert formats (#142) 2024-04-28 17:44:52 +01:00
jaroug
515b990f52 Add http_file prober (#144)
* feat: add remote_file probe

* fix: use tls module config

* chore: write http/https tests for probing remote file

* chore: get rid of useless lines

* fix: get rid of useless file download, check body directly

* fix: use checkCertificateMetrics to actually check values

* Rename remote_file to http_file

You can fetch remote content with a lot of different protocols, so I
think it's worth being specific here.

As part of this change I've fixed up some of the logic in the code. I've
also created a separate `http_file` block in the module config.

* Actually include renamed files

---------

Co-authored-by: Anthony LE BERRE <aleberre@veepee.com>
Co-authored-by: Rob Best <rob.best@jetstack.io>
2024-04-28 16:48:09 +01:00
Rob Best
4cb38cb268 Stop running tests twice on PR (#176) 2024-04-28 16:45:53 +01:00
Rob Best
3f34a7b234 Bump modules (#174) 2024-04-28 12:22:42 +01:00
dependabot[bot]
e810f50bea Bump github.com/prometheus/common from 0.48.0 to 0.53.0 (#172)
* Bump github.com/prometheus/common from 0.48.0 to 0.53.0

Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.48.0 to 0.53.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.48.0...v0.53.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix: NewCollector

* chore: bump to Go 1.22

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Rob Best <rob.best@jetstack.io>
2024-04-28 12:08:17 +01:00
dependabot[bot]
ccea4e9ec4 Bump golang.org/x/crypto from 0.21.0 to 0.22.0 (#170)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.21.0 to 0.22.0.
- [Commits](https://github.com/golang/crypto/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 15:59:49 +01:00
dependabot[bot]
1cedc5a542 Bump github.com/prometheus/client_model from 0.6.0 to 0.6.1 (#169)
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.6.0 to 0.6.1.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.6.0...v0.6.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-18 15:59:33 +01:00
Dmitriy Alekseev
3424423d4a Fix tcp starttls test for smtp (#167)
* Fix tcp starttls test for smtp

* Update tcp.go

* Update tcp_test.go

* Update test/tcp.go

Co-authored-by: Rob Best <robertbest89@gmail.com>

* Update tcp_test.go

---------

Co-authored-by: Rob Best <robertbest89@gmail.com>
2024-03-22 12:13:21 +01:00
dependabot[bot]
07786ae452 Bump golang.org/x/crypto from 0.17.0 to 0.21.0 (#165)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.17.0 to 0.21.0.
- [Commits](https://github.com/golang/crypto/compare/v0.17.0...v0.21.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:52:56 +01:00
dependabot[bot]
734396b9fb Bump github.com/prometheus/client_model from 0.5.0 to 0.6.0 (#161)
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.5.0 to 0.6.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:52:28 +01:00
dependabot[bot]
76e1464643 Bump docker/setup-qemu-action from 2 to 3 (#154)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:47:33 +01:00
dependabot[bot]
7d95b17e92 Bump actions/checkout from 3 to 4 (#153)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:45:09 +01:00
dependabot[bot]
9725277499 Bump docker/login-action from 2 to 3 (#152)
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:44:28 +01:00
dependabot[bot]
6317311727 Bump actions/upload-artifact from 3 to 4 (#151)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:44:06 +01:00
dependabot[bot]
433631e903 Bump actions/setup-go from 4 to 5 (#150)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:43:42 +01:00
dependabot[bot]
2e6396494a Bump github.com/prometheus/client_golang from 1.18.0 to 1.19.0 (#166)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.18.0 to 1.19.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.18.0...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:42:34 +01:00
dependabot[bot]
a537a91f4d Bump google.golang.org/protobuf from 1.32.0 to 1.33.0 (#168)
Bumps google.golang.org/protobuf from 1.32.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 13:41:57 +01:00
wyfrel
376cb7ebba fix description (#155) 2024-03-20 13:30:26 +01:00
dependabot[bot]
890c51077c Bump actions/setup-go from 3 to 4 (#131)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 16:55:08 +00:00
Rob Best
3a594bc445 Bump modules and Go version (#148)
* Bump modules and Go version

* Bump go version in github actions
2024-01-05 16:41:58 +00:00
dependabot[bot]
4526fea4ad Bump golang.org/x/text from 0.3.7 to 0.3.8 (#126)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.7...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-13 00:05:29 +00:00
dependabot[bot]
e8e2ba084c Bump github.com/prometheus/client_model from 0.2.0 to 0.3.0 (#117)
Bumps [github.com/prometheus/client_model](https://github.com/prometheus/client_model) from 0.2.0 to 0.3.0.
- [Release notes](https://github.com/prometheus/client_model/releases)
- [Commits](https://github.com/prometheus/client_model/compare/v0.2.0...v0.3.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_model
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-13 00:04:51 +00:00
dependabot[bot]
9d2fefefef Bump github.com/prometheus/client_golang from 1.12.2 to 1.13.0 (#110)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.12.2 to 1.13.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.12.2...v1.13.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-29 13:47:22 +01:00
Rob Best
8b30e0983c Bump dependencies (#108)
* Bump dependencies

* Go 1.18

* go mod tidy
2022-07-15 11:32:47 +01:00
dependabot[bot]
0c3452869d Bump github.com/prometheus/common from 0.35.0 to 0.36.0 (#106)
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.35.0 to 0.36.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-12 05:48:27 +01:00
dependabot[bot]
cad7f2a3e3 Bump github.com/prometheus/common from 0.34.0 to 0.35.0 (#104)
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.34.0 to 0.35.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.34.0...v0.35.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-08 09:54:03 +01:00
dependabot[bot]
57395f8901 Bump github.com/go-kit/log from 0.2.0 to 0.2.1 (#99)
Bumps [github.com/go-kit/log](https://github.com/go-kit/log) from 0.2.0 to 0.2.1.
- [Release notes](https://github.com/go-kit/log/releases)
- [Commits](https://github.com/go-kit/log/compare/v0.2.0...v0.2.1)

---
updated-dependencies:
- dependency-name: github.com/go-kit/log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-17 06:04:57 +01:00
dependabot[bot]
9c5ba75ff8 Bump docker/setup-qemu-action from 1 to 2 (#96)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-17 05:35:43 +01:00
dependabot[bot]
8f808b7698 Bump docker/login-action from 1 to 2 (#97)
Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-17 05:35:20 +01:00
dependabot[bot]
120cbe636d Bump github.com/prometheus/client_golang from 1.12.1 to 1.12.2 (#98)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.12.1 to 1.12.2.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.12.1...v1.12.2)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-17 05:34:42 +01:00
manishpanjwani21
67a8b2d393 Update README.md (#90)
* Update README.md

Added the Example for type - kubernetes

* Update README.md

Co-authored-by: Rob Best <robertbest89@gmail.com>
2022-05-09 17:56:42 +01:00
Rob Best
52fb44781c Amend module path for v2 2022-05-07 09:33:55 +01:00
dependabot[bot]
793444c203 Bump actions/setup-go from 1 to 3 (#94)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 1 to 3.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v1...v3)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-06 18:36:25 +01:00
dependabot[bot]
a4b90c67c5 Bump actions/checkout from 2 to 3 (#95)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-06 18:36:03 +01:00
dependabot[bot]
ee7c7c64de Bump actions/upload-artifact from 2 to 3 (#93)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-06 18:35:46 +01:00
Rob Best
cd7adea7cb Update kubernetes deps 2022-05-05 15:21:54 +01:00
Rob Best
ee51a3ec43 Update go.mod and add dependabot 2022-05-05 14:53:10 +01:00
teuto.net Netzdienste GmbH
65249bc2e7 added pop3 STARTTLS queryResponse (#84)
* added pop3 STARTTLS queryResponse

* implemented pop3 test, added pop3 starttls parameter to README

Co-authored-by: Timo Boldt <tb@teuto.net>
2021-12-31 13:47:05 +00:00
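
For context on what that queryResponse implements, here is a minimal, illustrative sketch of the POP3 STLS exchange (RFC 2595): read the server greeting, send STLS, expect +OK, then start TLS. It is not the exporter's implementation, and the host name is made up.

```go
package main

import (
	"bufio"
	"crypto/tls"
	"fmt"
	"net"
	"strings"
)

// startPOP3TLS upgrades a plain POP3 connection to TLS using the STLS command.
func startPOP3TLS(conn net.Conn, tlsCfg *tls.Config) (*tls.Conn, error) {
	r := bufio.NewReader(conn)

	// Server greeting, e.g. "+OK POP3 ready".
	greeting, err := r.ReadString('\n')
	if err != nil {
		return nil, err
	}
	if !strings.HasPrefix(greeting, "+OK") {
		return nil, fmt.Errorf("unexpected greeting: %q", greeting)
	}

	// Ask the server to start TLS.
	if _, err := conn.Write([]byte("STLS\r\n")); err != nil {
		return nil, err
	}
	resp, err := r.ReadString('\n')
	if err != nil {
		return nil, err
	}
	if !strings.HasPrefix(resp, "+OK") {
		return nil, fmt.Errorf("STLS refused: %q", resp)
	}
	return tls.Client(conn, tlsCfg), nil
}

func main() {
	// Example host; POP3 with STLS typically listens on port 110.
	conn, err := net.Dial("tcp", "mail.example.com:110")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	tlsConn, err := startPOP3TLS(conn, &tls.Config{ServerName: "mail.example.com"})
	if err != nil {
		panic(err)
	}
	if err := tlsConn.Handshake(); err != nil {
		panic(err)
	}
	fmt.Printf("negotiated TLS version: %x\n", tlsConn.ConnectionState().Version)
}
```
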
Rob Best
8e318931c5 Build multi-arch docker images 2021-12-31 11:59:20 +00:00
Rob Best
d38244bc39 Release additional archs 2021-12-30 19:39:43 +00:00
Rob Best
02d61835e8 Add default_module and target options
If default_module is set then the exporter will use that when the module
parameter isn't set.

If target is set for a module then the module will use that target,
ignoring the target parameter completely.
2021-12-23 15:28:41 +00:00
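
A small sketch of how these two options fit into the configuration file. The struct types below are stripped-down stand-ins rather than the exporter's own config package; the module name and target echo the example config shown in the diff further down this page.

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v3"
)

// Minimal local copies of the exporter's config shapes, for illustration only.
type module struct {
	Prober string `yaml:"prober"`
	Target string `yaml:"target,omitempty"`
}

type exporterConfig struct {
	DefaultModule string            `yaml:"default_module"`
	Modules       map[string]module `yaml:"modules"`
}

func main() {
	// default_module is used when the 'module' query parameter is omitted;
	// a module-level target makes the exporter ignore the 'target' parameter.
	raw := `
default_module: https
modules:
  https:
    prober: https
  file_ca_certificates:
    prober: file
    target: /etc/ssl/certs/ca-certificates.crt
`
	var cfg exporterConfig
	if err := yaml.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```
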
Rob Best
087c407585 Move grafana dashboard to contrib
I'm not actively maintaining this and a contrib directory indicates
that.
2021-12-23 13:36:54 +00:00
Rob Best
d475f3abd2 Update release instructions 2021-12-23 13:22:10 +00:00
Vegar Sechmann Molvig
a8dcb43b44 Use FieldSelector to select only tls secrets (#82)
This speeds up the listing of certs significantly in clusters with many secrets.
2021-12-23 13:18:24 +00:00
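
Roughly, restricting the listing to TLS secrets with a client-go field selector looks like the sketch below; the kubeconfig loading and the kube-system namespace are illustrative, not the exporter's exact wiring.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func listTLSSecrets(clientset kubernetes.Interface, namespace string) ([]corev1.Secret, error) {
	// Ask the API server to return only secrets of type kubernetes.io/tls,
	// instead of listing everything and filtering client-side.
	secrets, err := clientset.CoreV1().Secrets(namespace).List(context.Background(), metav1.ListOptions{
		FieldSelector: "type=" + string(corev1.SecretTypeTLS),
	})
	if err != nil {
		return nil, err
	}
	return secrets.Items, nil
}

func main() {
	// Illustrative wiring: load the default kubeconfig and list TLS secrets
	// in kube-system.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	secrets, err := listTLSSecrets(clientset, "kube-system")
	if err != nil {
		panic(err)
	}
	for _, s := range secrets {
		fmt.Println(s.Namespace + "/" + s.Name)
	}
}
```
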
Rob Best
0b960631e6 CI improvements 2021-12-23 12:38:02 +00:00
Rob Best
88198bf608 Install goreleaser with go install 2021-12-23 12:33:58 +00:00
Rob Best
b5b2729d01 Go 1.17 and update deps 2021-12-23 12:32:51 +00:00
Ben Ritcey
43dee906c6 Support TLS renegotiation (#83)
* Support TLS renegotiation

* Bump version

* Revert version bump

* Extend TLSConfig with renegotiation support

* Update config/config.go - comment formatting

Co-authored-by: Rob Best <robertbest89@gmail.com>

* add dedicated renegotiation example

* Create local NewTLSConfig in order to incorporate local extensions

* go mod tidy

* Move TLS renegotiation parsing into UnmarshalYAML

Co-authored-by: Rob Best <robertbest89@gmail.com>
2021-12-09 08:34:59 +00:00
Rob Best
78306b97c9 actions: push to Docker Hub 2021-09-11 13:06:47 +01:00
Rob Best
08d9a665b6 Release 2.3.1 2021-08-23 17:44:15 +01:00
Tarvi Pillessaar
a94845ae5d Add support for postgresql protocol (#77)
To initiate an SSL-encrypted connection with PostgreSQL, a specific
combination of bytes must be sent to the server.

The message flow is described at
https://www.postgresql.org/docs/13/protocol-flow.html#id-1.10.5.7.11

and the SSLRequest message format at
https://www.postgresql.org/docs/13/protocol-message-formats.html

The value of the SSLRequest message maps directly to the bytes used in the code.
2021-08-23 08:39:40 +01:00
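
As a generic illustration of the linked protocol documents (not the exporter's code): the client sends an 8-byte SSLRequest, the length 8 followed by the request code 80877103, and the server replies with a single byte, 'S' to proceed with TLS or 'N' to refuse.

```go
package main

import (
	"crypto/tls"
	"encoding/binary"
	"fmt"
	"net"
)

// startPostgresTLS performs the PostgreSQL SSLRequest dance on an existing
// connection and, if the server agrees, wraps it in TLS.
func startPostgresTLS(conn net.Conn, tlsCfg *tls.Config) (*tls.Conn, error) {
	// SSLRequest: Int32 length (8) followed by Int32 code 80877103
	// (1234 in the high 16 bits, 5679 in the low 16 bits).
	msg := make([]byte, 8)
	binary.BigEndian.PutUint32(msg[0:4], 8)
	binary.BigEndian.PutUint32(msg[4:8], 80877103)
	if _, err := conn.Write(msg); err != nil {
		return nil, err
	}

	// The server replies with a single byte: 'S' (TLS ok) or 'N' (refused).
	reply := make([]byte, 1)
	if _, err := conn.Read(reply); err != nil {
		return nil, err
	}
	if reply[0] != 'S' {
		return nil, fmt.Errorf("server refused SSL: %q", reply[0])
	}
	return tls.Client(conn, tlsCfg), nil
}

func main() {
	// Illustrative usage against a local server; host and port are made up.
	conn, err := net.Dial("tcp", "localhost:5432")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	tlsConn, err := startPostgresTLS(conn, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	if err := tlsConn.Handshake(); err != nil {
		panic(err)
	}
	fmt.Printf("negotiated TLS version: %x\n", tlsConn.ConnectionState().Version)
}
```
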
Johan Fleury
ef1a35d69f Update dependencies (#76)
* Update dependencies

Fixes #75

* Remove vendor directory
2021-08-07 18:58:33 +01:00
Rob Best
4aaa67e80a Release 2.2.1 2021-06-23 17:28:29 +01:00
Johan Fleury
83f01274fc Move to github.com/prometheus/common/promlog for logging (#71)
* Move to yaml.v3 everywhere

* Switch to github.com/prometheus/common/promlog for logging
2021-06-23 17:22:22 +01:00
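
A rough, hedged sketch of what promlog wiring looks like; the level value and log message are invented, and the exporter's real flag handling is more involved.

```go
package main

import (
	"github.com/go-kit/log/level"
	"github.com/prometheus/common/promlog"
)

func main() {
	// Build a go-kit logger via promlog; "info" is just an example level.
	lvl := &promlog.AllowedLevel{}
	if err := lvl.Set("info"); err != nil {
		panic(err)
	}
	logger := promlog.New(&promlog.Config{Level: lvl})
	level.Info(logger).Log("msg", "ssl_exporter started")
}
```
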
Rob Best
d5cbd64f94 Update README.md
- Remove TOC (Github provides one)
- Add quicker links at the top of the doc
2021-05-05 22:48:07 +01:00
treydock
5265251777 Support getting certificate information from a kubeconfig file (#61)
* Support getting certificate information from a kubeconfig file

* Support relative paths for cluster CA and user certificate in kubeconfig

* Determine relative using filepath.IsAbs

* Make relative path logic actually work, add test. Move all kubeconfig parsing into parsing specific function
2021-04-02 10:53:31 +01:00
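
The relative-path handling described above boils down to something like the sketch below, assuming certificate paths in a kubeconfig are resolved against the directory containing the kubeconfig file; the function and inputs are illustrative.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveKubeconfigPath interprets a certificate path from a kubeconfig:
// absolute paths are returned unchanged, relative paths are resolved
// against the directory containing the kubeconfig file.
func resolveKubeconfigPath(kubeconfigPath, certPath string) string {
	if certPath == "" || filepath.IsAbs(certPath) {
		return certPath
	}
	return filepath.Join(filepath.Dir(kubeconfigPath), certPath)
}

func main() {
	// Illustrative inputs.
	fmt.Println(resolveKubeconfigPath("/etc/kubernetes/admin.conf", "pki/ca.crt"))
	fmt.Println(resolveKubeconfigPath("/etc/kubernetes/admin.conf", "/etc/kubernetes/pki/ca.crt"))
}
```
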
duchuan
b37574b48f [added] release target add mips64le (#65)
Co-authored-by: duchuanLX <duhchuan@loongson.cn>
2021-02-28 11:00:10 +00:00
Rob Best
5d3ac12e65 release 2.2.0 2020-12-07 20:18:38 +00:00
Rob Best
44d8713091 Add test for TLS version metric 2020-11-19 22:53:47 +00:00
Rob Best
8cde56ce6a Fix examples in the README 2020-11-16 08:47:52 +00:00
Rob Best
fdda9c3eca Add prober column to metrics table 2020-11-16 08:40:48 +00:00
Rob Best
d92d7bed30 Add file prober to example config 2020-11-16 00:49:31 +00:00
Rob Best
ca7aa1f14e Fix golint errors 2020-11-16 00:48:15 +00:00
Rob Best
13a03b1e2b Move tests to prober package 2020-11-16 00:41:36 +00:00
Rob Best
67539b6000 Use same results check for file + kube probes 2020-11-15 22:39:38 +00:00
Rob Best
f4782e3093 Make the description in the README more succinct 2020-11-15 22:28:36 +00:00
Rob Best
63dcb9aff1 Add kubernetes prober 2020-11-15 22:12:18 +00:00
Rob Best
0506638f63 Add file prober 2020-11-15 13:59:51 +00:00
Rob Best
c74c0de901 Refactor prober function and metrics collection
The existing implementation consists of a collector that exports
information from a tls.ConnectionState returned by the prober function.
This won't necessarily integrate well with additional probers that
retrieve certs from sources other than a tls handshake (from file, for
instance).

I've made the probing more generically expandable by removing the
collector and instead registering and collecting metrics inside the
prober. This makes it possible to collect the same metrics in a
different way, or collect different metrics depending on the prober.
2020-11-07 17:17:06 +00:00
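
A sketch of the shape this refactor describes: each prober is a function handed a per-probe registry and registers whatever metrics suit its certificate source. The signature, handler wiring and placeholder gauge value below are illustrative, not copied from the repository.

```go
package main

import (
	"context"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// probeFn is an illustrative prober shape: each probe gets its own registry
// and registers whatever metrics suit its certificate source.
type probeFn func(ctx context.Context, target string, registry *prometheus.Registry) error

// probeFile is a stand-in prober that registers a file-specific metric.
func probeFile(ctx context.Context, target string, registry *prometheus.Registry) error {
	notAfter := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "ssl_file_cert_not_after",
			Help: "NotAfter expressed as a Unix Epoch Time",
		},
		[]string{"file"},
	)
	registry.MustRegister(notAfter)
	// ... parse certificates from 'target' and set the gauge per file ...
	notAfter.WithLabelValues(target).Set(0) // placeholder value
	return nil
}

var _ probeFn = probeFile // the stand-in satisfies the prober shape

func main() {
	// Per-scrape wiring: a fresh registry per probe, gathered back to the scraper.
	http.HandleFunc("/probe", func(w http.ResponseWriter, r *http.Request) {
		registry := prometheus.NewRegistry()
		_ = probeFile(r.Context(), r.URL.Query().Get("target"), registry)
		promhttp.HandlerFor(registry, promhttp.HandlerOpts{}).ServeHTTP(w, r)
	})
	_ = http.ListenAndServe(":9219", nil)
}
```
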
Rob Best
e05745b959 Export OCSP stapling metrics (#54)
* Export OCSP stapling metrics

* Add ocsp_response_stapled boolean

* Add missing ocsp_this_update metric to README
2020-10-27 09:10:42 +00:00
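
A hedged sketch of how a stapled OCSP response can be inspected with golang.org/x/crypto/ocsp; the target host is an example and error handling is minimal.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"

	"golang.org/x/crypto/ocsp"
)

func main() {
	// Example target; any server that staples OCSP responses will do.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.OCSPResponse) == 0 {
		fmt.Println("ssl_ocsp_response_stapled 0")
		return
	}
	fmt.Println("ssl_ocsp_response_stapled 1")

	// Use the issuer (second cert in the chain) to verify the response, if present.
	var issuer *x509.Certificate
	if len(state.PeerCertificates) > 1 {
		issuer = state.PeerCertificates[1]
	}
	resp, err := ocsp.ParseResponse(state.OCSPResponse, issuer)
	if err != nil {
		panic(err)
	}

	// These fields map naturally onto the exported ssl_ocsp_response_* metrics.
	fmt.Println("ssl_ocsp_response_status", resp.Status)
	fmt.Println("ssl_ocsp_response_produced_at", resp.ProducedAt.Unix())
	fmt.Println("ssl_ocsp_response_this_update", resp.ThisUpdate.Unix())
	fmt.Println("ssl_ocsp_response_next_update", resp.NextUpdate.Unix())
	fmt.Println("ssl_ocsp_response_revoked_at", resp.RevokedAt.Unix())
}
```
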
Rob Best
896b59b1fe Update deps && go 1.15 2020-10-18 16:48:23 +01:00
Rob Best
119d3cd200 Add a configurable timeout to the module configuration (#55) 2020-10-09 16:47:21 +01:00
860 changed files with 3783 additions and 355729 deletions

.github/dependabot.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"

@@ -1,24 +0,0 @@
name: build
on: [push, pull_request]
jobs:
build:
name: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Build the Docker image
run: docker build .
- name: Set up Go
uses: actions/setup-go@v1
with:
go-version: 1.14.x
- name: Build release snapshot
run: make snapshot
- name: Archive release snapshot
uses: actions/upload-artifact@v2
with:
name: release-snapshot
path: |
bin/*.tar.gz
bin/*.txt
bin/*.yaml

@@ -9,13 +9,25 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Unshallow
run: git fetch --prune --unshallow
- name: Set up Qemu
uses: docker/setup-qemu-action@v3
- name: Set up Go
uses: actions/setup-go@v1
uses: actions/setup-go@v5
with:
go-version: 1.14.x
go-version: 1.22.x
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Release with GoReleaser
run: make release
env:

@@ -0,0 +1,48 @@
name: test-and-snapshot
on:
push:
branches:
- master
pull_request:
branches:
- master
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.22.x
- name: Test
run: make test
snapshot:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Qemu
uses: docker/setup-qemu-action@v3
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: 1.22.x
- name: Build release snapshot
run: make snapshot
- name: Archive release snapshot
uses: actions/upload-artifact@v4
with:
name: release-snapshot
path: |
bin/*.tar.gz
bin/*.txt
bin/*.yaml

@@ -8,16 +8,55 @@ builds:
- darwin
- windows
goarch:
- "386"
- amd64
- arm
- arm64
- mips64le
flags:
- -v
ldflags: |
-X github.com/prometheus/common/version.Version={{.Env.APP_VERSION}}
-X github.com/prometheus/common/version.Revision={{.Env.APP_REVISION}}
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Commit}}
-X github.com/prometheus/common/version.Branch={{.Env.APP_BRANCH}}
-X github.com/prometheus/common/version.BuildUser={{.Env.APP_USER}}@{{.Env.APP_HOST}}
-X github.com/prometheus/common/version.BuildDate={{.Env.APP_BUILD_DATE}}
-X github.com/prometheus/common/version.BuildDate={{.Date}}
release:
github:
owner: ribbybibby
name: ssl_exporter
dockers:
- image_templates:
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-amd64"
dockerfile: Dockerfile
use: buildx
build_flag_templates:
- "--pull"
- "--label=org.opencontainers.image.created={{.Date}}"
- "--label=org.opencontainers.image.name={{.ProjectName}}"
- "--label=org.opencontainers.image.revision={{.FullCommit}}"
- "--label=org.opencontainers.image.version={{.Version}}"
- "--label=org.opencontainers.image.source={{.GitURL}}"
- "--platform=linux/amd64"
- image_templates:
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-arm64"
dockerfile: Dockerfile
use: buildx
build_flag_templates:
- "--pull"
- "--label=org.opencontainers.image.created={{.Date}}"
- "--label=org.opencontainers.image.name={{.ProjectName}}"
- "--label=org.opencontainers.image.revision={{.FullCommit}}"
- "--label=org.opencontainers.image.version={{.Version}}"
- "--label=org.opencontainers.image.source={{.GitURL}}"
- "--platform=linux/arm64"
goarch: arm64
docker_manifests:
- name_template: "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}"
image_templates:
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-amd64"
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-arm64"
- name_template: "{{.Env.APP_DOCKER_IMAGE_NAME}}:latest"
image_templates:
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-amd64"
- "{{.Env.APP_DOCKER_IMAGE_NAME}}:{{.Version}}-arm64"

@@ -1,20 +1,16 @@
FROM golang:1.14-stretch AS build
ADD . /tmp/ssl_exporter
RUN cd /tmp/ssl_exporter && \
echo "ssl:*:100:ssl" > group && \
echo "ssl:*:100:100::/:/ssl_exporter" > passwd && \
make
FROM alpine:3.15 as build
RUN apk --update add ca-certificates
RUN echo "ssl:*:100:ssl" > /tmp/group && \
echo "ssl:*:100:100::/:/ssl_exporter" > /tmp/passwd
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /tmp/ssl_exporter/group \
/tmp/ssl_exporter/passwd \
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=build /tmp/group \
/tmp/passwd \
/etc/
COPY --from=build /tmp/ssl_exporter/ssl_exporter /
COPY ssl_exporter /
USER ssl:ssl
EXPOSE 9219/tcp

Dockerfile.local (new file, 21 lines)

@@ -0,0 +1,21 @@
FROM golang:1.18-buster AS build
ADD . /tmp/ssl_exporter
RUN cd /tmp/ssl_exporter && \
echo "ssl:*:100:ssl" > group && \
echo "ssl:*:100:100::/:/ssl_exporter" > passwd && \
make
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /tmp/ssl_exporter/group \
/tmp/ssl_exporter/passwd \
/etc/
COPY --from=build /tmp/ssl_exporter/ssl_exporter /
USER ssl:ssl
EXPOSE 9219/tcp
ENTRYPOINT ["/ssl_exporter"]

@@ -8,18 +8,16 @@ DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
# Race detector is only supported on amd64.
RACE := $(shell test $$(go env GOARCH) != "amd64" || (echo "-race"))
export APP_HOST ?= $(shell hostname)
export APP_BRANCH ?= $(shell git describe --all --contains --dirty HEAD)
export APP_VERSION := $(shell cat VERSION)
export APP_REVISION := $(shell git rev-parse HEAD)
export APP_USER := $(shell id -u --name)
export APP_BUILD_DATE := $(shell date '+%Y%m%d-%H:%M:%S')
export APP_HOST ?= $(shell hostname)
export APP_BRANCH ?= $(shell git describe --all --contains --dirty HEAD)
export APP_USER := $(shell id -u --name)
export APP_DOCKER_IMAGE_NAME := ribbybibby/$(DOCKER_IMAGE_NAME)
all: clean format vet build test
style:
@echo ">> checking code style"
@! gofmt -s -d $(shell find . -path ./vendor -prune -o -name '*.go' -print) | grep '^'
@! gofmt -s -d . | grep '^'
test:
@echo ">> running tests"
@@ -36,20 +34,20 @@ vet:
build:
@echo ">> building binary"
@CGO_ENABLED=0 go build -v \
-ldflags "-X github.com/prometheus/common/version.Version=$(APP_VERSION) \
-X github.com/prometheus/common/version.Revision=$(APP_REVISION) \
-ldflags "-X github.com/prometheus/common/version.Version=dev \
-X github.com/prometheus/common/version.Revision=$(shell git rev-parse HEAD) \
-X github.com/prometheus/common/version.Branch=$(APP_BRANCH) \
-X github.com/prometheus/common/version.BuildUser=$(APP_USER)@$(APP_HOST) \
-X github.com/prometheus/common/version.BuildDate=$(APP_BUILD_DATE)\
-X github.com/prometheus/common/version.BuildDate=$(shell date '+%Y%m%d-%H:%M:%S') \
" \
-o $(BIN_NAME) .
docker:
@echo ">> building docker image"
@docker build -t "$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" .
@docker build -t "$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" -f Dockerfile.local .
$(GOPATH)/bin/goreleaser:
@curl -sfL https://install.goreleaser.com/github.com/goreleaser/goreleaser.sh | BINDIR=$(GOPATH)/bin sh
@go install github.com/goreleaser/goreleaser@v1.2.2
snapshot: $(GOPATH)/bin/goreleaser
@echo ">> building snapshot"
@@ -63,4 +61,4 @@ clean:
@rm -Rf $(BIN_DIR)
@rm -Rf $(BIN_NAME)
.PHONY: all style test format vet build docker snapshot release clean
.PHONY: all style test format vet build docker snapshot release clean

README.md (333 lines changed)

@@ -1,44 +1,15 @@
# SSL Certificate Exporter
The [blackbox_exporter](https://github.com/prometheus/blackbox_exporter) allows
you to test the expiry date of a certificate as part of its HTTP(S) probe -
which is great. It doesn't, however, tell you which certificate in the chain is
nearing expiry or give you any other information that might be useful when
sending alerts.
Exports metrics for certificates collected from various sources:
- [TCP probes](#tcp)
- [HTTPS probes](#https)
- [PEM files](#file)
- [Remote PEM files](#http_file)
- [Kubernetes secrets](#kubernetes)
- [Kubeconfig files](#kubeconfig)
For instance, there's a definite value in knowing, upon first receiving an
alert, if it's a certificate you manage directly or one further up the chain.
It's also not always necessarily clear from the address you're polling what kind
of certificate renewal you're looking at. Is it a Let's Encrypt, in which case
it should be handled by automation? Or your organisation's wildcard? Maybe the
domain is managed by a third-party and you need to submit a ticket to get it
renewed.
Whatever it is, the SSL exporter gives you visibility over those dimensions at
the point at which you receive an alert. It also allows you to produce more
meaningful visualisations and consoles.
## Table of Contents
- [SSL Certificate Exporter](#ssl-certificate-exporter)
- [Table of Contents](#table-of-contents)
- [Building](#building)
- [Docker](#docker)
- [Release process](#release-process)
- [Usage](#usage)
- [Metrics](#metrics)
- [Configuration](#configuration)
- [Configuration file](#configuration-file)
- [&lt;module&gt;](#module)
- [&lt;tls_config&gt;](#tls_config)
- [&lt;https_probe&gt;](#https_probe)
- [&lt;tcp_probe&gt;](#tcp_probe)
- [Example Queries](#example-queries)
- [Peer Cerificates vs Verified Chain Certificates](#peer-cerificates-vs-verified-chain-certificates)
- [Proxying](#proxying)
- [Grafana](#grafana)
Created by [gh-md-toc](https://github.com/ekalinin/github-markdown-toc)
The metrics are labelled with fields from the certificate, which allows for
informational dashboards and flexible alert routing.
## Building
@@ -47,21 +18,19 @@ Created by [gh-md-toc](https://github.com/ekalinin/github-markdown-toc)
Similarly to the blackbox_exporter, visiting
[http://localhost:9219/probe?target=example.com:443](http://localhost:9219/probe?target=example.com:443)
will return certificate metrics for example.com. The `ssl_tls_connect_success`
will return certificate metrics for example.com. The `ssl_probe_success`
metric indicates if the probe has been successful.
### Docker
docker pull ribbybibby/ssl-exporter
docker run -p 9219:9219 ribbybibby/ssl-exporter:latest <flags>
### Release process
- Update the `VERSION` file in this repository and commit to master
- [This github action](.github/workflows/release.yaml) will add a changelog and
upload binaries in response to a release being created in Github
- Dockerhub will build and tag a new container image in response to tags of the
format `/^v[0-9.]+$/`
- Create a release in Github with a semver tag and GH actions will:
- Add a changelog
- Upload binaries
- Build and push a Docker image
## Usage
@@ -88,18 +57,32 @@ Flags:
## Metrics
| Metric | Meaning | Labels |
| ---------------------------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
| ssl_cert_not_after | The date after which a peer certificate expires. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou |
| ssl_cert_not_before | The date before which a peer certificate is not valid. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou |
| ssl_prober | The prober used by the exporter to connect to the target. Boolean. | prober |
| ssl_tls_connect_success | Was the TLS connection successful? Boolean. | |
| ssl_tls_version_info | The TLS version used. Always 1. | version |
| ssl_verified_cert_not_after | The date after which a certificate in the verified chain expires. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou |
| ssl_verified_cert_not_before | The date before which a certificate in the verified chain is not valid. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou |
| Metric | Meaning | Labels | Probers |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------- | ---------- |
| ssl_cert_not_after | The date after which a peer certificate expires. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https |
| ssl_cert_not_before | The date before which a peer certificate is not valid. Expressed as a Unix Epoch Time. | serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https |
| ssl_file_cert_not_after | The date after which a certificate found by the file prober expires. Expressed as a Unix Epoch Time. | file, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | file |
| ssl_file_cert_not_before | The date before which a certificate found by the file prober is not valid. Expressed as a Unix Epoch Time. | file, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | file |
| ssl_kubernetes_cert_not_after | The date after which a certificate found by the kubernetes prober expires. Expressed as a Unix Epoch Time. | namespace, secret, key, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubernetes |
| ssl_kubernetes_cert_not_before | The date before which a certificate found by the kubernetes prober is not valid. Expressed as a Unix Epoch Time. | namespace, secret, key, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubernetes |
| ssl_kubeconfig_cert_not_after | The date after which a certificate found by the kubeconfig prober expires. Expressed as a Unix Epoch Time. | kubeconfig, name, type, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubeconfig |
| ssl_kubeconfig_cert_not_before | The date before which a certificate found by the kubeconfig prober is not valid. Expressed as a Unix Epoch Time. | kubeconfig, name, type, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | kubeconfig |
| ssl_ocsp_response_next_update | The nextUpdate value in the OCSP response. Expressed as a Unix Epoch Time | | tcp, https |
| ssl_ocsp_response_produced_at | The producedAt value in the OCSP response. Expressed as a Unix Epoch Time | | tcp, https |
| ssl_ocsp_response_revoked_at | The revocationTime value in the OCSP response. Expressed as a Unix Epoch Time | | tcp, https |
| ssl_ocsp_response_status | The status in the OCSP response. 0=Good 1=Revoked 2=Unknown | | tcp, https |
| ssl_ocsp_response_stapled | Does the connection state contain a stapled OCSP response? Boolean. | | tcp, https |
| ssl_ocsp_response_this_update | The thisUpdate value in the OCSP response. Expressed as a Unix Epoch Time | | tcp, https |
| ssl_probe_success | Was the probe successful? Boolean. | | all |
| ssl_prober | The prober used by the exporter to connect to the target. Boolean. | prober | all |
| ssl_tls_version_info | The TLS version used. Always 1. | version | tcp, https |
| ssl_verified_cert_not_after | The date after which a certificate in the verified chain expires. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https |
| ssl_verified_cert_not_before | The date before which a certificate in the verified chain is not valid. Expressed as a Unix Epoch Time. | chain_no, serial_no, issuer_cn, cn, dnsnames, ips, emails, ou | tcp, https |
## Configuration
### TCP
Just like with the blackbox_exporter, you should pass the targets to a single
instance of the exporter in a scrape config with a clever bit of relabelling.
This allows you to leverage service discovery and keeps configuration
@@ -122,8 +105,11 @@ scrape_configs:
replacement: 127.0.0.1:9219 # SSL exporter.
```
By default the exporter will make a TCP connection to the target. You can change
this to https by setting the module parameter:
### HTTPS
By default the exporter will make a TCP connection to the target. This will be
suitable for most cases but if you want to take advantage of http proxying you
can use a HTTPS client by setting the `https` module parameter:
```yml
scrape_configs:
@@ -144,36 +130,221 @@ scrape_configs:
replacement: 127.0.0.1:9219
```
### Configuration file
This will use proxy servers discovered by the environment variables `HTTP_PROXY`,
`HTTPS_PROXY` and `ALL_PROXY`. Or, you can set the `https.proxy_url` option in the module
configuration.
The latter takes precedence.
### File
The `file` prober exports `ssl_file_cert_not_after` and
`ssl_file_cert_not_before` for PEM encoded certificates found in local files.
Files local to the exporter can be scraped by providing them as the target
parameter:
```
curl "localhost:9219/probe?module=file&target=/etc/ssl/cert.pem"
```
The target parameter supports globbing (as provided by the
[doublestar](https://github.com/bmatcuk/doublestar) package),
which allows you to capture multiple files at once:
```
curl "localhost:9219/probe?module=file&target=/etc/ssl/**/*.pem"
```
One specific usage of this prober could be to run the exporter as a DaemonSet in
Kubernetes and then scrape each instance to check the expiry of certificates on
each node:
```yml
scrape_configs:
- job_name: "ssl-kubernetes-file"
metrics_path: /probe
params:
module: ["file"]
target: ["/etc/kubernetes/**/*.crt"]
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: ^(.*):(.*)$
target_label: __address__
replacement: ${1}:9219
```
### HTTP File
The `http_file` prober exports `ssl_cert_not_after` and
`ssl_cert_not_before` for PEM encoded certificates found at the
specified URL.
```
curl "localhost:9219/probe?module=http_file&target=https://www.paypalobjects.com/marketing/web/logos/paypal_com.pem"
```
Here's a sample Prometheus configuration:
```yml
scrape_configs:
- job_name: 'ssl-http-files'
metrics_path: /probe
params:
module: ["http_file"]
static_configs:
- targets:
- 'https://www.paypalobjects.com/marketing/web/logos/paypal_com.pem'
- 'https://d3frv9g52qce38.cloudfront.net/amazondefault/amazon_web_services_inc_2024.pem'
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: 127.0.0.1:9219
```
For proxying to the target resource, this prober will use proxy servers
discovered in the environment variables `HTTP_PROXY`, `HTTPS_PROXY` and
`ALL_PROXY`. Or, you can set the `http_file.proxy_url` option in the module
configuration.
The latter takes precedence.
### Kubernetes
The `kubernetes` prober exports `ssl_kubernetes_cert_not_after` and
`ssl_kubernetes_cert_not_before` for PEM encoded certificates found in secrets
of type `kubernetes.io/tls`.
Provide the namespace and name of the secret in the form `<namespace>/<name>` as
the target:
```
curl "localhost:9219/probe?module=kubernetes&target=kube-system/secret-name"
```
Both the namespace and name portions of the target support glob matching (as provided by the
[doublestar](https://github.com/bmatcuk/doublestar) package):
```
curl "localhost:9219/probe?module=kubernetes&target=kube-system/*"
```
```
curl "localhost:9219/probe?module=kubernetes&target=*/*"
```
The exporter retrieves credentials and context configuration from the following
sources in the following order:
- The `kubeconfig` path in the module configuration
- The `$KUBECONFIG` environment variable
- The default configuration file (`$HOME/.kube/config`)
- The in-cluster environment, if running in a pod
```yml
- job_name: "ssl-kubernetes"
metrics_path: /probe
params:
module: ["kubernetes"]
static_configs:
- targets:
- "test-namespace/nginx-cert"
relabel_configs:
- source_labels: [ __address__ ]
target_label: __param_target
- source_labels: [ __param_target ]
target_label: instance
- target_label: __address__
replacement: 127.0.0.1:9219
```
### Kubeconfig
The `kubeconfig` prober exports `ssl_kubeconfig_cert_not_after` and
`ssl_kubeconfig_cert_not_before` for PEM encoded certificates found in the specified kubeconfig file.
Kubeconfigs local to the exporter can be scraped by providing them as the target
parameter:
```
curl "localhost:9219/probe?module=kubeconfig&target=/etc/kubernetes/admin.conf"
```
One specific usage of this prober could be to run the exporter as a DaemonSet in
Kubernetes and then scrape each instance to check the expiry of certificates on
each node:
```yml
scrape_configs:
- job_name: "ssl-kubernetes-kubeconfig"
metrics_path: /probe
params:
module: ["kubeconfig"]
target: ["/etc/kubernetes/admin.conf"]
kubernetes_sd_configs:
- role: node
relabel_configs:
- source_labels: [__address__]
regex: ^(.*):(.*)$
target_label: __address__
replacement: ${1}:9219
```
## Configuration file
You can provide further module configuration by providing the path to a
configuration file with `--config.file`. The file is written in yaml format,
defined by the schema below.
```
# The default module to use. If omitted, then the module must be provided by the
# 'module' query parameter
default_module: <string>
# Module configuration
modules: [<module>]
```
#### \<module\>
### \<module\>
```
# The protocol over which the probe will take place (https, tcp)
# The type of probe (https, tcp, file, kubernetes, kubeconfig)
prober: <prober_string>
# The probe target. If set, then the 'target' query parameter is ignored.
# If omitted, then the 'target' query parameter is required.
target: <string>
# How long the probe will wait before giving up.
[ timeout: <duration> ]
# Configuration for TLS
[ tls_config: <tls_config> ]
# The specific probe configuration
[ https: <https_probe> ]
[ tcp: <tcp_probe> ]
[ kubernetes: <kubernetes_probe> ]
[ http_file: <http_file_probe> ]
```
#### <tls_config>
### <tls_config>
```
# Disable target certificate validation.
[ insecure_skip_verify: <boolean> | default = false ]
# Configure TLS renegotiation support.
# Valid options: never, once, freely
[ renegotiation: <string> | default = never ]
# The CA cert to use for the targets.
[ ca_file: <filename> ]
@@ -187,20 +358,34 @@ prober: <prober_string>
[ server_name: <string> ]
```
#### <https_probe>
### <https_probe>
```
# HTTP proxy server to use to connect to the targets.
[ proxy_url: <string> ]
```
#### <tcp_probe>
### <tcp_probe>
```
# Use the STARTTLS command before starting TLS for those protocols that support it (smtp, ftp, imap)
# Use the STARTTLS command before starting TLS for those protocols that support it (smtp, ftp, imap, pop3, postgres)
[ starttls: <string> ]
```
### <kubernetes_probe>
```
# The path of a kubeconfig file to configure the probe
[ kubeconfig: <string> ]
```
### <http_file_probe>
```
# HTTP proxy server to use to connect to the targets.
[ proxy_url: <string> ]
```
## Example Queries
Certificates that expire within 7 days:
@@ -228,13 +413,13 @@ Number of certificates presented by the server:
count(ssl_cert_not_after) by (instance)
```
Identify instances that have failed to create a valid SSL connection:
Identify failed probes:
```
ssl_tls_connect_success == 0
ssl_probe_success == 0
```
## Peer Cerificates vs Verified Chain Certificates
## Peer Certificates vs Verified Chain Certificates
Metrics are exported for the `NotAfter` and `NotBefore` fields for peer
certificates as well as for the verified chain that is
@@ -268,21 +453,7 @@ of trust between the exporter and the target. Genuine clients may hold different
root certs than the exporter and therefore have different verified chains of
trust.
## Proxying
The `https` prober supports the use of proxy servers discovered by the
environment variables `HTTP_PROXY`, `HTTPS_PROXY` and `ALL_PROXY`.
For instance:
$ export HTTPS_PROXY=localhost:8888
$ ./ssl_exporter
Or, you can set the `proxy_url` option in the module.
The latter takes precedence.
## Grafana
You can find a simple dashboard [here](grafana/dashboard.json) that tracks
You can find a simple dashboard [here](contrib/grafana/dashboard.json) that tracks
certificate expiration dates and target connection errors.

@@ -1 +0,0 @@
2.1.1

@@ -1,30 +1,48 @@
package config
import (
"crypto/tls"
"fmt"
"net/url"
"os"
"time"
"github.com/prometheus/common/config"
pconfig "github.com/prometheus/common/config"
yaml "gopkg.in/yaml.v3"
)
var (
// DefaultConfig is the default configuration that is used when no
// configuration file is provided
DefaultConfig = &Config{
map[string]Module{
"tcp": Module{
DefaultModule: "tcp",
Modules: map[string]Module{
"tcp": {
Prober: "tcp",
},
"http": Module{
"http": {
Prober: "https",
},
"https": Module{
"https": {
Prober: "https",
},
"file": {
Prober: "file",
},
"http_file": {
Prober: "http_file",
},
"kubernetes": {
Prober: "kubernetes",
},
"kubeconfig": {
Prober: "kubeconfig",
},
},
}
)
// LoadConfig loads configuration from a file
func LoadConfig(confFile string) (*Config, error) {
var c *Config
@@ -41,28 +59,98 @@ func LoadConfig(confFile string) (*Config, error) {
}
return c, nil
}
// Config configures the exporter
type Config struct {
Modules map[string]Module `yaml:"modules"`
DefaultModule string `yaml:"default_module"`
Modules map[string]Module `yaml:"modules"`
}
// Module configures a prober
type Module struct {
Prober string `yaml:"prober,omitempty"`
TLSConfig config.TLSConfig `yaml:"tls_config,omitempty"`
HTTPS HTTPSProbe `yaml:"https,omitempty"`
TCP TCPProbe `yaml:"tcp,omitempty"`
Prober string `yaml:"prober,omitempty"`
Target string `yaml:"target,omitempty"`
Timeout time.Duration `yaml:"timeout,omitempty"`
TLSConfig TLSConfig `yaml:"tls_config,omitempty"`
HTTPS HTTPSProbe `yaml:"https,omitempty"`
TCP TCPProbe `yaml:"tcp,omitempty"`
Kubernetes KubernetesProbe `yaml:"kubernetes,omitempty"`
HTTPFile HTTPFileProbe `yaml:"http_file,omitempty"`
}
// TLSConfig is a superset of config.TLSConfig that supports TLS renegotiation
type TLSConfig struct {
CAFile string `yaml:"ca_file,omitempty"`
CertFile string `yaml:"cert_file,omitempty"`
KeyFile string `yaml:"key_file,omitempty"`
ServerName string `yaml:"server_name,omitempty"`
InsecureSkipVerify bool `yaml:"insecure_skip_verify"`
// Renegotiation controls what types of TLS renegotiation are supported.
// Supported values: never (default), once, freely.
Renegotiation renegotiation `yaml:"renegotiation,omitempty"`
}
type renegotiation tls.RenegotiationSupport
func (r *renegotiation) UnmarshalYAML(unmarshal func(interface{}) error) error {
var v string
if err := unmarshal(&v); err != nil {
return err
}
switch v {
case "", "never":
*r = renegotiation(tls.RenegotiateNever)
case "once":
*r = renegotiation(tls.RenegotiateOnceAsClient)
case "freely":
*r = renegotiation(tls.RenegotiateFreelyAsClient)
default:
return fmt.Errorf("unsupported TLS renegotiation type %s", v)
}
return nil
}
// NewTLSConfig creates a new tls.Config from the given TLSConfig,
// plus our local extensions
func NewTLSConfig(cfg *TLSConfig) (*tls.Config, error) {
tlsConfig, err := pconfig.NewTLSConfig(&pconfig.TLSConfig{
CAFile: cfg.CAFile,
CertFile: cfg.CertFile,
KeyFile: cfg.KeyFile,
ServerName: cfg.ServerName,
InsecureSkipVerify: cfg.InsecureSkipVerify,
})
if err != nil {
return nil, err
}
tlsConfig.Renegotiation = tls.RenegotiationSupport(cfg.Renegotiation)
return tlsConfig, nil
}
// TCPProbe configures a tcp probe
type TCPProbe struct {
StartTLS string `yaml:"starttls,omitempty"`
}
// HTTPSProbe configures a https probe
type HTTPSProbe struct {
ProxyURL URL `yaml:"proxy_url,omitempty"`
}
// KubernetesProbe configures a kubernetes probe
type KubernetesProbe struct {
Kubeconfig string `yaml:"kubeconfig,omitempty"`
}
// HTTPFileProbe configures a http_file probe
type HTTPFileProbe struct {
ProxyURL URL `yaml:"proxy_url,omitempty"`
}
// URL is a custom URL type that allows validation at configuration load time
type URL struct {
*url.URL

@@ -12,7 +12,7 @@
}
]
},
"description": "Shows certitificate expiration times, as well as failed ssl connects",
"description": "Shows certificate expiration times, as well as failed ssl connects",
"editable": true,
"gnetId": 11279,
"graphTooltip": 0,
@@ -240,7 +240,7 @@
"steppedLine": false,
"targets": [
{
"expr": "(up{job=~\"$job\", instance=~\"$instance\"} == 0 or ssl_tls_connect_success{job=~\"$job\", instance=~\"$instance\"} == 0)^0",
"expr": "(up{job=~\"$job\", instance=~\"$instance\"} == 0 or ssl_probe_success{job=~\"$job\", instance=~\"$instance\"} == 0)^0",
"format": "time_series",
"instant": false,
"legendFormat": "{{instance}}",
@@ -407,7 +407,7 @@
],
"targets": [
{
"expr": "ssl_tls_connect_success{instance=~\"$instance\",job=~\"$job\"} == 0",
"expr": "ssl_probe_success{instance=~\"$instance\",job=~\"$job\"} == 0",
"format": "table",
"instant": true,
"intervalFactor": 1,
@@ -585,14 +585,14 @@
"value": ["$__all"]
},
"datasource": "Prometheus",
"definition": "label_values(ssl_tls_connect_success, job)",
"definition": "label_values(ssl_probe_success, job)",
"hide": 0,
"includeAll": true,
"label": "Job",
"multi": true,
"name": "job",
"options": [],
"query": "label_values(ssl_tls_connect_success, job)",
"query": "label_values(ssl_probe_success, job)",
"refresh": 1,
"regex": "",
"skipUrlSync": false,

@@ -17,4 +17,35 @@ scrape_configs:
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: 127.0.0.1:9219 # SSL exporter.
replacement: 127.0.0.1:9219 # SSL exporter.
- job_name: 'ssl-files'
metrics_path: /probe
params:
module: ["file"]
target: ["/etc/ssl/**/*.pem"]
static_configs:
- targets:
- 127.0.0.1:9219
- job_name: 'ssl-kubernetes-secrets'
metrics_path: /probe
params:
module: ["kubernetes"]
target: ["kube-system/*"]
static_configs:
- targets:
- 127.0.0.1:9219
- job_name: 'ssl-http-files'
metrics_path: /probe
params:
module: ["http_file"]
static_configs:
- targets:
- 'https://www.paypalobjects.com/marketing/web/logos/paypal_com.pem'
- 'https://d3frv9g52qce38.cloudfront.net/amazondefault/amazon_web_services_inc_2024.pem'
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: 127.0.0.1:9219

@@ -1,3 +1,4 @@
default_module: https
modules:
https:
prober: https
@@ -5,10 +6,17 @@ modules:
prober: https
tls_config:
insecure_skip_verify: true
https_renegotiation:
prober: https
tls_config:
renegotiation: freely
https_proxy:
prober: https
https:
proxy_url: "socks5://localhost:8123"
https_timeout:
prober: https
timeout: 3s
tcp:
prober: tcp
tcp_servername:
@@ -25,3 +33,22 @@ modules:
prober: tcp
tcp:
starttls: smtp
file:
prober: file
file_ca_certificates:
prober: file
target: /etc/ssl/certs/ca-certificates.crt
http_file:
prober: http_file
http_file_proxy:
prober: http_file
http_file:
proxy_url: "socks5://localhost:8123"
kubernetes:
prober: kubernetes
kubernetes_kubeconfig:
prober: kubernetes
kubernetes:
kubeconfig: /root/.kube/config
kubeconfig:
prober: kubeconfig

go.mod (85 lines changed)

@@ -1,21 +1,72 @@
module github.com/ribbybibby/ssl_exporter
module github.com/ribbybibby/ssl_exporter/v2
require (
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d // indirect
github.com/golang/protobuf v1.4.2 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/prometheus/client_golang v1.6.0
github.com/prometheus/common v0.10.0
github.com/prometheus/procfs v0.1.3 // indirect
github.com/sirupsen/logrus v1.6.0 // indirect
golang.org/x/net v0.0.0-20200602114024-627f9648deb9 // indirect
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1 // indirect
golang.org/x/text v0.3.3 // indirect
google.golang.org/protobuf v1.24.0 // indirect
gopkg.in/alecthomas/kingpin.v2 v2.2.6
gopkg.in/yaml.v2 v2.3.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776
github.com/alecthomas/kingpin/v2 v2.4.0
github.com/bmatcuk/doublestar/v2 v2.0.4
github.com/go-kit/log v0.2.1
github.com/prometheus/client_golang v1.19.1
github.com/prometheus/client_model v0.6.1
github.com/prometheus/common v0.53.0
golang.org/x/crypto v0.25.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.30.0
k8s.io/apimachinery v0.30.0
k8s.io/client-go v1.5.2
)
go 1.14
require (
github.com/alecthomas/units v0.0.0-20231202071711-9a357b53e9c9 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.12.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/procfs v0.14.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
golang.org/x/net v0.24.0 // indirect
golang.org/x/oauth2 v0.19.0 // indirect
golang.org/x/sys v0.22.0 // indirect
golang.org/x/term v0.22.0 // indirect
golang.org/x/text v0.16.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/klog/v2 v2.120.1 // indirect
k8s.io/kube-openapi v0.0.0-20240423202451-8948a665c108 // indirect
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)
replace (
k8s.io/api => k8s.io/api v0.29.0
k8s.io/apimachinery => k8s.io/apimachinery v0.29.0
k8s.io/client-go => k8s.io/client-go v0.29.0
)
go 1.22
toolchain go1.22.1
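A side effect of the new /v2 module path worth noting: downstream Go code has to import the v2 paths, as the updated files in this change do. A minimal, hypothetical consumer, assuming only the config.Module type that appears elsewhere in this diff:

```go
package main

import (
	"fmt"

	// The module path now carries the /v2 major-version suffix, so importers
	// must reference it explicitly.
	"github.com/ribbybibby/ssl_exporter/v2/config"
)

func main() {
	var m config.Module // per-module configuration type used by the probers
	fmt.Printf("%+v\n", m)
}
```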

go.sum

@@ -1,181 +1,173 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc h1:cAKDfWh5VpdgMhJosfJnn5/FoN2SRZ4p7fJNX58YPaU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf h1:qet1QNfXsQxTZqLG4oE62mJzwPIB8+Tee4RNCL9ulrY=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d h1:UQZhZ2O0vMHr2cI+DC1Mbh0TJxzA3RcLoMsFw+aXw7E=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/alecthomas/kingpin/v2 v2.4.0 h1:f48lwail6p8zpO1bC4TxtqACaGqHYA22qkHjHpqDjYY=
github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20231202071711-9a357b53e9c9 h1:ez/4by2iGztzR4L0zgAOR8lTQK9VlyBVVd7G4omaOQs=
github.com/alecthomas/units v0.0.0-20231202071711-9a357b53e9c9/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/bmatcuk/doublestar/v2 v2.0.4 h1:6I6oUiT/sU27eE2OFcWqBhL1SwjyvQuOssxT4a1yidI=
github.com/bmatcuk/doublestar/v2 v2.0.4/go.mod h1:QMmcs3H2AUQICWhfzLXz+IYln8lRQmTZRptLie8RgRw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0 h1:xsAVV57WRhGj6kEIi8ReJzQlHHqcBYCElAvkovg3B/4=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/emicklei/go-restful/v3 v3.12.0 h1:y2DdzBAURM29NFF94q6RaY4vjIH1rtwDapwQtU84iWk=
github.com/emicklei/go-restful/v3 v3.12.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 h1:0VpGH+cDhbDtdcweoyCVsF3fhN8kejK6rFe/2FFX2nU=
github.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49/go.mod h1:BkkQ4L1KS1xMt2aWSPStnn55ChGC0DPOn2FQYj+f25M=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=
github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3 h1:CE8S1cTafDpPvMhIxNJKvHsGVBgn1xWYf1NbHQhywc8=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/onsi/ginkgo/v2 v2.13.0 h1:0jY9lJquiL8fcf3M4LAXN5aMlS/b2BV86HFFPCPMgE4=
github.com/onsi/ginkgo/v2 v2.13.0/go.mod h1:TE309ZR8s5FsKKpuB1YAQYBzCaAfUgatB/xlT/ETL/o=
github.com/onsi/gomega v1.29.0 h1:KIA/t2t5UBzoirT4H9tsML45GEbo3ouUnBHsCfD2tVg=
github.com/onsi/gomega v1.29.0/go.mod h1:9sxs+SwGrKI0+PWe4Fxa9tFQQBG5xSsSbMXOI8PPpoQ=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.6.0 h1:YVPodQOcK15POxhgARIvnDRVpLcuK8mglnMrWfyrw6A=
github.com/prometheus/client_golang v1.6.0/go.mod h1:ZLOG9ck3JLRdB5MgO8f+lLTe83AXG6ro35rLTxvnIl4=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.3 h1:F0+tqvhOksq22sc6iCHF5WGlWjdwj92p0udFh1VFBS8=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.53.0 h1:U2pL9w9nmJwJDa4qqLQ3ZaePJ6ZTwt7cMD3AG3+aLCE=
github.com/prometheus/common v0.53.0/go.mod h1:BrxBKv3FWBIGXw89Mg1AeBq7FSyRzXWI3l3e7W3RN5U=
github.com/prometheus/procfs v0.14.0 h1:Lw4VdGGoKEZilJsayHf0B+9YgLGREba2C6xr+Fdfq6s=
github.com/prometheus/procfs v0.14.0/go.mod h1:XL+Iwz8k8ZabyZfMFHPiilCniixqQarAy5Mu67pHlNQ=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9 h1:pNX+40auqi2JqRfOP1akLGtYcn15TUbkhwuCO3foqqM=
golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.24.0 h1:1PcaxkF854Fu3+lvBIx5SYn9wRlBzzcnHZSiaFFAb0w=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/oauth2 v0.19.0 h1:9+E/EZBCbTLNrbN35fHv/a/d/mOBatymz1zbtQrXpIg=
golang.org/x/oauth2 v0.19.0/go.mod h1:vYi7skDa1x015PmRRYZ7+s1cWyPgrPiSYRe4rnsexc8=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1 h1:ogLJMz+qpzav7lGMh10LMvAkM/fAoGlaiiHYiFYdm80=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.22.0 h1:BbsgPEJULsl2fV/AT3v15Mjva5yXKQDyKf+TbDz7QJk=
golang.org/x/term v0.22.0/go.mod h1:F3qCibpT5AMpCRfhfT53vVJwhLtIVHhB9XDjfFvnMI4=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d h1:vU5i/LfpvrRCpgM/VPfJLg5KjxD3E+hfT1SH+d9zLwg=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEGA=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776 h1:tQIYjPdBoyREyB9XMu+nnTclpTYkz2zFM+lzLJFO4gQ=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.29.0 h1:NiCdQMY1QOp1H8lfRyeEf8eOwV6+0xA6XEE44ohDX2A=
k8s.io/api v0.29.0/go.mod h1:sdVmXoz2Bo/cb77Pxi71IPTSErEW32xa4aXwKH7gfBA=
k8s.io/apimachinery v0.29.0 h1:+ACVktwyicPz0oc6MTMLwa2Pw3ouLAfAon1wPLtG48o=
k8s.io/apimachinery v0.29.0/go.mod h1:eVBxQ/cwiJxH58eK/jd/vAk4mrxmVlnpBH5J2GbMeis=
k8s.io/client-go v0.29.0 h1:KmlDtFcrdUzOYrBhXHgKw5ycWzc3ryPX5mQe0SkG3y8=
k8s.io/client-go v0.29.0/go.mod h1:yLkXH4HKMAywcrD82KMSmfYg2DlE8mepPR4JGSo5n38=
k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw=
k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20240423202451-8948a665c108 h1:Q8Z7VlGhcJgBHJHYugJ/K/7iB8a2eSxCyxdVjJp+lLY=
k8s.io/kube-openapi v0.0.0-20240423202451-8948a665c108/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98=
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22 h1:ao5hUqGhsqdm+bYbjH/pRkCs0unBGe9UyDahzs9zQzQ=
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

prober/file.go

@@ -0,0 +1,37 @@
package prober
import (
"context"
"fmt"
"github.com/bmatcuk/doublestar/v2"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
// ProbeFile collects certificate metrics from local files
func ProbeFile(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
errCh := make(chan error, 1)
go func() {
files, err := doublestar.Glob(target)
if err != nil {
errCh <- err
return
}
if len(files) == 0 {
errCh <- fmt.Errorf("No files found")
} else {
errCh <- collectFileMetrics(logger, files, registry)
}
}()
select {
case <-ctx.Done():
return fmt.Errorf("context timeout, ran out of time")
case err := <-errCh:
return err
}
}
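A short sketch of why the file prober pulls in bmatcuk/doublestar rather than relying on path/filepath.Glob: doublestar's ** matches across directory levels. The pattern below is the same glob used by the ssl-files job in the example scrape config; the snippet is illustrative only:

```go
package main

import (
	"fmt"

	"github.com/bmatcuk/doublestar/v2"
)

// Demonstrates the glob syntax ProbeFile accepts: ** descends into
// subdirectories, which the standard library's filepath.Glob does not.
func main() {
	matches, err := doublestar.Glob("/etc/ssl/**/*.pem")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		fmt.Println(m)
	}
}
```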

prober/file_test.go

@@ -0,0 +1,200 @@
package prober
import (
"context"
"crypto/x509"
"encoding/pem"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
"github.com/prometheus/client_golang/prometheus"
)
// TestProbeFile tests a file
func TestProbeFile(t *testing.T) {
cert, certFile, err := createTestFile("", "tls*.crt")
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(certFile)
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeFile(ctx, newTestLogger(), certFile, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkFileMetrics(cert, certFile, registry, t)
}
// TestProbeFileGlob tests matching a file with a glob
func TestProbeFileGlob(t *testing.T) {
cert, certFile, err := createTestFile("", "tls*.crt")
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(certFile)
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
glob := filepath.Dir(certFile) + "/*.crt"
if err := ProbeFile(ctx, newTestLogger(), glob, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkFileMetrics(cert, certFile, registry, t)
}
// TestProbeFileGlobDoubleStar tests matching a file with a ** glob
func TestProbeFileGlobDoubleStar(t *testing.T) {
tmpDir, err := ioutil.TempDir("", "testdir")
if err != nil {
t.Fatalf(err.Error())
}
cert, certFile, err := createTestFile(tmpDir, "tls*.crt")
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(certFile)
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
glob := filepath.Dir(filepath.Dir(certFile)) + "/**/*.crt"
if err := ProbeFile(ctx, newTestLogger(), glob, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkFileMetrics(cert, certFile, registry, t)
}
// TestProbeFileGlobDoubleStarMultiple tests matching multiple files with a ** glob
func TestProbeFileGlobDoubleStarMultiple(t *testing.T) {
tmpDir, err := ioutil.TempDir("", "testdir")
if err != nil {
t.Fatalf(err.Error())
}
defer os.RemoveAll(tmpDir)
tmpDir1, err := ioutil.TempDir(tmpDir, "testdir")
if err != nil {
t.Fatalf(err.Error())
}
cert1, certFile1, err := createTestFile(tmpDir1, "1*.crt")
if err != nil {
t.Fatalf(err.Error())
}
tmpDir2, err := ioutil.TempDir(tmpDir, "testdir")
if err != nil {
t.Fatalf(err.Error())
}
cert2, certFile2, err := createTestFile(tmpDir2, "2*.crt")
if err != nil {
t.Fatalf(err.Error())
}
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
glob := tmpDir + "/**/*.crt"
if err := ProbeFile(ctx, newTestLogger(), glob, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkFileMetrics(cert1, certFile1, registry, t)
checkFileMetrics(cert2, certFile2, registry, t)
}
// Create a certificate and write it to a file
func createTestFile(dir, filename string) (*x509.Certificate, string, error) {
certPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 1))
block, _ := pem.Decode([]byte(certPEM))
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return nil, "", err
}
tmpFile, err := ioutil.TempFile(dir, filename)
if err != nil {
return nil, tmpFile.Name(), err
}
if _, err := tmpFile.Write(certPEM); err != nil {
return nil, tmpFile.Name(), err
}
if err := tmpFile.Close(); err != nil {
return nil, tmpFile.Name(), err
}
return cert, tmpFile.Name(), nil
}
// Check metrics
func checkFileMetrics(cert *x509.Certificate, certFile string, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_file_cert_not_after",
LabelValues: map[string]string{
"file": certFile,
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_file_cert_not_before",
LabelValues: map[string]string{
"file": certFile,
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotBefore.Unix()),
},
}
checkRegistryResults(expectedResults, mfs, t)
}

prober/http_file.go

@@ -0,0 +1,60 @@
package prober
import (
"context"
"fmt"
"io"
"net/http"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
// ProbeHTTPFile collects certificate metrics from a remote file via http
func ProbeHTTPFile(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
proxy := http.ProxyFromEnvironment
if module.HTTPFile.ProxyURL.URL != nil {
proxy = http.ProxyURL(module.HTTPFile.ProxyURL.URL)
}
tlsConfig, err := config.NewTLSConfig(&module.TLSConfig)
if err != nil {
return fmt.Errorf("creating TLS config: %w", err)
}
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsConfig,
Proxy: proxy,
DisableKeepAlives: true,
},
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, target, nil)
if err != nil {
return fmt.Errorf("creating http request: %w", err)
}
req.Header.Set("User-Agent", userAgent)
resp, err := client.Do(req)
if err != nil {
return fmt.Errorf("making http request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected response code: %d", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("reading response body: %w", err)
}
certs, err := decodeCertificates(body)
if err != nil {
return fmt.Errorf("decoding certificates from response body: %w", err)
}
return collectCertificateMetrics(certs, registry)
}
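decodeCertificates is a helper defined elsewhere in the prober package; as a rough approximation of what such a helper does, the hypothetical parsePEMCertificates below walks the PEM blocks in the response body and parses each CERTIFICATE block (the real helper may accept additional PEM block types):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// parsePEMCertificates is an illustrative stand-in, not the exporter's code:
// it decodes consecutive PEM blocks and keeps the parsed certificates.
func parsePEMCertificates(data []byte) ([]*x509.Certificate, error) {
	var certs []*x509.Certificate
	for {
		block, rest := pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type == "CERTIFICATE" {
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				return nil, fmt.Errorf("parsing certificate: %w", err)
			}
			certs = append(certs, cert)
		}
		data = rest
	}
	if len(certs) == 0 {
		return nil, fmt.Errorf("no certificates found")
	}
	return certs, nil
}

func main() {
	// In the prober, the HTTP response body would be passed in here.
	_, _ = parsePEMCertificates(nil)
}
```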

prober/http_file_test.go

@@ -0,0 +1,112 @@
package prober
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
)
func TestProbeHTTPFile(t *testing.T) {
testcertPEM, _ := test.GenerateTestCertificate(time.Now().AddDate(0, 0, 1))
server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write(testcertPEM)
}))
server.Start()
defer server.Close()
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPFile(ctx, newTestLogger(), server.URL+"/file", config.Module{}, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(testcertPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
}
func TestProbeHTTPFile_NotCertificate(t *testing.T) {
server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("foobar"))
}))
server.Start()
defer server.Close()
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPFile(ctx, newTestLogger(), server.URL+"/file", config.Module{}, registry); err == nil {
t.Errorf("expected error but got nil")
}
}
func TestProbeHTTPFile_NotFound(t *testing.T) {
server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
}))
server.Start()
defer server.Close()
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPFile(ctx, newTestLogger(), server.URL+"/file", config.Module{}, registry); err == nil {
t.Errorf("expected error but got nil")
}
}
func TestProbeHTTPFileHTTPS(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write(certPEM)
})
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPFile(ctx, newTestLogger(), server.URL+"/file", module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
}


@@ -1,25 +1,32 @@
package prober
import (
"crypto/tls"
"context"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/prometheus/common/log"
"github.com/ribbybibby/ssl_exporter/config"
pconfig "github.com/prometheus/common/config"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/version"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
var userAgent = fmt.Sprintf("SSLExporter/%s", version.Version)
// ProbeHTTPS performs a https probe
func ProbeHTTPS(target string, module config.Module, timeout time.Duration) (*tls.ConnectionState, error) {
func ProbeHTTPS(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
tlsConfig, err := newTLSConfig("", registry, &module.TLSConfig)
if err != nil {
return err
}
if strings.HasPrefix(target, "http://") {
return nil, fmt.Errorf("Target is using http scheme: %s", target)
return fmt.Errorf("Target is using http scheme: %s", target)
}
if !strings.HasPrefix(target, "https://") {
@@ -28,12 +35,7 @@ func ProbeHTTPS(target string, module config.Module, timeout time.Duration) (*tl
targetURL, err := url.Parse(target)
if err != nil {
return nil, err
}
tlsConfig, err := pconfig.NewTLSConfig(&module.TLSConfig)
if err != nil {
return nil, err
return err
}
proxy := http.ProxyFromEnvironment
@@ -50,26 +52,31 @@ func ProbeHTTPS(target string, module config.Module, timeout time.Duration) (*tl
Proxy: proxy,
DisableKeepAlives: true,
},
Timeout: timeout,
}
// Issue a GET request to the target
resp, err := client.Get(targetURL.String())
request, err := http.NewRequest(http.MethodGet, targetURL.String(), nil)
if err != nil {
return nil, err
return err
}
request = request.WithContext(ctx)
request.Header.Set("User-Agent", userAgent)
resp, err := client.Do(request)
if err != nil {
return err
}
defer func() {
_, err := io.Copy(ioutil.Discard, resp.Body)
if err != nil {
log.Errorln(err)
level.Error(logger).Log("msg", err)
}
resp.Body.Close()
}()
// Check if the response from the target is encrypted
if resp.TLS == nil {
return nil, fmt.Errorf("The response from %s is unencrypted", targetURL.String())
return fmt.Errorf("The response from %s is unencrypted", targetURL.String())
}
return resp.TLS, nil
return nil
}
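The signature change above moves the timeout from http.Client.Timeout onto the caller-supplied context. A standalone sketch of that pattern using plain net/http (not the exporter's code; the User-Agent string here is a placeholder, the probe sets its own userAgent value):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// The deadline lives on the request's context, so whoever created the context
// controls how long the whole probe may take.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("User-Agent", "ssl-exporter-sketch") // placeholder value

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err) // a deadline overrun surfaces here as a context error
	}
	defer resp.Body.Close()

	// resp.TLS is non-nil only if the connection was encrypted, which is the
	// check ProbeHTTPS performs before returning.
	fmt.Println("encrypted:", resp.TLS != nil)
}
```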


@@ -1,9 +1,14 @@
package prober
import (
"bytes"
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"fmt"
"math/big"
"net/http"
"net/http/httptest"
"net/url"
@@ -11,14 +16,15 @@ import (
"testing"
"time"
pconfig "github.com/prometheus/common/config"
"github.com/ribbybibby/ssl_exporter/config"
"github.com/ribbybibby/ssl_exporter/test"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
"golang.org/x/crypto/ocsp"
)
// TestProbeHTTPS tests the typical case
func TestProbeHTTPS(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -28,18 +34,60 @@ func TestProbeHTTPS(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
state, err := ProbeHTTPS(server.URL, module, 5*time.Second)
if err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
if state == nil {
t.Fatalf("expected state but got nil")
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSTimeout tests that the https probe respects the timeout in the
// context
func TestProbeHTTPSTimeout(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(3 * time.Second)
fmt.Fprintln(w, "Hello world")
})
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err == nil {
t.Fatalf("Expected error but returned error was nil")
}
}
@@ -56,7 +104,7 @@ func TestProbeHTTPSInvalidName(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
@@ -67,7 +115,12 @@ func TestProbeHTTPSInvalidName(t *testing.T) {
t.Fatalf(err.Error())
}
if _, err := ProbeHTTPS("https://localhost:"+u.Port(), module, 5*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), "https://localhost:"+u.Port(), module, registry); err == nil {
t.Fatalf("expected error, but err was nil")
}
}
@@ -75,7 +128,7 @@ func TestProbeHTTPSInvalidName(t *testing.T) {
// TestProbeHTTPSNoScheme tests that the probe is successful when the scheme is
// omitted from the target. The scheme should be added by the prober.
func TestProbeHTTPSNoScheme(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -85,7 +138,7 @@ func TestProbeHTTPSNoScheme(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
@@ -96,15 +149,28 @@ func TestProbeHTTPSNoScheme(t *testing.T) {
t.Fatalf(err.Error())
}
if _, err := ProbeHTTPS(u.Host, module, 5*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), u.Host, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSServername tests that the probe is successful when the
// servername is provided in the TLS config
func TestProbeHTTPSServerName(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -119,16 +185,29 @@ func TestProbeHTTPSServerName(t *testing.T) {
}
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
ServerName: u.Hostname(),
},
}
if _, err := ProbeHTTPS("https://localhost:"+u.Port(), module, 5*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), "https://localhost:"+u.Port(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSHTTP tests that the prober fails when hitting a HTTP server
@@ -139,7 +218,12 @@ func TestProbeHTTPSHTTP(t *testing.T) {
server.Start()
defer server.Close()
if _, err := ProbeHTTPS(server.URL, config.Module{}, 5*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, config.Module{}, registry); err == nil {
t.Fatalf("expected error, but err was nil")
}
}
@@ -178,7 +262,7 @@ func TestProbeHTTPSClientAuth(t *testing.T) {
defer os.Remove(keyFile)
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
CertFile: certFile,
KeyFile: keyFile,
@@ -186,13 +270,22 @@ func TestProbeHTTPSClientAuth(t *testing.T) {
},
}
state, err := ProbeHTTPS(server.URL, module, 5*time.Second)
if err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
if state == nil {
t.Fatalf("expected state but got nil")
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSClientAuthWrongClientCert tests that the probe fails with a bad
@@ -233,7 +326,7 @@ func TestProbeHTTPSClientAuthWrongClientCert(t *testing.T) {
defer os.Remove(keyFile)
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
CertFile: certFile,
KeyFile: keyFile,
@@ -241,7 +334,12 @@ func TestProbeHTTPSClientAuthWrongClientCert(t *testing.T) {
},
}
if _, err := ProbeHTTPS(server.URL, module, 5*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err == nil {
t.Fatalf("expected error but err is nil")
}
}
@@ -266,13 +364,18 @@ func TestProbeHTTPSExpired(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeHTTPS(server.URL, module, 5*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err == nil {
t.Fatalf("expected error but err is nil")
}
}
@@ -280,7 +383,7 @@ func TestProbeHTTPSExpired(t *testing.T) {
// TestProbeHTTPSExpiredInsecure tests that the probe succeeds with an expired server cert
// when skipping cert verification
func TestProbeHTTPSExpiredInsecure(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -298,24 +401,33 @@ func TestProbeHTTPSExpiredInsecure(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: true,
},
}
state, err := ProbeHTTPS(server.URL, module, 5*time.Second)
if err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
if state == nil {
t.Fatalf("expected state but got nil")
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSProxy tests the proxy_url field in the configuration
func TestProbeHTTPSProxy(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -342,7 +454,7 @@ func TestProbeHTTPSProxy(t *testing.T) {
}
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
@@ -352,19 +464,155 @@ func TestProbeHTTPSProxy(t *testing.T) {
},
}
_, err = ProbeHTTPS(server.URL, module, 5*time.Second)
if err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err == nil {
t.Fatalf("expected error but err was nil")
}
// Test with the proxy url, this shouldn't return an error
module.HTTPS.ProxyURL = config.URL{URL: proxyURL}
state, err := ProbeHTTPS(server.URL, module, 5*time.Second)
if err != nil {
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
if state == nil {
t.Fatalf("expected state but got nil")
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSOCSP tests an HTTPS probe with OCSP stapling
func TestProbeHTTPSOCSP(t *testing.T) {
server, certPEM, keyPEM, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
key, err := newKey(keyPEM)
if err != nil {
t.Fatal(err)
}
resp, err := ocsp.CreateResponse(cert, cert, ocsp.Response{SerialNumber: big.NewInt(64), Status: 1}, key)
if err != nil {
t.Fatalf(err.Error())
}
server.TLS.Certificates[0].OCSPStaple = resp
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics(resp, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeHTTPSVerifiedChains tests the verified chain metrics returned by an
// HTTPS probe
func TestProbeHTTPSVerifiedChains(t *testing.T) {
rootPrivateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
t.Fatalf(err.Error())
}
rootCertExpiry := time.Now().AddDate(0, 0, 5)
rootCertTmpl := test.GenerateCertificateTemplate(rootCertExpiry)
rootCertTmpl.IsCA = true
rootCertTmpl.SerialNumber = big.NewInt(1)
rootCert, rootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(rootCertTmpl, rootPrivateKey)
olderRootCertExpiry := time.Now().AddDate(0, 0, 3)
olderRootCertTmpl := test.GenerateCertificateTemplate(olderRootCertExpiry)
olderRootCertTmpl.IsCA = true
olderRootCertTmpl.SerialNumber = big.NewInt(2)
olderRootCert, olderRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(olderRootCertTmpl, rootPrivateKey)
oldestRootCertExpiry := time.Now().AddDate(0, 0, 1)
oldestRootCertTmpl := test.GenerateCertificateTemplate(oldestRootCertExpiry)
oldestRootCertTmpl.IsCA = true
oldestRootCertTmpl.SerialNumber = big.NewInt(3)
oldestRootCert, oldestRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(oldestRootCertTmpl, rootPrivateKey)
serverCertExpiry := time.Now().AddDate(0, 0, 4)
serverCertTmpl := test.GenerateCertificateTemplate(serverCertExpiry)
serverCertTmpl.SerialNumber = big.NewInt(4)
serverCert, serverCertPem, serverKey := test.GenerateSignedCertificate(serverCertTmpl, olderRootCert, rootPrivateKey)
verifiedChains := [][]*x509.Certificate{
[]*x509.Certificate{
serverCert,
rootCert,
},
[]*x509.Certificate{
serverCert,
olderRootCert,
},
[]*x509.Certificate{
serverCert,
oldestRootCert,
},
}
caCertPem := bytes.Join([][]byte{oldestRootCertPem, olderRootCertPem, rootCertPem}, []byte(""))
server, caFile, teardown, err := test.SetupHTTPSServerWithCertAndKey(
caCertPem,
serverCertPem,
serverKey,
)
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeHTTPS(ctx, newTestLogger(), server.URL, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkCertificateMetrics(serverCert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkVerifiedChainMetrics(verifiedChains, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}

92
prober/kubeconfig.go Normal file

@@ -0,0 +1,92 @@
package prober
import (
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
"gopkg.in/yaml.v3"
)
type KubeConfigCluster struct {
Name string
Cluster KubeConfigClusterCert
}
type KubeConfigClusterCert struct {
CertificateAuthority string `yaml:"certificate-authority"`
CertificateAuthorityData string `yaml:"certificate-authority-data"`
}
type KubeConfigUser struct {
Name string
User KubeConfigUserCert
}
type KubeConfigUserCert struct {
ClientCertificate string `yaml:"client-certificate"`
ClientCertificateData string `yaml:"client-certificate-data"`
}
type KubeConfig struct {
Path string
Clusters []KubeConfigCluster
Users []KubeConfigUser
}
// ProbeKubeconfig collects certificate metrics from kubeconfig files
func ProbeKubeconfig(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
if _, err := os.Stat(target); err != nil {
return fmt.Errorf("kubeconfig not found: %s", target)
}
k, err := ParseKubeConfig(target)
if err != nil {
return err
}
err = collectKubeconfigMetrics(logger, *k, registry)
if err != nil {
return err
}
return nil
}
func ParseKubeConfig(file string) (*KubeConfig, error) {
k := &KubeConfig{}
data, err := ioutil.ReadFile(file)
if err != nil {
return nil, err
}
err = yaml.Unmarshal([]byte(data), k)
if err != nil {
return nil, err
}
k.Path = file
clusters := []KubeConfigCluster{}
users := []KubeConfigUser{}
for _, c := range k.Clusters {
// Path is relative to kubeconfig path
if c.Cluster.CertificateAuthority != "" && !filepath.IsAbs(c.Cluster.CertificateAuthority) {
newPath := filepath.Join(filepath.Dir(k.Path), c.Cluster.CertificateAuthority)
c.Cluster.CertificateAuthority = newPath
}
clusters = append(clusters, c)
}
for _, u := range k.Users {
// Path is relative to kubeconfig path
if u.User.ClientCertificate != "" && !filepath.IsAbs(u.User.ClientCertificate) {
newPath := filepath.Join(filepath.Dir(k.Path), u.User.ClientCertificate)
u.User.ClientCertificate = newPath
}
users = append(users, u)
}
k.Clusters = clusters
k.Users = users
return k, nil
}
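For orientation only (not part of this change): a minimal sketch of calling the new kubeconfig prober directly, assuming the v2 module layout used in the imports above. The kubeconfig path and the gathering at the end are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/ribbybibby/ssl_exporter/v2/config"
	"github.com/ribbybibby/ssl_exporter/v2/prober"
)

func main() {
	registry := prometheus.NewRegistry()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// The target is a path to a kubeconfig file; ParseKubeConfig resolves any
	// relative certificate paths against the kubeconfig's own directory.
	if err := prober.ProbeKubeconfig(ctx, log.NewNopLogger(), "/tmp/kubeconfig", config.Module{}, registry); err != nil {
		fmt.Println("probe error:", err)
		return
	}
	// The ssl_kubeconfig_cert_not_after/_not_before series are now in the registry.
	mfs, _ := registry.Gather()
	fmt.Println("metric families:", len(mfs))
}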

195
prober/kubeconfig_test.go Normal file

@@ -0,0 +1,195 @@
package prober
import (
"context"
"crypto/x509"
"encoding/base64"
"encoding/pem"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"time"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
"github.com/prometheus/client_golang/prometheus"
"gopkg.in/yaml.v3"
)
// TestProbeKubeconfig tests the kubeconfig prober against a kubeconfig file
func TestProbeKubeconfig(t *testing.T) {
cert, kubeconfig, err := createTestKubeconfig("", "kubeconfig")
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(kubeconfig)
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeKubeconfig(ctx, newTestLogger(), kubeconfig, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkKubeconfigMetrics(cert, kubeconfig, registry, t)
}
func TestParseKubeConfigRelative(t *testing.T) {
tmpFile, err := ioutil.TempFile("", "kubeconfig")
if err != nil {
t.Fatalf("Unable to create Tempfile: %s", err.Error())
}
defer os.Remove(tmpFile.Name())
file := []byte(`
clusters:
- cluster:
certificate-authority: certs/example/ca.pem
server: https://master.example.com
name: example
users:
- user:
client-certificate: test/ca.pem
name: example`)
if _, err := tmpFile.Write(file); err != nil {
t.Fatalf("Unable to write Tempfile: %s", err.Error())
}
expectedClusterPath := filepath.Join(filepath.Dir(tmpFile.Name()), "certs/example/ca.pem")
expectedUserPath := filepath.Join(filepath.Dir(tmpFile.Name()), "test/ca.pem")
k, err := ParseKubeConfig(tmpFile.Name())
if err != nil {
t.Fatalf("Error parsing kubeconfig: %s", err.Error())
}
if len(k.Clusters) != 1 {
t.Fatalf("Unexpected length for Clusters, got %d", len(k.Clusters))
}
if k.Clusters[0].Cluster.CertificateAuthority != expectedClusterPath {
t.Errorf("Unexpected CertificateAuthority value\nExpected: %s\nGot: %s", expectedClusterPath, k.Clusters[0].Cluster.CertificateAuthority)
}
if len(k.Users) != 1 {
t.Fatalf("Unexpected length for Users, got %d", len(k.Users))
}
if k.Users[0].User.ClientCertificate != expectedUserPath {
t.Errorf("Unexpected ClientCertificate value\nExpected: %s\nGot: %s", expectedUserPath, k.Users[0].User.ClientCertificate)
}
}
// createTestKubeconfig generates a certificate, embeds it in a kubeconfig and writes it to a temporary file
func createTestKubeconfig(dir, filename string) (*x509.Certificate, string, error) {
certPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 1))
clusterCert := KubeConfigClusterCert{CertificateAuthorityData: base64.StdEncoding.EncodeToString([]byte(certPEM))}
clusters := []KubeConfigCluster{KubeConfigCluster{Name: "kubernetes", Cluster: clusterCert}}
userCert := KubeConfigUserCert{ClientCertificateData: base64.StdEncoding.EncodeToString([]byte(certPEM))}
users := []KubeConfigUser{KubeConfigUser{Name: "kubernetes-admin", User: userCert}}
k := KubeConfig{
Clusters: clusters,
Users: users,
}
block, _ := pem.Decode([]byte(certPEM))
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return nil, "", err
}
tmpFile, err := ioutil.TempFile(dir, filename)
if err != nil {
return nil, tmpFile.Name(), err
}
k.Path = tmpFile.Name()
d, err := yaml.Marshal(&k)
if err != nil {
return nil, tmpFile.Name(), err
}
if _, err := tmpFile.Write(d); err != nil {
return nil, tmpFile.Name(), err
}
if err := tmpFile.Close(); err != nil {
return nil, tmpFile.Name(), err
}
return cert, tmpFile.Name(), nil
}
// Check metrics
func checkKubeconfigMetrics(cert *x509.Certificate, kubeconfig string, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_kubeconfig_cert_not_after",
LabelValues: map[string]string{
"kubeconfig": kubeconfig,
"name": "kubernetes",
"type": "cluster",
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_kubeconfig_cert_not_before",
LabelValues: map[string]string{
"kubeconfig": kubeconfig,
"name": "kubernetes",
"type": "cluster",
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotBefore.Unix()),
},
&registryResult{
Name: "ssl_kubeconfig_cert_not_after",
LabelValues: map[string]string{
"kubeconfig": kubeconfig,
"name": "kubernetes-admin",
"type": "user",
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_kubeconfig_cert_not_before",
LabelValues: map[string]string{
"kubeconfig": kubeconfig,
"name": "kubernetes-admin",
"type": "user",
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotBefore.Unix()),
},
}
checkRegistryResults(expectedResults, mfs, t)
}

86
prober/kubernetes.go Normal file

@@ -0,0 +1,86 @@
package prober
import (
"context"
"fmt"
"strings"
"github.com/bmatcuk/doublestar/v2"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
// Support oidc in kube config files
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
)
var (
// ErrKubeBadTarget is returned when the target doesn't match the
// expected form for the kubernetes prober
ErrKubeBadTarget = fmt.Errorf("Target secret must be provided in the form: <namespace>/<name>")
)
// ProbeKubernetes collects certificate metrics from kubernetes.io/tls Secrets
func ProbeKubernetes(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
client, err := newKubeClient(module.Kubernetes.Kubeconfig)
if err != nil {
return err
}
return probeKubernetes(ctx, target, module, registry, client)
}
func probeKubernetes(ctx context.Context, target string, module config.Module, registry *prometheus.Registry, client kubernetes.Interface) error {
parts := strings.Split(target, "/")
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
return ErrKubeBadTarget
}
ns := parts[0]
name := parts[1]
var tlsSecrets []v1.Secret
secrets, err := client.CoreV1().Secrets("").List(ctx, metav1.ListOptions{FieldSelector: "type=kubernetes.io/tls"})
if err != nil {
return err
}
for _, secret := range secrets.Items {
nMatch, err := doublestar.Match(ns, secret.Namespace)
if err != nil {
return err
}
sMatch, err := doublestar.Match(name, secret.Name)
if err != nil {
return err
}
if nMatch && sMatch {
tlsSecrets = append(tlsSecrets, secret)
}
}
return collectKubernetesSecretMetrics(tlsSecrets, registry)
}
// newKubeClient returns a Kubernetes client (clientset) from the supplied
// kubeconfig path, the KUBECONFIG environment variable, the default config file
// location ($HOME/.kube/config) or from the in-cluster service account environment.
func newKubeClient(path string) (*kubernetes.Clientset, error) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
if path != "" {
loadingRules.ExplicitPath = path
}
kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
loadingRules,
&clientcmd.ConfigOverrides{},
)
config, err := kubeConfig.ClientConfig()
if err != nil {
return nil, err
}
return kubernetes.NewForConfig(config)
}
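As an aside (illustrative, not from the diff): the kubernetes prober matches the <namespace>/<name> target per segment with doublestar globs, so the selection behaviour can be sketched like this.

package main

import (
	"fmt"

	"github.com/bmatcuk/doublestar/v2"
)

func main() {
	// A target of "ba*/*" selects bar/foo and baz/fooz but not default/foo,
	// mirroring TestKubernetesProbeGlob below; namespace and name match separately.
	secrets := [][2]string{{"bar", "foo"}, {"baz", "fooz"}, {"default", "foo"}}
	for _, s := range secrets {
		nsMatch, _ := doublestar.Match("ba*", s[0])
		nameMatch, _ := doublestar.Match("*", s[1])
		fmt.Printf("%s/%s matched: %v\n", s[0], s[1], nsMatch && nameMatch)
	}
}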

190
prober/kubernetes_test.go Normal file

@@ -0,0 +1,190 @@
package prober
import (
"context"
"crypto/x509"
"encoding/pem"
"strings"
"testing"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
)
func TestKubernetesProbe(t *testing.T) {
certPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 1))
block, _ := pem.Decode([]byte(certPEM))
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
caPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 10))
block, _ = pem.Decode([]byte(caPEM))
caCert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
fakeKubeClient := fake.NewSimpleClientset(&v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "foo",
Namespace: "bar",
},
Data: map[string][]byte{
"tls.crt": certPEM,
"ca.crt": caPEM,
},
Type: "kubernetes.io/tls",
})
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := probeKubernetes(ctx, "bar/foo", module, registry, fakeKubeClient); err != nil {
t.Fatalf("error: %s", err)
}
checkKubernetesMetrics(cert, "bar", "foo", "tls.crt", registry, t)
checkKubernetesMetrics(caCert, "bar", "foo", "ca.crt", registry, t)
}
func TestKubernetesProbeGlob(t *testing.T) {
certPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 1))
block, _ := pem.Decode([]byte(certPEM))
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
caPEM, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 10))
block, _ = pem.Decode([]byte(caPEM))
caCert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
certPEM2, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 1))
block, _ = pem.Decode([]byte(certPEM2))
cert2, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
caPEM2, _ := test.GenerateTestCertificate(time.Now().Add(time.Hour * 10))
block, _ = pem.Decode([]byte(caPEM2))
caCert2, err := x509.ParseCertificate(block.Bytes)
if err != nil {
t.Fatal(err)
}
fakeKubeClient := fake.NewSimpleClientset(&v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "foo",
Namespace: "bar",
},
Data: map[string][]byte{
"tls.crt": certPEM,
"ca.crt": caPEM,
},
Type: "kubernetes.io/tls",
},
&v1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "fooz",
Namespace: "baz",
},
Data: map[string][]byte{
"tls.crt": certPEM2,
"ca.crt": caPEM2,
},
Type: "kubernetes.io/tls",
})
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := probeKubernetes(ctx, "ba*/*", module, registry, fakeKubeClient); err != nil {
t.Fatalf("error: %s", err)
}
checkKubernetesMetrics(cert, "bar", "foo", "tls.crt", registry, t)
checkKubernetesMetrics(caCert, "bar", "foo", "ca.crt", registry, t)
checkKubernetesMetrics(cert2, "baz", "fooz", "tls.crt", registry, t)
checkKubernetesMetrics(caCert2, "baz", "fooz", "ca.crt", registry, t)
}
func TestKubernetesProbeBadTarget(t *testing.T) {
fakeKubeClient := fake.NewSimpleClientset()
module := config.Module{}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := probeKubernetes(ctx, "bar/foo/bar", module, registry, fakeKubeClient); err != ErrKubeBadTarget {
t.Fatalf("Expected error: %v, but got %v", ErrKubeBadTarget, err)
}
}
func checkKubernetesMetrics(cert *x509.Certificate, namespace, name, key string, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_kubernetes_cert_not_after",
LabelValues: map[string]string{
"namespace": namespace,
"secret": name,
"key": key,
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_kubernetes_cert_not_before",
LabelValues: map[string]string{
"namespace": namespace,
"secret": name,
"key": key,
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
},
Value: float64(cert.NotBefore.Unix()),
},
}
checkRegistryResults(expectedResults, mfs, t)
}

482
prober/metrics.go Normal file

@@ -0,0 +1,482 @@
package prober
import (
"crypto/tls"
"crypto/x509"
"encoding/base64"
"fmt"
"io/ioutil"
"sort"
"strconv"
"strings"
"time"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/crypto/ocsp"
v1 "k8s.io/api/core/v1"
)
const (
namespace = "ssl"
)
func collectConnectionStateMetrics(state tls.ConnectionState, registry *prometheus.Registry) error {
if err := collectTLSVersionMetrics(state.Version, registry); err != nil {
return err
}
if err := collectCertificateMetrics(state.PeerCertificates, registry); err != nil {
return err
}
if err := collectVerifiedChainMetrics(state.VerifiedChains, registry); err != nil {
return err
}
return collectOCSPMetrics(state.OCSPResponse, registry)
}
func collectTLSVersionMetrics(version uint16, registry *prometheus.Registry) error {
var (
tlsVersion = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "tls_version_info"),
Help: "The TLS version used",
},
[]string{"version"},
)
)
registry.MustRegister(tlsVersion)
var v string
switch version {
case tls.VersionTLS10:
v = "TLS 1.0"
case tls.VersionTLS11:
v = "TLS 1.1"
case tls.VersionTLS12:
v = "TLS 1.2"
case tls.VersionTLS13:
v = "TLS 1.3"
default:
v = "unknown"
}
tlsVersion.WithLabelValues(v).Set(1)
return nil
}
func collectCertificateMetrics(certs []*x509.Certificate, registry *prometheus.Registry) error {
var (
notAfter = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "cert_not_after"),
Help: "NotAfter expressed as a Unix Epoch Time",
},
[]string{"serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
notBefore = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "cert_not_before"),
Help: "NotBefore expressed as a Unix Epoch Time",
},
[]string{"serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
)
registry.MustRegister(notAfter, notBefore)
certs = uniq(certs)
if len(certs) == 0 {
return fmt.Errorf("No certificates found")
}
for _, cert := range certs {
labels := labelValues(cert)
if !cert.NotAfter.IsZero() {
notAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
notBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
return nil
}
func collectVerifiedChainMetrics(verifiedChains [][]*x509.Certificate, registry *prometheus.Registry) error {
var (
verifiedNotAfter = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "verified_cert_not_after"),
Help: "NotAfter expressed as a Unix Epoch Time",
},
[]string{"chain_no", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
verifiedNotBefore = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "verified_cert_not_before"),
Help: "NotBefore expressed as a Unix Epoch Time",
},
[]string{"chain_no", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
)
registry.MustRegister(verifiedNotAfter, verifiedNotBefore)
sort.Slice(verifiedChains, func(i, j int) bool {
iExpiry := time.Time{}
for _, cert := range verifiedChains[i] {
if (iExpiry.IsZero() || cert.NotAfter.Before(iExpiry)) && !cert.NotAfter.IsZero() {
iExpiry = cert.NotAfter
}
}
jExpiry := time.Time{}
for _, cert := range verifiedChains[j] {
if (jExpiry.IsZero() || cert.NotAfter.Before(jExpiry)) && !cert.NotAfter.IsZero() {
jExpiry = cert.NotAfter
}
}
return iExpiry.After(jExpiry)
})
for i, chain := range verifiedChains {
chain = uniq(chain)
for _, cert := range chain {
chainNo := strconv.Itoa(i)
labels := append([]string{chainNo}, labelValues(cert)...)
if !cert.NotAfter.IsZero() {
verifiedNotAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
verifiedNotBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
}
return nil
}
func collectOCSPMetrics(ocspResponse []byte, registry *prometheus.Registry) error {
var (
ocspStapled = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_stapled"),
Help: "If the connection state contains a stapled OCSP response",
},
)
ocspStatus = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_status"),
Help: "The status in the OCSP response 0=Good 1=Revoked 2=Unknown",
},
)
ocspProducedAt = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_produced_at"),
Help: "The producedAt value in the OCSP response, expressed as a Unix Epoch Time",
},
)
ocspThisUpdate = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_this_update"),
Help: "The thisUpdate value in the OCSP response, expressed as a Unix Epoch Time",
},
)
ocspNextUpdate = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_next_update"),
Help: "The nextUpdate value in the OCSP response, expressed as a Unix Epoch Time",
},
)
ocspRevokedAt = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "ocsp_response_revoked_at"),
Help: "The revocationTime value in the OCSP response, expressed as a Unix Epoch Time",
},
)
)
registry.MustRegister(
ocspStapled,
ocspStatus,
ocspProducedAt,
ocspThisUpdate,
ocspNextUpdate,
ocspRevokedAt,
)
if len(ocspResponse) == 0 {
return nil
}
resp, err := ocsp.ParseResponse(ocspResponse, nil)
if err != nil {
return err
}
ocspStapled.Set(1)
ocspStatus.Set(float64(resp.Status))
ocspProducedAt.Set(float64(resp.ProducedAt.Unix()))
ocspThisUpdate.Set(float64(resp.ThisUpdate.Unix()))
ocspNextUpdate.Set(float64(resp.NextUpdate.Unix()))
ocspRevokedAt.Set(float64(resp.RevokedAt.Unix()))
return nil
}
func collectFileMetrics(logger log.Logger, files []string, registry *prometheus.Registry) error {
var (
totalCerts []*x509.Certificate
fileNotAfter = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "file_cert_not_after"),
Help: "NotAfter expressed as a Unix Epoch Time for a certificate found in a file",
},
[]string{"file", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
fileNotBefore = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "file_cert_not_before"),
Help: "NotBefore expressed as a Unix Epoch Time for a certificate found in a file",
},
[]string{"file", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
)
registry.MustRegister(fileNotAfter, fileNotBefore)
for _, f := range files {
data, err := ioutil.ReadFile(f)
if err != nil {
level.Debug(logger).Log("msg", fmt.Sprintf("Error reading file %s: %s", f, err))
continue
}
certs, err := decodeCertificates(data)
if err != nil {
return err
}
totalCerts = append(totalCerts, certs...)
for _, cert := range certs {
labels := append([]string{f}, labelValues(cert)...)
if !cert.NotAfter.IsZero() {
fileNotAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
fileNotBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
}
if len(totalCerts) == 0 {
return fmt.Errorf("No certificates found")
}
return nil
}
func collectKubernetesSecretMetrics(secrets []v1.Secret, registry *prometheus.Registry) error {
var (
totalCerts []*x509.Certificate
kubernetesNotAfter = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "kubernetes_cert_not_after"),
Help: "NotAfter expressed as a Unix Epoch Time for a certificate found in a kubernetes secret",
},
[]string{"namespace", "secret", "key", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
kubernetesNotBefore = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "kubernetes_cert_not_before"),
Help: "NotBefore expressed as a Unix Epoch Time for a certificate found in a kubernetes secret",
},
[]string{"namespace", "secret", "key", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
)
registry.MustRegister(kubernetesNotAfter, kubernetesNotBefore)
for _, secret := range secrets {
for _, key := range []string{"tls.crt", "ca.crt"} {
data := secret.Data[key]
if len(data) == 0 {
continue
}
certs, err := decodeCertificates(data)
if err != nil {
return err
}
totalCerts = append(totalCerts, certs...)
for _, cert := range certs {
labels := append([]string{secret.Namespace, secret.Name, key}, labelValues(cert)...)
if !cert.NotAfter.IsZero() {
kubernetesNotAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
kubernetesNotBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
}
}
if len(totalCerts) == 0 {
return fmt.Errorf("No certificates found")
}
return nil
}
func collectKubeconfigMetrics(logger log.Logger, kubeconfig KubeConfig, registry *prometheus.Registry) error {
var (
totalCerts []*x509.Certificate
kubeconfigNotAfter = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "kubeconfig", "cert_not_after"),
Help: "NotAfter expressed as a Unix Epoch Time for a certificate found in a kubeconfig",
},
[]string{"kubeconfig", "name", "type", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
kubeconfigNotBefore = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "kubeconfig", "cert_not_before"),
Help: "NotBefore expressed as a Unix Epoch Time for a certificate found in a kubeconfig",
},
[]string{"kubeconfig", "name", "type", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"},
)
)
registry.MustRegister(kubeconfigNotAfter, kubeconfigNotBefore)
for _, c := range kubeconfig.Clusters {
var data []byte
var err error
if c.Cluster.CertificateAuthorityData != "" {
data, err = base64.StdEncoding.DecodeString(c.Cluster.CertificateAuthorityData)
if err != nil {
return err
}
} else if c.Cluster.CertificateAuthority != "" {
data, err = ioutil.ReadFile(c.Cluster.CertificateAuthority)
if err != nil {
level.Debug(logger).Log("msg", fmt.Sprintf("Error reading file %s: %s", c.Cluster.CertificateAuthority, err))
return err
}
}
if data == nil {
continue
}
certs, err := decodeCertificates(data)
if err != nil {
return err
}
totalCerts = append(totalCerts, certs...)
for _, cert := range certs {
labels := append([]string{kubeconfig.Path, c.Name, "cluster"}, labelValues(cert)...)
if !cert.NotAfter.IsZero() {
kubeconfigNotAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
kubeconfigNotBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
}
for _, u := range kubeconfig.Users {
var data []byte
var err error
if u.User.ClientCertificateData != "" {
data, err = base64.StdEncoding.DecodeString(u.User.ClientCertificateData)
if err != nil {
return err
}
} else if u.User.ClientCertificate != "" {
data, err = ioutil.ReadFile(u.User.ClientCertificate)
if err != nil {
level.Debug(logger).Log("msg", fmt.Sprintf("Error reading file %s: %s", u.User.ClientCertificate, err))
return err
}
}
if data == nil {
continue
}
certs, err := decodeCertificates(data)
if err != nil {
return err
}
totalCerts = append(totalCerts, certs...)
for _, cert := range certs {
labels := append([]string{kubeconfig.Path, u.Name, "user"}, labelValues(cert)...)
if !cert.NotAfter.IsZero() {
kubeconfigNotAfter.WithLabelValues(labels...).Set(float64(cert.NotAfter.Unix()))
}
if !cert.NotBefore.IsZero() {
kubeconfigNotBefore.WithLabelValues(labels...).Set(float64(cert.NotBefore.Unix()))
}
}
}
if len(totalCerts) == 0 {
return fmt.Errorf("No certificates found")
}
return nil
}
func labelValues(cert *x509.Certificate) []string {
return []string{
cert.SerialNumber.String(),
cert.Issuer.CommonName,
cert.Subject.CommonName,
dnsNames(cert),
ipAddresses(cert),
emailAddresses(cert),
organizationalUnits(cert),
}
}
func dnsNames(cert *x509.Certificate) string {
if len(cert.DNSNames) > 0 {
return "," + strings.Join(cert.DNSNames, ",") + ","
}
return ""
}
func emailAddresses(cert *x509.Certificate) string {
if len(cert.EmailAddresses) > 0 {
return "," + strings.Join(cert.EmailAddresses, ",") + ","
}
return ""
}
func ipAddresses(cert *x509.Certificate) string {
if len(cert.IPAddresses) > 0 {
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
return ips
}
return ""
}
func organizationalUnits(cert *x509.Certificate) string {
if len(cert.Subject.OrganizationalUnit) > 0 {
return "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ","
}
return ""
}
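A small sketch of the label convention used by the collectors above (not code from the change): multi-valued certificate fields are joined with commas and wrapped in leading/trailing commas, so a single value can be matched in PromQL with a pattern such as ',example.com,'.

package main

import (
	"fmt"
	"strings"
)

// wrapJoin mirrors the dnsNames/emailAddresses/organizationalUnits helpers above.
func wrapJoin(values []string) string {
	if len(values) == 0 {
		return ""
	}
	return "," + strings.Join(values, ",") + ","
}

func main() {
	fmt.Println(wrapJoin([]string{"example.com", "example.org"})) // ",example.com,example.org,"
	fmt.Println(wrapJoin(nil) == "")                              // true: empty fields produce empty labels
}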

223
prober/metrics_test.go Normal file

@@ -0,0 +1,223 @@
package prober
import (
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"fmt"
"reflect"
"strconv"
"strings"
"testing"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"golang.org/x/crypto/ocsp"
)
type registryResult struct {
Name string
LabelValues map[string]string
Value float64
}
func (rr *registryResult) String() string {
var labels []string
for k, v := range rr.LabelValues {
labels = append(labels, k+"=\""+v+"\"")
}
m := rr.Name
if len(labels) > 0 {
m = fmt.Sprintf("%s{%s}", m, strings.Join(labels, ","))
}
return fmt.Sprintf("%s %f", m, rr.Value)
}
func checkRegistryResults(expectedResults []*registryResult, mfs []*dto.MetricFamily, t *testing.T) {
for _, expRes := range expectedResults {
checkRegistryResult(expRes, mfs, t)
}
}
func checkRegistryResult(expRes *registryResult, mfs []*dto.MetricFamily, t *testing.T) {
var results []*registryResult
for _, mf := range mfs {
for _, metric := range mf.Metric {
result := &registryResult{
Name: mf.GetName(),
Value: metric.GetGauge().GetValue(),
}
if len(metric.GetLabel()) > 0 {
labelValues := make(map[string]string)
for _, l := range metric.GetLabel() {
labelValues[l.GetName()] = l.GetValue()
}
result.LabelValues = labelValues
}
results = append(results, result)
}
}
var ok bool
var resStr string
for _, res := range results {
resStr = resStr + "\n" + res.String()
if reflect.DeepEqual(res, expRes) {
ok = true
}
}
if !ok {
t.Fatalf("Expected %s, got: %s", expRes.String(), resStr)
}
}
func checkCertificateMetrics(cert *x509.Certificate, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
expectedLabels := map[string]string{
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_cert_not_after",
LabelValues: expectedLabels,
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_cert_not_before",
LabelValues: expectedLabels,
Value: float64(cert.NotBefore.Unix()),
},
}
checkRegistryResults(expectedResults, mfs, t)
}
func checkVerifiedChainMetrics(verifiedChains [][]*x509.Certificate, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
for i, chain := range verifiedChains {
for _, cert := range chain {
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
expectedLabels := map[string]string{
"chain_no": strconv.Itoa(i),
"serial_no": cert.SerialNumber.String(),
"issuer_cn": cert.Issuer.CommonName,
"cn": cert.Subject.CommonName,
"dnsnames": "," + strings.Join(cert.DNSNames, ",") + ",",
"ips": ips,
"emails": "," + strings.Join(cert.EmailAddresses, ",") + ",",
"ou": "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ",",
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_verified_cert_not_after",
LabelValues: expectedLabels,
Value: float64(cert.NotAfter.Unix()),
},
&registryResult{
Name: "ssl_verified_cert_not_before",
LabelValues: expectedLabels,
Value: float64(cert.NotBefore.Unix()),
},
}
checkRegistryResults(expectedResults, mfs, t)
}
}
}
func checkOCSPMetrics(resp []byte, registry *prometheus.Registry, t *testing.T) {
var (
stapled float64
status float64
nextUpdate float64
thisUpdate float64
revokedAt float64
producedAt float64
)
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
if len(resp) > 0 {
parsedResponse, err := ocsp.ParseResponse(resp, nil)
if err != nil {
t.Fatal(err)
}
stapled = 1
status = float64(parsedResponse.Status)
nextUpdate = float64(parsedResponse.NextUpdate.Unix())
thisUpdate = float64(parsedResponse.ThisUpdate.Unix())
revokedAt = float64(parsedResponse.RevokedAt.Unix())
producedAt = float64(parsedResponse.ProducedAt.Unix())
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_ocsp_response_stapled",
Value: stapled,
},
&registryResult{
Name: "ssl_ocsp_response_status",
Value: status,
},
&registryResult{
Name: "ssl_ocsp_response_next_update",
Value: nextUpdate,
},
&registryResult{
Name: "ssl_ocsp_response_this_update",
Value: thisUpdate,
},
&registryResult{
Name: "ssl_ocsp_response_revoked_at",
Value: revokedAt,
},
&registryResult{
Name: "ssl_ocsp_response_produced_at",
Value: producedAt,
},
}
checkRegistryResults(expectedResults, mfs, t)
}
func checkTLSVersionMetrics(version string, registry *prometheus.Registry, t *testing.T) {
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
expectedResults := []*registryResult{
&registryResult{
Name: "ssl_tls_version_info",
LabelValues: map[string]string{
"version": version,
},
Value: 1,
},
}
checkRegistryResults(expectedResults, mfs, t)
}
func newCertificate(certPEM []byte) (*x509.Certificate, error) {
block, _ := pem.Decode(certPEM)
return x509.ParseCertificate(block.Bytes)
}
func newKey(keyPEM []byte) (*rsa.PrivateKey, error) {
block, _ := pem.Decode([]byte(keyPEM))
return x509.ParsePKCS1PrivateKey(block.Bytes)
}


@@ -1,20 +1,25 @@
package prober
import (
"crypto/tls"
"time"
"context"
"github.com/ribbybibby/ssl_exporter/config"
"github.com/go-kit/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
var (
// Probers maps a friendly name to a corresponding probe function
Probers = map[string]ProbeFn{
"https": ProbeHTTPS,
"http": ProbeHTTPS,
"tcp": ProbeTCP,
"https": ProbeHTTPS,
"http": ProbeHTTPS,
"tcp": ProbeTCP,
"file": ProbeFile,
"http_file": ProbeHTTPFile,
"kubernetes": ProbeKubernetes,
"kubeconfig": ProbeKubeconfig,
}
)
// ProbeFn probes
type ProbeFn func(target string, module config.Module, timeout time.Duration) (*tls.ConnectionState, error)
type ProbeFn func(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error
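A rough sketch of dispatching through the new signature (the Probers map and ProbeFn type are from the diff above; the wiring around them is assumed, not taken from the exporter):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/go-kit/log"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/ribbybibby/ssl_exporter/v2/config"
	"github.com/ribbybibby/ssl_exporter/v2/prober"
)

func main() {
	probeFn, ok := prober.Probers["https"]
	if !ok {
		fmt.Println("unknown prober")
		os.Exit(1)
	}
	registry := prometheus.NewRegistry()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	logger := log.NewLogfmtLogger(os.Stderr)
	// Metrics land in the registry rather than being returned as a
	// *tls.ConnectionState; the error only signals probe failure.
	if err := probeFn(ctx, logger, "example.com:443", config.Module{}, registry); err != nil {
		fmt.Println("probe failed:", err)
	}
}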


@@ -2,67 +2,57 @@ package prober
import (
"bufio"
"bytes"
"context"
"crypto/tls"
"fmt"
"io"
"net"
"regexp"
"time"
"github.com/ribbybibby/ssl_exporter/config"
pconfig "github.com/prometheus/common/config"
"github.com/prometheus/common/log"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
// ProbeTCP performs a tcp probe
func ProbeTCP(target string, module config.Module, timeout time.Duration) (*tls.ConnectionState, error) {
dialer := &net.Dialer{Timeout: timeout}
conn, err := dialer.Dial("tcp", target)
func ProbeTCP(ctx context.Context, logger log.Logger, target string, module config.Module, registry *prometheus.Registry) error {
tlsConfig, err := newTLSConfig(target, registry, &module.TLSConfig)
if err != nil {
return nil, err
return err
}
dialer := &net.Dialer{}
conn, err := dialer.DialContext(ctx, "tcp", target)
if err != nil {
return err
}
defer conn.Close()
if err := conn.SetDeadline(time.Now().Add(timeout)); err != nil {
return nil, fmt.Errorf("Error setting deadline")
deadline, _ := ctx.Deadline()
if err := conn.SetDeadline(deadline); err != nil {
return fmt.Errorf("Error setting deadline")
}
if module.TCP.StartTLS != "" {
err = startTLS(conn, module.TCP.StartTLS)
err = startTLS(logger, conn, module.TCP.StartTLS)
if err != nil {
return nil, err
return err
}
}
tlsConfig, err := pconfig.NewTLSConfig(&module.TLSConfig)
if err != nil {
return nil, err
}
if tlsConfig.ServerName == "" {
targetAddress, _, err := net.SplitHostPort(target)
if err != nil {
return nil, err
}
tlsConfig.ServerName = targetAddress
}
tlsConn := tls.Client(conn, tlsConfig)
defer tlsConn.Close()
if err := tlsConn.Handshake(); err != nil {
return nil, err
}
state := tlsConn.ConnectionState()
return &state, nil
return tlsConn.Handshake()
}
type queryResponse struct {
expect string
send string
expect string
send string
sendBytes []byte
expectBytes []byte
}
var (
@@ -81,7 +71,7 @@ var (
send: "EHLO prober",
},
queryResponse{
expect: "^250-STARTTLS",
expect: "^250(-| )STARTTLS",
},
queryResponse{
send: "STARTTLS",
@@ -121,11 +111,30 @@ var (
expect: "OK",
},
},
"postgres": []queryResponse{
queryResponse{
sendBytes: []byte{0x00, 0x00, 0x00, 0x08, 0x04, 0xd2, 0x16, 0x2f},
},
queryResponse{
expectBytes: []byte{0x53},
},
},
"pop3": []queryResponse{
queryResponse{
expect: "OK",
},
queryResponse{
send: "STLS",
},
queryResponse{
expect: "OK",
},
},
}
)
// startTLS will send the STARTTLS command for the given protocol
func startTLS(conn net.Conn, proto string) error {
func startTLS(logger log.Logger, conn net.Conn, proto string) error {
var err error
qr, ok := startTLSqueryResponses[proto]
@@ -138,13 +147,13 @@ func startTLS(conn net.Conn, proto string) error {
if qr.expect != "" {
var match bool
for scanner.Scan() {
log.Debugf("read line: %s", scanner.Text())
level.Debug(logger).Log("msg", fmt.Sprintf("read line: %s", scanner.Text()))
match, err = regexp.Match(qr.expect, scanner.Bytes())
if err != nil {
return err
}
if match {
log.Debugf("regex: %s matched: %s", qr.expect, scanner.Text())
level.Debug(logger).Log("msg", fmt.Sprintf("regex: %s matched: %s", qr.expect, scanner.Text()))
break
}
}
@@ -155,12 +164,31 @@ func startTLS(conn net.Conn, proto string) error {
return fmt.Errorf("regex: %s didn't match: %s", qr.expect, scanner.Text())
}
}
if len(qr.expectBytes) > 0 {
buffer := make([]byte, len(qr.expectBytes))
_, err = io.ReadFull(conn, buffer)
if err != nil {
return err
}
level.Debug(logger).Log("msg", fmt.Sprintf("read bytes: %x", buffer))
if bytes.Compare(buffer, qr.expectBytes) != 0 {
return fmt.Errorf("read bytes %x didn't match with expected bytes %x", buffer, qr.expectBytes)
} else {
level.Debug(logger).Log("msg", fmt.Sprintf("expected bytes %x matched with read bytes %x", qr.expectBytes, buffer))
}
}
if qr.send != "" {
log.Debugf("sending line: %s", qr.send)
level.Debug(logger).Log("msg", fmt.Sprintf("sending line: %s", qr.send))
if _, err := fmt.Fprintf(conn, "%s\r\n", qr.send); err != nil {
return err
}
}
if len(qr.sendBytes) > 0 {
level.Debug(logger).Log("msg", fmt.Sprintf("sending bytes: %x", qr.sendBytes))
if _, err = conn.Write(qr.sendBytes); err != nil {
return err
}
}
}
return nil
}
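For context (an illustrative sketch, not code from the change): the new postgres entry in startTLSqueryResponses encodes PostgreSQL's SSLRequest handshake, in which the client sends an 8-byte message (length 8 followed by the request code 80877103) and the server answers with a single 'S' (0x53) when it is willing to negotiate TLS.

package main

import (
	"fmt"
	"io"
	"net"
)

// postgresSSLRequest performs the exchange described by the postgres
// queryResponse entries above; the dial target and error wrapping are assumptions.
func postgresSSLRequest(conn net.Conn) error {
	// 4-byte length (8) followed by the SSLRequest code 80877103 (0x04d2162f).
	sslRequest := []byte{0x00, 0x00, 0x00, 0x08, 0x04, 0xd2, 0x16, 0x2f}
	if _, err := conn.Write(sslRequest); err != nil {
		return err
	}
	reply := make([]byte, 1)
	if _, err := io.ReadFull(conn, reply); err != nil {
		return err
	}
	if reply[0] != 'S' { // 0x53: server accepts TLS
		return fmt.Errorf("server refused TLS: %q", reply[0])
	}
	return nil // the caller would now run the TLS handshake on conn
}

func main() {
	conn, err := net.Dial("tcp", "localhost:5432") // illustrative address
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println(postgresSSLRequest(conn))
}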


@@ -1,22 +1,29 @@
package prober
import (
"bytes"
"context"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"math/big"
"net"
"testing"
"time"
"github.com/ribbybibby/ssl_exporter/config"
"github.com/ribbybibby/ssl_exporter/test"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
"golang.org/x/crypto/ocsp"
pconfig "github.com/prometheus/common/config"
"github.com/prometheus/client_golang/prometheus"
)
// TestProbeTCP tests the typical case
func TestProbeTCP(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
t.Fatal(err)
}
defer teardown()
@@ -24,15 +31,28 @@ func TestProbeTCP(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeTCP(server.Listener.Addr().String(), module, 10*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPInvalidName tests hitting the server on an address which isn't
@@ -48,7 +68,7 @@ func TestProbeTCPInvalidName(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
@@ -56,7 +76,12 @@ func TestProbeTCPInvalidName(t *testing.T) {
_, listenPort, _ := net.SplitHostPort(server.Listener.Addr().String())
if _, err := ProbeTCP("localhost:"+listenPort, module, 10*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), "localhost:"+listenPort, module, registry); err == nil {
t.Fatalf("expected error but err was nil")
}
}
@@ -64,7 +89,7 @@ func TestProbeTCPInvalidName(t *testing.T) {
// TestProbeTCPServerName tests that the probe is successful when the
// servername is provided in the TLS config
func TestProbeTCPServerName(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -76,16 +101,29 @@ func TestProbeTCPServerName(t *testing.T) {
host, listenPort, _ := net.SplitHostPort(server.Listener.Addr().String())
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
ServerName: host,
},
}
if _, err := ProbeTCP("localhost:"+listenPort, module, 10*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), "localhost:"+listenPort, module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPExpired tests that the probe fails with an expired server cert
@@ -108,13 +146,18 @@ func TestProbeTCPExpired(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeTCP(server.Listener.Addr().String(), module, 5*time.Second); err == nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err == nil {
t.Fatalf("expected error but err is nil")
}
}
@@ -122,7 +165,7 @@ func TestProbeTCPExpired(t *testing.T) {
// TestProbeTCPExpiredInsecure tests that the probe succeeds with an expired server cert
// when skipping cert verification
func TestProbeTCPExpiredInsecure(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -140,24 +183,33 @@ func TestProbeTCPExpiredInsecure(t *testing.T) {
defer server.Close()
module := config.Module{
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: true,
},
}
state, err := ProbeTCP(server.Listener.Addr().String(), module, 5*time.Second)
if err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
if state == nil {
t.Fatalf("expected state but got nil")
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSSMTP tests STARTTLS against a mock SMTP server
func TestProbeTCPStartTLSSMTP(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -170,20 +222,73 @@ func TestProbeTCPStartTLSSMTP(t *testing.T) {
TCP: config.TCPProbe{
StartTLS: "smtp",
},
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeTCP(server.Listener.Addr().String(), module, 10*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSSMTPWithDashInResponse tests STARTTLS against a mock SMTP server
// that advertises STARTTLS in the dash form (250-STARTTLS), which is valid when it is not the last option in the EHLO reply
func TestProbeTCPStartTLSSMTPWithDashInResponse(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartSMTPWithDashInResponse()
defer server.Close()
module := config.Module{
TCP: config.TCPProbe{
StartTLS: "smtp",
},
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSFTP tests STARTTLS against a mock FTP server
func TestProbeTCPStartTLSFTP(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -196,20 +301,33 @@ func TestProbeTCPStartTLSFTP(t *testing.T) {
TCP: config.TCPProbe{
StartTLS: "ftp",
},
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeTCP(server.Listener.Addr().String(), module, 10*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSIMAP tests STARTTLS against a mock IMAP server
func TestProbeTCPStartTLSIMAP(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
@@ -222,13 +340,262 @@ func TestProbeTCPStartTLSIMAP(t *testing.T) {
TCP: config.TCPProbe{
StartTLS: "imap",
},
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
if _, err := ProbeTCP(server.Listener.Addr().String(), module, 10*time.Second); err != nil {
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSPOP3 tests STARTTLS against a mock POP3 server
func TestProbeTCPStartTLSPOP3(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartPOP3()
defer server.Close()
module := config.Module{
TCP: config.TCPProbe{
StartTLS: "pop3",
},
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPStartTLSPostgreSQL tests STARTTLS against a mock PostgreSQL server
func TestProbeTCPStartTLSPostgreSQL(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartPostgreSQL()
defer server.Close()
module := config.Module{
TCP: config.TCPProbe{
StartTLS: "postgres",
},
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPTimeout tests that the TCP probe respects the timeout in the
// context
func TestProbeTCPTimeout(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatal(err)
}
defer teardown()
server.StartTLSWait(time.Second * 3)
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err == nil {
t.Fatalf("Expected error but returned error was nil")
}
}
// TestProbeTCPOCSP tests a TCP probe with OCSP stapling
func TestProbeTCPOCSP(t *testing.T) {
server, certPEM, keyPEM, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatal(err)
}
defer teardown()
cert, err := newCertificate(certPEM)
if err != nil {
t.Fatal(err)
}
key, err := newKey(keyPEM)
if err != nil {
t.Fatal(err)
}
resp, err := ocsp.CreateResponse(cert, cert, ocsp.Response{SerialNumber: big.NewInt(64), Status: 1}, key)
if err != nil {
t.Fatalf(err.Error())
}
server.TLS.Certificates[0].OCSPStaple = resp
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: false,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkCertificateMetrics(cert, registry, t)
checkOCSPMetrics(resp, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}
// TestProbeTCPVerifiedChains tests the verified chain metrics returned by a tcp
// probe
func TestProbeTCPVerifiedChains(t *testing.T) {
rootPrivateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
t.Fatalf(err.Error())
}
rootCertExpiry := time.Now().AddDate(0, 0, 5)
rootCertTmpl := test.GenerateCertificateTemplate(rootCertExpiry)
rootCertTmpl.IsCA = true
rootCertTmpl.SerialNumber = big.NewInt(1)
rootCert, rootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(rootCertTmpl, rootPrivateKey)
olderRootCertExpiry := time.Now().AddDate(0, 0, 3)
olderRootCertTmpl := test.GenerateCertificateTemplate(olderRootCertExpiry)
olderRootCertTmpl.IsCA = true
olderRootCertTmpl.SerialNumber = big.NewInt(2)
olderRootCert, olderRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(olderRootCertTmpl, rootPrivateKey)
oldestRootCertExpiry := time.Now().AddDate(0, 0, 1)
oldestRootCertTmpl := test.GenerateCertificateTemplate(oldestRootCertExpiry)
oldestRootCertTmpl.IsCA = true
oldestRootCertTmpl.SerialNumber = big.NewInt(3)
oldestRootCert, oldestRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(oldestRootCertTmpl, rootPrivateKey)
serverCertExpiry := time.Now().AddDate(0, 0, 4)
serverCertTmpl := test.GenerateCertificateTemplate(serverCertExpiry)
serverCertTmpl.SerialNumber = big.NewInt(4)
serverCert, serverCertPem, serverKey := test.GenerateSignedCertificate(serverCertTmpl, olderRootCert, rootPrivateKey)
verifiedChains := [][]*x509.Certificate{
[]*x509.Certificate{
serverCert,
rootCert,
},
[]*x509.Certificate{
serverCert,
olderRootCert,
},
[]*x509.Certificate{
serverCert,
oldestRootCert,
},
}
caCertPem := bytes.Join([][]byte{oldestRootCertPem, olderRootCertPem, rootCertPem}, []byte(""))
server, caFile, teardown, err := test.SetupTCPServerWithCertAndKey(
caCertPem,
serverCertPem,
serverKey,
)
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
module := config.Module{
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
}
registry := prometheus.NewRegistry()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ProbeTCP(ctx, newTestLogger(), server.Listener.Addr().String(), module, registry); err != nil {
t.Fatalf("error: %s", err)
}
checkCertificateMetrics(serverCert, registry, t)
checkOCSPMetrics([]byte{}, registry, t)
checkVerifiedChainMetrics(verifiedChains, registry, t)
checkTLSVersionMetrics("TLS 1.3", registry, t)
}

prober/test.go

@@ -0,0 +1,11 @@
package prober
import (
"os"
"github.com/go-kit/log"
)
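// newTestLogger returns a logfmt logger that writes synchronously to stdout,
// for use in tests.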
func newTestLogger() log.Logger {
return log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
}

prober/tls.go

@@ -0,0 +1,72 @@
package prober
import (
"crypto/tls"
"crypto/x509"
"encoding/pem"
"net"
"github.com/prometheus/client_golang/prometheus"
"github.com/ribbybibby/ssl_exporter/v2/config"
)
// newTLSConfig sets up TLS config and instruments it with a function that
// collects metrics for the verified chain
func newTLSConfig(target string, registry *prometheus.Registry, cfg *config.TLSConfig) (*tls.Config, error) {
tlsConfig, err := config.NewTLSConfig(cfg)
if err != nil {
return nil, err
}
if tlsConfig.ServerName == "" && target != "" {
targetAddress, _, err := net.SplitHostPort(target)
if err != nil {
return nil, err
}
tlsConfig.ServerName = targetAddress
}
tlsConfig.VerifyConnection = func(state tls.ConnectionState) error {
return collectConnectionStateMetrics(state, registry)
}
return tlsConfig, nil
}
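// uniq returns the given certificates with duplicates removed, keeping the
// first occurrence of each.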
func uniq(certs []*x509.Certificate) []*x509.Certificate {
r := []*x509.Certificate{}
for _, c := range certs {
if !contains(r, c) {
r = append(r, c)
}
}
return r
}
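// contains reports whether cert is already present in certs, comparing the
// serial number and the issuer common name.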
func contains(certs []*x509.Certificate, cert *x509.Certificate) bool {
for _, c := range certs {
if (c.SerialNumber.String() == cert.SerialNumber.String()) && (c.Issuer.CommonName == cert.Issuer.CommonName) {
return true
}
}
return false
}
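// decodeCertificates parses every CERTIFICATE and TRUSTED CERTIFICATE block
// in the PEM-encoded data, skipping duplicates.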
func decodeCertificates(data []byte) ([]*x509.Certificate, error) {
var certs []*x509.Certificate
for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
if block.Type == "CERTIFICATE" || block.Type == "TRUSTED CERTIFICATE" {
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return certs, err
}
if !contains(certs, cert) {
certs = append(certs, cert)
}
}
}
return certs, nil
}
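For orientation, a minimal sketch (not part of the diff) of how decodeCertificates might be used from a hypothetical helper in the same prober package; the helper name and the bundle path are illustrative assumptions.
package prober

import (
	"fmt"
	"os"
)

// printBundle is a hypothetical helper: it reads a PEM bundle that may mix
// CERTIFICATE and TRUSTED CERTIFICATE blocks and prints each unique subject
// with its expiry.
func printBundle(path string) error {
	data, err := os.ReadFile(path) // illustrative path, e.g. "ca-bundle.pem"
	if err != nil {
		return err
	}
	certs, err := decodeCertificates(data)
	if err != nil {
		return err
	}
	for _, cert := range certs {
		// Duplicates have already been dropped by decodeCertificates.
		fmt.Printf("%s expires at %s\n", cert.Subject.CommonName, cert.NotAfter)
	}
	return nil
}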

prober/tls_test.go

@@ -0,0 +1,86 @@
package prober
import (
"testing"
)
func TestDecodeCertificates(t *testing.T) {
data := []byte(`
-----BEGIN CERTIFICATE-----
MIIFszCCA5ugAwIBAgIUdpzowWDU/AI7QBhLSRB9DPqpWvcwDQYJKoZIhvcNAQEL
BQAwaTELMAkGA1UEBhMCWFgxEjAQBgNVBAgMCVN0YXRlTmFtZTERMA8GA1UEBwwI
Q2l0eU5hbWUxFDASBgNVBAoMC0NvbXBhbnlOYW1lMQwwCgYDVQQLDANGb28xDzAN
BgNVBAMMBkZvb2JhcjAeFw0yNDA0MjgxNjIzMzNaFw0zNDA0MjYxNjIzMzNaMGkx
CzAJBgNVBAYTAlhYMRIwEAYDVQQIDAlTdGF0ZU5hbWUxETAPBgNVBAcMCENpdHlO
YW1lMRQwEgYDVQQKDAtDb21wYW55TmFtZTEMMAoGA1UECwwDRm9vMQ8wDQYDVQQD
DAZGb29iYXIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCmxZBW/Ays
VBxt7jJQeTrPdLQpUdxnGVXOa4M54FHWA2DwwmZ2DZth5Eioq1wC9WCrByWkd8px
mvU0XDUT5ESceEKcwmDhKgYHAcJ4qXEyk1jYuBy6zw95cmV2BiTf0Xoo/8JxiQR4
YBd7Tbm52eV5Hw5oaqgVEdaOCVMnO8S57AuQGfeC5AO18ty0cZ7mKsXQje2celMH
QipujXrhRwdLBgu6FISTuS0XtqiuJnp+vllMjiTMF/uCmJUTUCpmayWSpM1FRpKe
lM8B9666uuEmVJ4V5gzy4Oe0i5Apfh4qXX6pj+Y/oeOfXRc3NfYqcIJ2hm82ghJL
5Kt6FZ9fkQ6pyk7nXMAOaf8WX+JkhqaWvlzTme8Z+6DBXLPfyyoi3VR2kG3lRida
qfyh2EPvEXip810s4f8EOHi+sehmjWLbsgn2HAHQmE7zYTEMp2Xsh0M7lGfUiMfP
P1RU/RNDkZK59yXsG9RmoDMD03qI8M4990TL7BW0FQZvBg9Rfr3KgFpWngsn84mN
l6/MKyc5X1e+RGyYJPbi0S1j+fhBeNzFqQPFZZUnWIPTxVdqF91SgY9EwE6MHuD6
t7G+eANWWQkolvwACMT+0GqiRMVHVWeIqF2SQ7Dx2tiNXjE9rSX3lMpMNt4PvWcM
PGuao3FMqXts0j2L8FSljcVU/hiLOGC2mwIDAQABo1MwUTAdBgNVHQ4EFgQU+231
i0rY0CwQ0JwT2EUX4cq81LIwHwYDVR0jBBgwFoAU+231i0rY0CwQ0JwT2EUX4cq8
1LIwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAVRqUVxovcZ9C
BYUUhvyT8BMAC6PS8U59BoDhnbrp1JDiPDbMnWlRp+450kZlEkSdN9PRLGWoGi84
QqT7RIry/jA4z62B6N+IDDDcuWWstDTTC/gRA46eeHGRnIz7GJeseqrxb9z8KoO3
/c8Pdb4lu/cfG5/m2X8hkYzjLZrIQ8qz53nUAag4uael19uy3HPQ3EEr936y+vQW
rH8itgQ+5kOTr9d3ihcTSwTJGTyWq7j3T0xPu9tCzoPragpEDjoUCqCjU7uv6Tdq
UhaQ8vneimSf9VdbDfEuEU9S2Xbdg6e+BsdV9uMeWkhtD1WgtSLHcliYktwnh/kh
r4vQG86xn+LDxTrdTssjt+UmJnXzRiGOEZCDz7kBchYLiWSQn9tqcn28KK2/YobD
AghlR24hUL1uslJOjLFc4BwLmlv/4Iy+b/3iQWY+yoban/OLZo6gCZx/4Nvg5R9e
tcuxgn5Jhm2T/e8REucXQfiqKgxQVibFmqXeLH3Yj7ussA5t/ozo4VuypXUnnil0
hB+PQsViaHFId1LyBKFwMoA6JzlNle6cbWVtXJswffkjk0UUr26SLHMV87lrk2kn
IQ7ROZcT8K7gzzTxuiC2g6npmG1fDfgmgJeS7IqgiFcRTKa0hj6bvc/WoHoCpLNs
v5KXxrGlqibeNjyc3Wng6S0Kpg6YNqU=
-----END CERTIFICATE-----
-----BEGIN TRUSTED CERTIFICATE-----
MIIF7zCCA9egAwIBAgIUZXsnB0gxoPSFaUjnQeLwTW+sqbYwDQYJKoZIhvcNAQEL
BQAwgYYxCzAJBgNVBAYTAlhYMRIwEAYDVQQIDAlTdGF0ZU5hbWUxETAPBgNVBAcM
CENpdHlOYW1lMRQwEgYDVQQKDAtDb21wYW55TmFtZTEbMBkGA1UECwwSQ29tcGFu
eVNlY3Rpb25OYW1lMR0wGwYDVQQDDBRDb21tb25OYW1lT3JIb3N0bmFtZTAeFw0y
NDA0MjgxNjE3MzZaFw0zNDA0MjYxNjE3MzZaMIGGMQswCQYDVQQGEwJYWDESMBAG
A1UECAwJU3RhdGVOYW1lMREwDwYDVQQHDAhDaXR5TmFtZTEUMBIGA1UECgwLQ29t
cGFueU5hbWUxGzAZBgNVBAsMEkNvbXBhbnlTZWN0aW9uTmFtZTEdMBsGA1UEAwwU
Q29tbW9uTmFtZU9ySG9zdG5hbWUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
AoICAQDJ+g1z+nZFj0DFc+wV1AH4giAqDFhwx25yziaQJnHkoXpllN0bRpr50Fhx
QsMuMqjMCM0eFPo7kDWSrAUusdh3j1jdF7KidZRZdyUgioQvbmNtqtwdU/38Gq0s
yTspOTZTWziIMFsLYxvFb0Ia8dwQFvyDB3pM1/eXFJMr2Cz/nF4/g4IUFEtPY2pe
4sQDMhszvROMfI5LB4SmGiim/zUS7+ZDEQigf10/CrbLntjXHiFy+V0hcrDQXkuY
gXU9HsiwRpCx+GRqcI6SWPIZ2rYa7goAVBENv5KWFXdWaOFCpi55XldYYzDeaXWa
On9EOjyylLbJ2pv/yhpt/VYDC6Lsqki1XcU/zeETcBRlqzCwZhMqFErSPRBGxaTN
lEGKNS2qQnE7I7bmUksRcCxA0WIn4V2xxxGHkLFnEsCipFGgMm7LaGSukYZGQ2MF
0ELqaaRMaTBfvPhFRtolkFChm7JQlCrk9j47b07OTMj4KUoBkhYPNoK26sHjvKue
Iks7BTGy8NdZk3jVDNEgGwngj+bNyJcjDvq+cHhWhS/N8c8Tg2tFb3ZOnwbyFPLf
1DdsP/SW6TybMc6NtpWvmSCFMvxqRpr+7LscpBGThepAUaoPkATnT6UdYGqr+l5N
z1+cZEuHBF7Jk5xThXCCFwuBC/KREvRuU2LyAhm9RV1ci8XyYwIDAQABo1MwUTAd
BgNVHQ4EFgQUa1FF7jssOpGY0UY1qbFQH83rxOQwHwYDVR0jBBgwFoAUa1FF7jss
OpGY0UY1qbFQH83rxOQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC
AgEAhRPDPmVetjE5w/igfnGiNUKhQiqB3E6yT1wMwF5LbNIC+h8E8WqF0mzF3XQC
t9+Eib+mFptHGfRGdA8JmJ79xBgch6JLagw8Ot6hA1oEqWy/PAzcFLemccvhonhZ
+N7umRO27agIYVJ71mxlS7rD1SItBKZ+g4/Lt3sr/iDM0kIhWjWdQiYgJMNhH2Mt
XAd53tmPSom1Vsca1ZorB0HJNIw8RB7QYwceAi0xk0Tui7Z6pHg8p99XHCK/2vD+
+u5a1nHjtrAvLdNti8o82EUScK4j91CODCm5bv2KDMIUUv+1O83UMicK8DLSIi0P
1acuIdbRrY5JiYewCC+NFCfODZT6y00QL7aF9cbT+ZySiYtpBUiLFlIZCw4dcJ6r
yYJca1gv+kqZugdgLD7nXWIzXgfreQ+Q/BiVouXroXaWqU/9dqzGw4LlPSpyqI9o
nFdu970j4BvVc809i1aNo3odGz9yRx7wOOMyiyHoRpYenlf3hFbqJHS2FUgJ/Ajl
TC1joWDPdwdqQY56WOpk2LMK1KJuMYQEEHb3Ib0ZaxXAT6Nd12VashG5gYhw9vGn
aZ3nCnZFahgB2tB5DtkK+p0PQVpZOv6/0CKSsfQkBFLTjeVIRnlL/UlK3VlwHH9h
0LfsLFabedOS9R1XIaQ7aKXPK4tKmNb5H/ONzH2zxU5Oqoo=
-----END TRUSTED CERTIFICATE-----
`)
certs, err := decodeCertificates(data)
if err != nil {
t.Errorf("unexpected error: %s", err)
}
if len(certs) != 2 {
t.Errorf("unexpected number of certs: %d", len(certs))
}
}


@@ -1,225 +1,38 @@
package main
import (
"crypto/tls"
"crypto/x509"
"context"
"fmt"
"net/http"
"sort"
"os"
"strconv"
"strings"
"time"
"github.com/alecthomas/kingpin/v2"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
versioncollector "github.com/prometheus/client_golang/prometheus/collectors/version"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/prometheus/common/log"
"github.com/prometheus/common/promlog"
promlogflag "github.com/prometheus/common/promlog/flag"
"github.com/prometheus/common/version"
"github.com/ribbybibby/ssl_exporter/config"
"github.com/ribbybibby/ssl_exporter/prober"
"gopkg.in/alecthomas/kingpin.v2"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/prober"
)
const (
namespace = "ssl"
)
var (
tlsConnectSuccess = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "tls_connect_success"),
"If the TLS connection was a success",
nil, nil,
)
tlsVersion = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "tls_version_info"),
"The TLS version used",
[]string{"version"}, nil,
)
proberType = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "prober"),
"The prober used by the exporter to connect to the target",
[]string{"prober"}, nil,
)
notBefore = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "cert_not_before"),
"NotBefore expressed as a Unix Epoch Time",
[]string{"serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"}, nil,
)
notAfter = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "cert_not_after"),
"NotAfter expressed as a Unix Epoch Time",
[]string{"serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"}, nil,
)
verifiedNotBefore = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "verified_cert_not_before"),
"NotBefore expressed as a Unix Epoch Time for a certificate in the list of verified chains",
[]string{"chain_no", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"}, nil,
)
verifiedNotAfter = prometheus.NewDesc(
prometheus.BuildFQName(namespace, "", "verified_cert_not_after"),
"NotAfter expressed as a Unix Epoch Time for a certificate in the list of verified chains",
[]string{"chain_no", "serial_no", "issuer_cn", "cn", "dnsnames", "ips", "emails", "ou"}, nil,
)
)
// Exporter is the exporter type...
type Exporter struct {
target string
prober prober.ProbeFn
timeout time.Duration
module config.Module
}
// Describe metrics
func (e *Exporter) Describe(ch chan<- *prometheus.Desc) {
ch <- tlsConnectSuccess
ch <- tlsVersion
ch <- proberType
ch <- notAfter
ch <- notBefore
ch <- verifiedNotAfter
ch <- verifiedNotBefore
}
// Collect metrics
func (e *Exporter) Collect(ch chan<- prometheus.Metric) {
ch <- prometheus.MustNewConstMetric(
proberType, prometheus.GaugeValue, 1, e.module.Prober,
)
state, err := e.prober(e.target, e.module, e.timeout)
if err != nil {
log.Errorf("error=%s target=%s prober=%s timeout=%s", err, e.target, e.module.Prober, e.timeout)
ch <- prometheus.MustNewConstMetric(
tlsConnectSuccess, prometheus.GaugeValue, 0,
)
return
}
// Get the TLS version from the connection state and export it as a metric
ch <- prometheus.MustNewConstMetric(
tlsVersion, prometheus.GaugeValue, 1, getTLSVersion(state),
)
// Retrieve certificates from the connection state
peerCertificates := state.PeerCertificates
if len(peerCertificates) < 1 {
log.Errorf("error=No certificates found in connection state. target=%s prober=%s", e.target, e.module.Prober)
ch <- prometheus.MustNewConstMetric(
tlsConnectSuccess, prometheus.GaugeValue, 0,
)
return
}
// If there are peer certificates in the connection state then consider
// the tls connection a success
ch <- prometheus.MustNewConstMetric(
tlsConnectSuccess, prometheus.GaugeValue, 1,
)
// Remove duplicate certificates from the response
peerCertificates = uniq(peerCertificates)
// Loop through peer certificates and create metrics
for _, cert := range peerCertificates {
if !cert.NotAfter.IsZero() {
ch <- prometheus.MustNewConstMetric(
notAfter,
prometheus.GaugeValue,
float64(cert.NotAfter.UnixNano()/1e9),
cert.SerialNumber.String(),
cert.Issuer.CommonName,
cert.Subject.CommonName,
getDNSNames(cert),
getIPAddresses(cert),
getEmailAddresses(cert),
getOrganizationalUnits(cert),
)
}
if !cert.NotBefore.IsZero() {
ch <- prometheus.MustNewConstMetric(
notBefore,
prometheus.GaugeValue,
float64(cert.NotBefore.UnixNano()/1e9),
cert.SerialNumber.String(),
cert.Issuer.CommonName,
cert.Subject.CommonName,
getDNSNames(cert),
getIPAddresses(cert),
getEmailAddresses(cert),
getOrganizationalUnits(cert),
)
}
}
// Retrieve the list of verified chains from the connection state
verifiedChains := state.VerifiedChains
// Sort the verified chains from the chain that is valid for longest to the chain
// that expires the soonest
sort.Slice(verifiedChains, func(i, j int) bool {
iExpiry := time.Time{}
for _, cert := range verifiedChains[i] {
if (iExpiry.IsZero() || cert.NotAfter.Before(iExpiry)) && !cert.NotAfter.IsZero() {
iExpiry = cert.NotAfter
}
}
jExpiry := time.Time{}
for _, cert := range verifiedChains[j] {
if (jExpiry.IsZero() || cert.NotAfter.Before(jExpiry)) && !cert.NotAfter.IsZero() {
jExpiry = cert.NotAfter
}
}
return iExpiry.After(jExpiry)
})
// Loop through the verified chains creating metrics. Label the metrics
// with the index of the chain.
for i, chain := range verifiedChains {
chain = uniq(chain)
for _, cert := range chain {
chainNo := strconv.Itoa(i)
if !cert.NotAfter.IsZero() {
ch <- prometheus.MustNewConstMetric(
verifiedNotAfter,
prometheus.GaugeValue,
float64(cert.NotAfter.UnixNano()/1e9),
chainNo,
cert.SerialNumber.String(),
cert.Issuer.CommonName,
cert.Subject.CommonName,
getDNSNames(cert),
getIPAddresses(cert),
getEmailAddresses(cert),
getOrganizationalUnits(cert),
)
}
if !cert.NotBefore.IsZero() {
ch <- prometheus.MustNewConstMetric(
verifiedNotBefore,
prometheus.GaugeValue,
float64(cert.NotBefore.UnixNano()/1e9),
chainNo,
cert.SerialNumber.String(),
cert.Issuer.CommonName,
cert.Subject.CommonName,
getDNSNames(cert),
getIPAddresses(cert),
getEmailAddresses(cert),
getOrganizationalUnits(cert),
)
}
}
}
}
func probeHandler(w http.ResponseWriter, r *http.Request, conf *config.Config) {
func probeHandler(logger log.Logger, w http.ResponseWriter, r *http.Request, conf *config.Config) {
moduleName := r.URL.Query().Get("module")
if moduleName == "" {
moduleName = "tcp"
moduleName = conf.DefaultModule
if moduleName == "" {
http.Error(w, "Module parameter must be set", http.StatusBadRequest)
return
}
}
module, ok := conf.Modules[moduleName]
if !ok {
@@ -227,126 +40,83 @@ func probeHandler(w http.ResponseWriter, r *http.Request, conf *config.Config) {
return
}
// The following timeout block was taken wholly from the blackbox exporter
// https://github.com/prometheus/blackbox_exporter/blob/master/main.go
var timeoutSeconds float64
if v := r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds"); v != "" {
var err error
timeoutSeconds, err = strconv.ParseFloat(v, 64)
if err != nil {
http.Error(w, fmt.Sprintf("Failed to parse timeout from Prometheus header: %s", err), http.StatusInternalServerError)
timeout := module.Timeout
if timeout == 0 {
// The following timeout block was taken wholly from the blackbox exporter
// https://github.com/prometheus/blackbox_exporter/blob/master/main.go
var timeoutSeconds float64
if v := r.Header.Get("X-Prometheus-Scrape-Timeout-Seconds"); v != "" {
var err error
timeoutSeconds, err = strconv.ParseFloat(v, 64)
if err != nil {
http.Error(w, fmt.Sprintf("Failed to parse timeout from Prometheus header: %s", err), http.StatusInternalServerError)
return
}
} else {
timeoutSeconds = 10
}
if timeoutSeconds == 0 {
timeoutSeconds = 10
}
timeout = time.Duration((timeoutSeconds) * 1e9)
}
ctx, cancel := context.WithTimeout(r.Context(), timeout)
defer cancel()
target := module.Target
if target == "" {
target = r.URL.Query().Get("target")
if target == "" {
http.Error(w, "Target parameter is missing", http.StatusBadRequest)
return
}
} else {
timeoutSeconds = 10
}
if timeoutSeconds == 0 {
timeoutSeconds = 10
}
timeout := time.Duration((timeoutSeconds) * 1e9)
target := r.URL.Query().Get("target")
if target == "" {
http.Error(w, "Target parameter is missing", http.StatusBadRequest)
return
}
prober, ok := prober.Probers[module.Prober]
probeFunc, ok := prober.Probers[module.Prober]
if !ok {
http.Error(w, fmt.Sprintf("Unknown prober %q", module.Prober), http.StatusBadRequest)
return
}
exporter := &Exporter{
target: target,
prober: prober,
timeout: timeout,
module: module,
}
var (
probeSuccess = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "probe_success"),
Help: "If the probe was a success",
},
)
proberType = prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: prometheus.BuildFQName(namespace, "", "prober"),
Help: "The prober used by the exporter to connect to the target",
},
[]string{"prober"},
)
)
registry := prometheus.NewRegistry()
registry.MustRegister(exporter)
registry.MustRegister(probeSuccess, proberType)
proberType.WithLabelValues(module.Prober).Set(1)
logger = log.With(logger, "target", target, "prober", module.Prober, "timeout", timeout)
err := probeFunc(ctx, logger, target, module, registry)
if err != nil {
level.Error(logger).Log("msg", err)
probeSuccess.Set(0)
} else {
probeSuccess.Set(1)
}
// Serve
h := promhttp.HandlerFor(registry, promhttp.HandlerOpts{})
h.ServeHTTP(w, r)
}
func uniq(certs []*x509.Certificate) []*x509.Certificate {
r := []*x509.Certificate{}
for _, c := range certs {
if !contains(r, c) {
r = append(r, c)
}
}
return r
}
func contains(certs []*x509.Certificate, cert *x509.Certificate) bool {
for _, c := range certs {
if (c.SerialNumber.String() == cert.SerialNumber.String()) && (c.Issuer.CommonName == cert.Issuer.CommonName) {
return true
}
}
return false
}
func getTLSVersion(state *tls.ConnectionState) string {
switch state.Version {
case tls.VersionTLS10:
return "TLS 1.0"
case tls.VersionTLS11:
return "TLS 1.1"
case tls.VersionTLS12:
return "TLS 1.2"
case tls.VersionTLS13:
return "TLS 1.3"
default:
return "unknown"
}
}
func getDNSNames(cert *x509.Certificate) string {
if len(cert.DNSNames) > 0 {
return "," + strings.Join(cert.DNSNames, ",") + ","
}
return ""
}
func getEmailAddresses(cert *x509.Certificate) string {
if len(cert.EmailAddresses) > 0 {
return "," + strings.Join(cert.EmailAddresses, ",") + ","
}
return ""
}
func getIPAddresses(cert *x509.Certificate) string {
if len(cert.IPAddresses) > 0 {
ips := ","
for _, ip := range cert.IPAddresses {
ips = ips + ip.String() + ","
}
return ips
}
return ""
}
func getOrganizationalUnits(cert *x509.Certificate) string {
if len(cert.Subject.OrganizationalUnit) > 0 {
return "," + strings.Join(cert.Subject.OrganizationalUnit, ",") + ","
}
return ""
}
func init() {
prometheus.MustRegister(version.NewCollector(namespace + "_exporter"))
prometheus.MustRegister(versioncollector.NewCollector(namespace + "_exporter"))
}
func main() {
@@ -355,28 +125,32 @@ func main() {
metricsPath = kingpin.Flag("web.metrics-path", "Path under which to expose metrics").Default("/metrics").String()
probePath = kingpin.Flag("web.probe-path", "Path under which to expose the probe endpoint").Default("/probe").String()
configFile = kingpin.Flag("config.file", "SSL exporter configuration file").Default("").String()
promlogConfig = promlog.Config{}
err error
)
log.AddFlags(kingpin.CommandLine)
promlogflag.AddFlags(kingpin.CommandLine, &promlogConfig)
kingpin.Version(version.Print(namespace + "_exporter"))
kingpin.HelpFlag.Short('h')
kingpin.Parse()
logger := promlog.New(&promlogConfig)
conf := config.DefaultConfig
if *configFile != "" {
conf, err = config.LoadConfig(*configFile)
if err != nil {
log.Fatalln(err)
level.Error(logger).Log("msg", err)
os.Exit(1)
}
}
log.Infoln("Starting "+namespace+"_exporter", version.Info())
log.Infoln("Build context", version.BuildContext())
level.Info(logger).Log("msg", fmt.Sprintf("Starting %s_exporter %s", namespace, version.Info()))
level.Info(logger).Log("msg", fmt.Sprintf("Build context %s", version.BuildContext()))
http.Handle(*metricsPath, promhttp.Handler())
http.HandleFunc(*probePath, func(w http.ResponseWriter, r *http.Request) {
probeHandler(w, r, conf)
probeHandler(logger, w, r, conf)
})
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte(`<html>
@@ -389,6 +163,7 @@ func main() {
</html>`))
})
log.Infoln("Listening on", *listenAddress)
log.Fatal(http.ListenAndServe(*listenAddress, nil))
level.Info(logger).Log("msg", fmt.Sprintf("Listening on %s", *listenAddress))
level.Error(logger).Log("msg", http.ListenAndServe(*listenAddress, nil))
os.Exit(1)
}
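For context, a hedged sketch (again, not part of the diff) of hitting the probe endpoint wired up in main above once the exporter is running; the listen address and port are assumptions, not values taken from this change.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes the exporter is listening locally; host and port are illustrative.
	url := "http://localhost:9219/probe?module=https&target=example.com:443"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The body is the per-probe registry rendered by promhttp: ssl_probe_success,
	// ssl_prober{prober="https"} and the certificate metrics for the target.
	fmt.Println(string(body))
}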


@@ -1,565 +1,20 @@
package main
import (
"bytes"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"encoding/pem"
"fmt"
"math/big"
"net/http"
"net/http/httptest"
"net/url"
"os"
"strconv"
"strings"
"testing"
"time"
pconfig "github.com/prometheus/common/config"
"github.com/ribbybibby/ssl_exporter/config"
"github.com/ribbybibby/ssl_exporter/test"
"github.com/go-kit/log"
"github.com/ribbybibby/ssl_exporter/v2/config"
"github.com/ribbybibby/ssl_exporter/v2/test"
)
// TestProbeHandlerHTTPS tests a typical HTTPS probe
func TestProbeHandlerHTTPS(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"https": config.Module{
Prober: "https",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.URL, "https", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"https\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"https\"} 1`")
}
// Check notAfter and notBefore metrics for the peer certificate
if err := checkDates(certPEM, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
// Check TLS version metric
ok := strings.Contains(rr.Body.String(), "ssl_tls_version_info{version=\"TLS 1.3\"} 1")
if !ok {
t.Errorf("expected `ssl_tls_version_info{version=\"TLS 1.3\"} 1`")
}
}
// TestProbeHandlerHTTPSVerifiedChains checks that metrics are generated
// correctly for the verified chains
func TestProbeHandlerHTTPSVerifiedChains(t *testing.T) {
rootPrivateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
t.Fatalf(err.Error())
}
rootCertExpiry := time.Now().AddDate(0, 0, 5)
rootCertTmpl := test.GenerateCertificateTemplate(rootCertExpiry)
rootCertTmpl.IsCA = true
rootCertTmpl.SerialNumber = big.NewInt(1)
rootCert, rootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(rootCertTmpl, rootPrivateKey)
olderRootCertExpiry := time.Now().AddDate(0, 0, 3)
olderRootCertTmpl := test.GenerateCertificateTemplate(olderRootCertExpiry)
olderRootCertTmpl.IsCA = true
olderRootCertTmpl.SerialNumber = big.NewInt(2)
olderRootCert, olderRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(olderRootCertTmpl, rootPrivateKey)
oldestRootCertExpiry := time.Now().AddDate(0, 0, 1)
oldestRootCertTmpl := test.GenerateCertificateTemplate(oldestRootCertExpiry)
oldestRootCertTmpl.IsCA = true
oldestRootCertTmpl.SerialNumber = big.NewInt(3)
oldestRootCert, oldestRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(oldestRootCertTmpl, rootPrivateKey)
serverCertExpiry := time.Now().AddDate(0, 0, 4)
serverCertTmpl := test.GenerateCertificateTemplate(serverCertExpiry)
serverCertTmpl.SerialNumber = big.NewInt(4)
serverCert, serverCertPem, serverKey := test.GenerateSignedCertificate(serverCertTmpl, olderRootCert, rootPrivateKey)
verifiedChains := [][]*x509.Certificate{
[]*x509.Certificate{
serverCert,
rootCert,
},
[]*x509.Certificate{
serverCert,
olderRootCert,
},
[]*x509.Certificate{
serverCert,
oldestRootCert,
},
}
caCertPem := bytes.Join([][]byte{oldestRootCertPem, olderRootCertPem, rootCertPem}, []byte(""))
server, caFile, teardown, err := test.SetupHTTPSServerWithCertAndKey(
caCertPem,
serverCertPem,
serverKey,
)
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"https": config.Module{
Prober: "https",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.URL, "https", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check verifiedNotAfter and verifiedNotBefore metrics
if err := checkVerifiedChainDates(verifiedChains, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
func TestProbeHandlerHTTPSNoServer(t *testing.T) {
rr, err := probe("localhost:6666", "https", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0"); !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
// TestProbeHandlerHTTPSEmptyTarget tests a https probe with an empty target
func TestProbeHandlerHTTPSEmptyTarget(t *testing.T) {
rr, err := probe("", "https", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
if rr.Code != 400 {
t.Fatalf("expected 400 status code, got %v", rr.Code)
}
}
// TestProbeHandlerHTTPSSpaces tests an invalid address with spaces in it
func TestProbeHandlerHTTPSSpaces(t *testing.T) {
rr, err := probe("with spaces", "https", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
// TestProbeHandlerHTTPSHTTP tests a https probe against a http server
func TestProbeHandlerHTTPSHTTP(t *testing.T) {
server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Hello world")
}))
server.Start()
defer server.Close()
u, err := url.Parse(server.URL)
if err != nil {
t.Fatalf(err.Error())
}
rr, err := probe(u.Host, "https", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
func TestProbeHandlerHTTPSClientAuthWrongClientCert(t *testing.T) {
server, serverCertPEM, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
// Configure client auth on the server
certPool := x509.NewCertPool()
certPool.AppendCertsFromPEM(serverCertPEM)
server.TLS.ClientAuth = tls.RequireAndVerifyClientCert
server.TLS.RootCAs = certPool
server.TLS.ClientCAs = certPool
server.StartTLS()
defer server.Close()
// Create a different cert/key pair that won't be accepted by the server
certPEM, keyPEM := test.GenerateTestCertificate(time.Now().AddDate(0, 0, 1))
// Create cert file
certFile, err := test.WriteFile("cert.pem", certPEM)
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(certFile)
// Create key file
keyFile, err := test.WriteFile("key.pem", keyPEM)
if err != nil {
t.Fatalf(err.Error())
}
defer os.Remove(keyFile)
conf := &config.Config{
Modules: map[string]config.Module{
"https": config.Module{
Prober: "https",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
CertFile: certFile,
KeyFile: keyFile,
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "https", conf)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
// TestProbeHandlerTCP tests a typical TCP probe
func TestProbeHandlerTCP(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"tcp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "tcp", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
// Check notAfter and notBefore metrics
if err := checkDates(certPEM, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
// TestProbeHandlerTCPVerifiedChains checks that metrics are generated
// correctly for the verified chains
func TestProbeHandlerTCPVerifiedChains(t *testing.T) {
rootPrivateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
t.Fatalf(err.Error())
}
rootCertExpiry := time.Now().AddDate(0, 0, 5)
rootCertTmpl := test.GenerateCertificateTemplate(rootCertExpiry)
rootCertTmpl.IsCA = true
rootCertTmpl.SerialNumber = big.NewInt(1)
rootCert, rootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(rootCertTmpl, rootPrivateKey)
olderRootCertExpiry := time.Now().AddDate(0, 0, 3)
olderRootCertTmpl := test.GenerateCertificateTemplate(olderRootCertExpiry)
olderRootCertTmpl.IsCA = true
olderRootCertTmpl.SerialNumber = big.NewInt(2)
olderRootCert, olderRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(olderRootCertTmpl, rootPrivateKey)
oldestRootCertExpiry := time.Now().AddDate(0, 0, 1)
oldestRootCertTmpl := test.GenerateCertificateTemplate(oldestRootCertExpiry)
oldestRootCertTmpl.IsCA = true
oldestRootCertTmpl.SerialNumber = big.NewInt(3)
oldestRootCert, oldestRootCertPem := test.GenerateSelfSignedCertificateWithPrivateKey(oldestRootCertTmpl, rootPrivateKey)
serverCertExpiry := time.Now().AddDate(0, 0, 4)
serverCertTmpl := test.GenerateCertificateTemplate(serverCertExpiry)
serverCertTmpl.SerialNumber = big.NewInt(4)
serverCert, serverCertPem, serverKey := test.GenerateSignedCertificate(serverCertTmpl, olderRootCert, rootPrivateKey)
verifiedChains := [][]*x509.Certificate{
[]*x509.Certificate{
serverCert,
rootCert,
},
[]*x509.Certificate{
serverCert,
olderRootCert,
},
[]*x509.Certificate{
serverCert,
oldestRootCert,
},
}
caCertPem := bytes.Join([][]byte{oldestRootCertPem, olderRootCertPem, rootCertPem}, []byte(""))
server, caFile, teardown, err := test.SetupTCPServerWithCertAndKey(
caCertPem,
serverCertPem,
serverKey,
)
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"tcp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "tcp", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check verifiedNotAfter and verifiedNotBefore metrics
if err := checkVerifiedChainDates(verifiedChains, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
// TestProbeHandlerTCPNoServer tests against a tcp server that doesn't exist
func TestProbeHandlerTCPNoServer(t *testing.T) {
rr, err := probe("localhost:6666", "tcp", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0"); !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
// TestProbeHandlerTCPEmptyTarget tests a TCP probe with an empty target
func TestProbeHandlerTCPEmptyTarget(t *testing.T) {
rr, err := probe("", "tcp", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
if rr.Code != 400 {
t.Fatalf("expected 400 status code, got %v", rr.Code)
}
}
// TestProbeHandlerTCPSpaces tests an invalid address with spaces in it
func TestProbeHandlerTCPSpaces(t *testing.T) {
rr, err := probe("with spaces", "tcp", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
// TestProbeHandlerTCPHTTP tests a tcp probe against a HTTP server
func TestProbeHandlerTCPHTTP(t *testing.T) {
server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Hello world")
}))
server.Start()
defer server.Close()
u, err := url.Parse(server.URL)
if err != nil {
t.Fatalf(err.Error())
}
rr, err := probe(u.Host, "tcp", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
func TestProbeHandlerTCPExpired(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
// Create a certificate with a notAfter date in the past
certPEM, keyPEM := test.GenerateTestCertificate(time.Now().AddDate(0, 0, -1))
testcert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
t.Fatalf(err.Error())
}
server.TLS.Certificates = []tls.Certificate{testcert}
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"tcp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "tcp", conf)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
}
}
func TestProbeHandlerTCPExpiredInsecure(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
// Create a certificate with a notAfter date in the past
certPEM, keyPEM := test.GenerateTestCertificate(time.Now().AddDate(0, 0, -1))
testcert, err := tls.X509KeyPair(certPEM, keyPEM)
if err != nil {
t.Fatalf(err.Error())
}
server.TLS.Certificates = []tls.Certificate{testcert}
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"tcp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
InsecureSkipVerify: true,
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "tcp", conf)
if err != nil {
t.Fatalf(err.Error())
}
ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1")
if !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
}
}
// TestProbeHandlerDefaultModule tests that the default module uses the tcp prober
func TestProbeHandlerDefaultModule(t *testing.T) {
rr, err := probe("localhost:6666", "", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
}
func TestProbeHandlerProxy(t *testing.T) {
// TestProbeHandler tests that the probe handler sets the ssl_probe_success and
// ssl_prober metrics correctly
func TestProbeHandler(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
@@ -569,23 +24,13 @@ func TestProbeHandlerProxy(t *testing.T) {
server.StartTLS()
defer server.Close()
// Test with a proxy that doesn't exist first
badProxyURL, err := url.Parse("http://localhost:6666")
if err != nil {
t.Fatalf(err.Error())
}
conf := &config.Config{
Modules: map[string]config.Module{
"https": config.Module{
Prober: "https",
TLSConfig: pconfig.TLSConfig{
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
HTTPS: config.HTTPSProbe{
// Check with a bad proxy url initially
ProxyURL: config.URL{URL: badProxyURL},
},
},
},
}
@@ -595,243 +40,161 @@ func TestProbeHandlerProxy(t *testing.T) {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 0"); !ok {
t.Errorf("expected `ssl_tls_connect_success 0`")
// Check probe success
if ok := strings.Contains(rr.Body.String(), "ssl_probe_success 1"); !ok {
t.Errorf("expected `ssl_probe_success 1`")
}
// Test with an actual proxy server
proxyServer, err := test.SetupHTTPProxyServer()
// Check prober metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"https\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"https\"} 1`")
}
}
// TestProbeHandlerFail tests that the probe handler sets the ssl_probe_success and
// ssl_prober metrics correctly when the probe fails
func TestProbeHandlerFail(t *testing.T) {
rr, err := probe("localhost:6666", "", config.DefaultConfig)
if err != nil {
t.Fatalf(err.Error())
}
proxyServer.Start()
defer proxyServer.Close()
// Check probe success
if ok := strings.Contains(rr.Body.String(), "ssl_probe_success 0"); !ok {
t.Errorf("expected `ssl_probe_success 0`")
}
proxyURL, err := url.Parse(proxyServer.URL)
// Check prober metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
}
// TestProbeHandlerDefaultModule tests the default module is used correctly
func TestProbeHandlerDefaultModule(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
DefaultModule: "https",
Modules: map[string]config.Module{
"tcp": config.Module{
Prober: "tcp",
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
},
"https": config.Module{
Prober: "https",
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
},
},
}
rr, err := probe(server.URL, "", conf)
if err != nil {
t.Fatalf(err.Error())
}
conf = &config.Config{
// Should have used the https prober
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"https\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"https\"} 1`")
}
conf.DefaultModule = ""
rr, err = probe(server.URL, "", conf)
if err != nil {
t.Fatalf(err.Error())
}
// It should fail when there's no default module
if rr.Code != http.StatusBadRequest {
t.Errorf("expected code: %d, got: %d", http.StatusBadRequest, rr.Code)
}
}
// TestProbeHandlerDefaultTarget tests that the target set in the module configuration is used correctly
func TestProbeHandlerDefaultTarget(t *testing.T) {
server, _, _, caFile, teardown, err := test.SetupHTTPSServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartTLS()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"https": config.Module{
Prober: "https",
TLSConfig: pconfig.TLSConfig{
Target: server.URL,
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
HTTPS: config.HTTPSProbe{
// Check with a valid URL
ProxyURL: config.URL{URL: proxyURL},
},
},
},
}
rr, err = probe(server.URL, "https", conf)
// Should use the target in the module configuration
rr, err := probe("", "https", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
// Check probe success
if ok := strings.Contains(rr.Body.String(), "ssl_probe_success 1"); !ok {
t.Errorf("expected `ssl_probe_success 1`")
}
}
// TestProbeHandlerTCPStartTLSSMTP tests STARTTLS with a smtp server
func TestProbeHandlerTCPStartTLSSMTP(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
// Check prober metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"https\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"https\"} 1`")
}
// Should ignore a different target in the target parameter
rr, err = probe("localhost:6666", "https", conf)
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartSMTP()
defer server.Close()
// Check probe success
if ok := strings.Contains(rr.Body.String(), "ssl_probe_success 1"); !ok {
t.Errorf("expected `ssl_probe_success 1`")
}
conf := &config.Config{
Modules: map[string]config.Module{
"smtp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
TCP: config.TCPProbe{
StartTLS: "smtp",
},
},
// Check prober metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"https\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"https\"} 1`")
}
conf.Modules["tcp"] = config.Module{
Prober: "tcp",
TLSConfig: config.TLSConfig{
CAFile: caFile,
},
}
rr, err := probe(server.Listener.Addr().String(), "smtp", conf)
rr, err = probe("", "tcp", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
// It should fail when there's no target in the module configuration or
// the query parameters
if rr.Code != http.StatusBadRequest {
t.Errorf("expected code: %d, got: %d", http.StatusBadRequest, rr.Code)
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
// Check notAfter and notBefore metrics
if err := checkDates(certPEM, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
// TestProbeHandlerTCPStartTLSFTP tests STARTTLS with a ftp server
func TestProbeHandlerTCPStartTLSFTP(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartFTP()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"ftp": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
TCP: config.TCPProbe{
StartTLS: "ftp",
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "ftp", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
// Check notAfter and notBefore metrics
if err := checkDates(certPEM, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
// TestProbeHandlerTCPStartTLSIMAP tests STARTTLS with an imap server
func TestProbeHandlerTCPStartTLSIMAP(t *testing.T) {
server, certPEM, _, caFile, teardown, err := test.SetupTCPServer()
if err != nil {
t.Fatalf(err.Error())
}
defer teardown()
server.StartIMAP()
defer server.Close()
conf := &config.Config{
Modules: map[string]config.Module{
"imap": config.Module{
Prober: "tcp",
TLSConfig: pconfig.TLSConfig{
CAFile: caFile,
},
TCP: config.TCPProbe{
StartTLS: "imap",
},
},
},
}
rr, err := probe(server.Listener.Addr().String(), "imap", conf)
if err != nil {
t.Fatalf(err.Error())
}
// Check success metric
if ok := strings.Contains(rr.Body.String(), "ssl_tls_connect_success 1"); !ok {
t.Errorf("expected `ssl_tls_connect_success 1`")
}
// Check probe metric
if ok := strings.Contains(rr.Body.String(), "ssl_prober{prober=\"tcp\"} 1"); !ok {
t.Errorf("expected `ssl_prober{prober=\"tcp\"} 1`")
}
// Check notAfter and notBefore metrics
if err := checkDates(certPEM, rr.Body.String()); err != nil {
t.Errorf(err.Error())
}
}
func checkDates(certPEM []byte, body string) error {
// Check notAfter and notBefore metrics
block, _ := pem.Decode(certPEM)
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return err
}
notAfter := strconv.FormatFloat(float64(cert.NotAfter.UnixNano()/1e9), 'g', -1, 64)
if ok := strings.Contains(body, "ssl_cert_not_after{cn=\"example.ribbybibby.me\",dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\",emails=\",me@ribbybibby.me,example@ribbybibby.me,\",ips=\",127.0.0.1,::1,\",issuer_cn=\"example.ribbybibby.me\",ou=\",ribbybibbys org,\",serial_no=\"100\"} "+notAfter); !ok {
return fmt.Errorf("expected `ssl_cert_not_after{cn=\"example.ribbybibby.me\",dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\",emails=\",me@ribbybibby.me,example@ribbybibby.me,\",ips=\",127.0.0.1,::1,\",issuer_cn=\"example.ribbybibby.me\",ou=\",ribbybibbys org,\",serial_no=\"100\"} " + notAfter + "`")
}
notBefore := strconv.FormatFloat(float64(cert.NotBefore.UnixNano()/1e9), 'g', -1, 64)
if ok := strings.Contains(body, "ssl_cert_not_before{cn=\"example.ribbybibby.me\",dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\",emails=\",me@ribbybibby.me,example@ribbybibby.me,\",ips=\",127.0.0.1,::1,\",issuer_cn=\"example.ribbybibby.me\",ou=\",ribbybibbys org,\",serial_no=\"100\"} "+notBefore); !ok {
return fmt.Errorf("expected `ssl_cert_not_before{cn=\"example.ribbybibby.me\",dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\",emails=\",me@ribbybibby.me,example@ribbybibby.me,\",ips=\",127.0.0.1,::1,\",issuer_cn=\"example.ribbybibby.me\",ou=\",ribbybibbys org,\",serial_no=\"100\"} " + notBefore + "`")
}
return nil
}
func checkVerifiedChainDates(verifiedChains [][]*x509.Certificate, body string) error {
for i, chain := range verifiedChains {
for _, cert := range chain {
notAfter := strconv.FormatFloat(float64(cert.NotAfter.UnixNano()/1e9), 'g', -1, 64)
notAfterMetric := "ssl_verified_cert_not_after{" + strings.Join([]string{
"chain_no=\"" + strconv.Itoa(i) + "\"",
"cn=\"example.ribbybibby.me\"",
"dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\"",
"emails=\",me@ribbybibby.me,example@ribbybibby.me,\"",
"ips=\",127.0.0.1,::1,\"",
"issuer_cn=\"example.ribbybibby.me\"",
"ou=\",ribbybibbys org,\"",
"serial_no=\"" + cert.SerialNumber.String() + "\"",
}, ",") + "} " + notAfter
if ok := strings.Contains(body, notAfterMetric); !ok {
return fmt.Errorf("expected `%s` in: %s", notAfterMetric, body)
}
notBefore := strconv.FormatFloat(float64(cert.NotBefore.UnixNano()/1e9), 'g', -1, 64)
notBeforeMetric := "ssl_verified_cert_not_before{" + strings.Join([]string{
"chain_no=\"" + strconv.Itoa(i) + "\"",
"cn=\"example.ribbybibby.me\"",
"dnsnames=\",example.ribbybibby.me,example-2.ribbybibby.me,example-3.ribbybibby.me,\"",
"emails=\",me@ribbybibby.me,example@ribbybibby.me,\"",
"ips=\",127.0.0.1,::1,\"",
"issuer_cn=\"example.ribbybibby.me\"",
"ou=\",ribbybibbys org,\"",
"serial_no=\"" + cert.SerialNumber.String() + "\"",
}, ",") + "} " + notBefore
if ok := strings.Contains(body, notBeforeMetric); !ok {
return fmt.Errorf("expected `%s` in: %s", notBeforeMetric, body)
}
}
}
return nil
}
func probe(target, module string, conf *config.Config) (*httptest.ResponseRecorder, error) {
@@ -846,10 +209,14 @@ func probe(target, module string, conf *config.Config) (*httptest.ResponseRecord
rr := httptest.NewRecorder()
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
probeHandler(w, r, conf)
probeHandler(newTestLogger(), w, r, conf)
})
handler.ServeHTTP(rr, req)
return rr, nil
}
func newTestLogger() log.Logger {
return log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout))
}


@@ -1,13 +1,16 @@
package test
import (
"bytes"
"crypto/tls"
"fmt"
"io"
"net"
"os"
"time"
"github.com/prometheus/common/log"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
)
// TCPServer allows manipulation of the tls.Config before starting the listener
@@ -15,6 +18,7 @@ type TCPServer struct {
Listener net.Listener
TLS *tls.Config
stopCh chan struct{}
logger log.Logger
}
// StartTLS starts a listener that performs an immediate TLS handshake
@@ -29,7 +33,31 @@ func (t *TCPServer) StartTLS() {
// Immediately upgrade to TLS.
if err := conn.(*tls.Conn).Handshake(); err != nil {
log.Errorln(err)
level.Error(t.logger).Log("msg", err)
} else {
// Send some bytes before terminating the connection.
fmt.Fprintf(conn, "Hello World!\n")
}
t.stopCh <- struct{}{}
}()
}
// StartTLSWait starts a listener and waits for duration 'd' before performing
// the TLS handshake
func (t *TCPServer) StartTLSWait(d time.Duration) {
go func() {
ln := tls.NewListener(t.Listener, t.TLS)
conn, err := ln.Accept()
if err != nil {
panic(fmt.Sprintf("Error accepting on socket: %s", err))
}
defer conn.Close()
time.Sleep(d)
if err := conn.(*tls.Conn).Handshake(); err != nil {
level.Error(t.logger).Log("msg", err)
} else {
// Send some bytes before terminating the connection.
fmt.Fprintf(conn, "Hello World!\n")
@@ -69,7 +97,46 @@ func (t *TCPServer) StartSMTP() {
// Upgrade to TLS.
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
log.Errorln(err)
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
t.stopCh <- struct{}{}
}()
}
// StartSMTPWithDashInResponse starts a listener that negotiates a TLS connection with an smtp
// client using STARTTLS. The server provides the STARTTLS response in the form '250 STARTTLS'
// (with a space, rather than a dash)
func (t *TCPServer) StartSMTPWithDashInResponse() {
go func() {
conn, err := t.Listener.Accept()
if err != nil {
panic(fmt.Sprintf("Error accepting on socket: %s", err))
}
defer conn.Close()
if err := conn.SetDeadline(time.Now().Add(5 * time.Second)); err != nil {
panic("Error setting deadline")
}
fmt.Fprintf(conn, "220 ESMTP StartTLS pseudo-server\n")
if _, e := fmt.Fscanf(conn, "EHLO prober\n"); e != nil {
panic("Error in dialog. No EHLO received.")
}
fmt.Fprintf(conn, "250-pseudo-server.example.net\n")
fmt.Fprintf(conn, "250-DSN\n")
fmt.Fprintf(conn, "250 STARTTLS\n")
if _, e := fmt.Fscanf(conn, "STARTTLS\n"); e != nil {
panic("Error in dialog. No (TLS) STARTTLS received.")
}
fmt.Fprintf(conn, "220 2.0.0 Ready to start TLS\n")
// Upgrade to TLS.
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
@@ -96,7 +163,7 @@ func (t *TCPServer) StartFTP() {
// Upgrade to TLS.
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
log.Errorln(err)
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
@@ -128,7 +195,73 @@ func (t *TCPServer) StartIMAP() {
// Upgrade to TLS.
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
log.Errorln(err)
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
t.stopCh <- struct{}{}
}()
}
// StartPOP3 starts a listener that negotiates a TLS connection with a POP3
// client using STARTTLS
func (t *TCPServer) StartPOP3() {
go func() {
conn, err := t.Listener.Accept()
if err != nil {
panic(fmt.Sprintf("Error accepting on socket: %s", err))
}
defer conn.Close()
fmt.Fprintf(conn, "+OK XPOP3 ready.\n")
if _, e := fmt.Fscanf(conn, "STLS\n"); e != nil {
panic("Error in dialog. No STLS received.")
}
fmt.Fprintf(conn, "+OK Begin TLS negotiation now.\n")
// Upgrade to TLS.
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
t.stopCh <- struct{}{}
}()
}
// StartPostgreSQL starts a listener that negotiates a TLS connection with a PostgreSQL
// client using STARTTLS
func (t *TCPServer) StartPostgreSQL() {
go func() {
conn, err := t.Listener.Accept()
if err != nil {
panic(fmt.Sprintf("Error accepting on socket: %s", err))
}
defer conn.Close()
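// PostgreSQL STARTTLS: the client opens with an 8-byte SSLRequest packet,
// a 4-byte big-endian length (8) followed by the request code 80877103
// (0x04d2162f), which is what the next lines expect to read.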
sslRequestMessage := []byte{0x00, 0x00, 0x00, 0x08, 0x04, 0xd2, 0x16, 0x2f}
buffer := make([]byte, len(sslRequestMessage))
_, err = io.ReadFull(conn, buffer)
if err != nil {
panic("Error reading input from client")
}
if bytes.Compare(buffer, sslRequestMessage) != 0 {
panic(fmt.Sprintf("Error in dialog. No %x received", buffer))
}
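// A single 'S' byte tells the client the server is willing to proceed with
// SSL/TLS, after which both sides switch to the TLS handshake.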
sslRequestResponse := []byte{0x53}
if _, err := conn.Write(sslRequestResponse); err != nil {
panic("Error writing response to client")
}
tlsConn := tls.Server(conn, t.TLS)
if err := tlsConn.Handshake(); err != nil {
level.Error(t.logger).Log("msg", err)
}
defer tlsConn.Close()
@@ -189,6 +322,7 @@ func SetupTCPServerWithCertAndKey(caPEM, certPEM, keyPEM []byte) (*TCPServer, st
Listener: ln,
TLS: tlsConfig,
stopCh: make(chan (struct{})),
logger: log.NewLogfmtLogger(log.NewSyncWriter(os.Stdout)),
}
return server, caFile, teardown, err


@@ -29,6 +29,7 @@ func GenerateTestCertificate(expiry time.Time) ([]byte, []byte) {
return pemCert, pemKey
}
// GenerateSignedCertificate generates a certificate that is signed
func GenerateSignedCertificate(cert, parentCert *x509.Certificate, parentKey *rsa.PrivateKey) (*x509.Certificate, []byte, []byte) {
privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
@@ -49,6 +50,9 @@ func GenerateSignedCertificate(cert, parentCert *x509.Certificate, parentKey *rs
pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derCert}),
pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(privateKey)})
}
// GenerateSelfSignedCertificateWithPrivateKey generates a self signed
// certificate with the given private key
func GenerateSelfSignedCertificateWithPrivateKey(cert *x509.Certificate, privateKey *rsa.PrivateKey) (*x509.Certificate, []byte) {
derCert, err := x509.CreateCertificate(rand.Reader, cert, cert, &privateKey.PublicKey, privateKey)
if err != nil {
@@ -63,6 +67,7 @@ func GenerateSelfSignedCertificateWithPrivateKey(cert *x509.Certificate, private
return genCert, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derCert})
}
// GenerateCertificateTemplate generates the template used to issue test certificates
func GenerateCertificateTemplate(expiry time.Time) *x509.Certificate {
return &x509.Certificate{
BasicConstraintsValid: true,


@@ -1,27 +0,0 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,25 +0,0 @@
# Go's `text/template` package with newline elision
This is a fork of Go 1.4's [text/template](http://golang.org/pkg/text/template/) package with one addition: a backslash immediately after a closing delimiter will delete all subsequent newlines until a non-newline.
eg.
```
{{if true}}\
hello
{{end}}\
```
Will result in:
```
hello\n
```
Rather than:
```
\n
hello\n
\n
```


@@ -1,406 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package template implements data-driven templates for generating textual output.
To generate HTML output, see package html/template, which has the same interface
as this package but automatically secures HTML output against certain attacks.
Templates are executed by applying them to a data structure. Annotations in the
template refer to elements of the data structure (typically a field of a struct
or a key in a map) to control execution and derive values to be displayed.
Execution of the template walks the structure and sets the cursor, represented
by a period '.' and called "dot", to the value at the current location in the
structure as execution proceeds.
The input text for a template is UTF-8-encoded text in any format.
"Actions"--data evaluations or control structures--are delimited by
"{{" and "}}"; all text outside actions is copied to the output unchanged.
Actions may not span newlines, although comments can.
Once parsed, a template may be executed safely in parallel.
Here is a trivial example that prints "17 items are made of wool".
type Inventory struct {
Material string
Count uint
}
sweaters := Inventory{"wool", 17}
tmpl, err := template.New("test").Parse("{{.Count}} items are made of {{.Material}}")
if err != nil { panic(err) }
err = tmpl.Execute(os.Stdout, sweaters)
if err != nil { panic(err) }
More intricate examples appear below.
Actions
Here is the list of actions. "Arguments" and "pipelines" are evaluations of
data, defined in detail below.
*/
// {{/* a comment */}}
// A comment; discarded. May contain newlines.
// Comments do not nest and must start and end at the
// delimiters, as shown here.
/*
{{pipeline}}
The default textual representation of the value of the pipeline
is copied to the output.
{{if pipeline}} T1 {{end}}
If the value of the pipeline is empty, no output is generated;
otherwise, T1 is executed. The empty values are false, 0, any
nil pointer or interface value, and any array, slice, map, or
string of length zero.
Dot is unaffected.
{{if pipeline}} T1 {{else}} T0 {{end}}
If the value of the pipeline is empty, T0 is executed;
otherwise, T1 is executed. Dot is unaffected.
{{if pipeline}} T1 {{else if pipeline}} T0 {{end}}
To simplify the appearance of if-else chains, the else action
of an if may include another if directly; the effect is exactly
the same as writing
{{if pipeline}} T1 {{else}}{{if pipeline}} T0 {{end}}{{end}}
{{range pipeline}} T1 {{end}}
The value of the pipeline must be an array, slice, map, or channel.
If the value of the pipeline has length zero, nothing is output;
otherwise, dot is set to the successive elements of the array,
slice, or map and T1 is executed. If the value is a map and the
keys are of basic type with a defined order ("comparable"), the
elements will be visited in sorted key order.
{{range pipeline}} T1 {{else}} T0 {{end}}
The value of the pipeline must be an array, slice, map, or channel.
If the value of the pipeline has length zero, dot is unaffected and
T0 is executed; otherwise, dot is set to the successive elements
of the array, slice, or map and T1 is executed.
{{template "name"}}
The template with the specified name is executed with nil data.
{{template "name" pipeline}}
The template with the specified name is executed with dot set
to the value of the pipeline.
{{with pipeline}} T1 {{end}}
If the value of the pipeline is empty, no output is generated;
otherwise, dot is set to the value of the pipeline and T1 is
executed.
{{with pipeline}} T1 {{else}} T0 {{end}}
If the value of the pipeline is empty, dot is unaffected and T0
is executed; otherwise, dot is set to the value of the pipeline
and T1 is executed.
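As a rough illustration of the actions listed above (if/else, range/else, and with), the following sketch executes them against an invented map value; the Ready, Items, and Owner keys are placeholders, not part of the package:
```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	const src = `{{if .Ready}}ready{{else}}not ready{{end}}
{{range .Items}}- {{.}}
{{else}}(no items)
{{end}}{{with .Owner}}owner: {{.}}{{end}}
`
	data := map[string]interface{}{ // hypothetical data shape
		"Ready": true,
		"Items": []string{"a", "b"},
		"Owner": "alice",
	}
	t := template.Must(template.New("actions").Parse(src))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: ready, then "- a" and "- b", then "owner: alice"
}
```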
Arguments
An argument is a simple value, denoted by one of the following.
- A boolean, string, character, integer, floating-point, imaginary
or complex constant in Go syntax. These behave like Go's untyped
constants, although raw strings may not span newlines.
- The keyword nil, representing an untyped Go nil.
- The character '.' (period):
.
The result is the value of dot.
- A variable name, which is a (possibly empty) alphanumeric string
preceded by a dollar sign, such as
$piOver2
or
$
The result is the value of the variable.
Variables are described below.
- The name of a field of the data, which must be a struct, preceded
by a period, such as
.Field
The result is the value of the field. Field invocations may be
chained:
.Field1.Field2
Fields can also be evaluated on variables, including chaining:
$x.Field1.Field2
- The name of a key of the data, which must be a map, preceded
by a period, such as
.Key
The result is the map element value indexed by the key.
Key invocations may be chained and combined with fields to any
depth:
.Field1.Key1.Field2.Key2
Although the key must be an alphanumeric identifier, unlike with
field names it does not need to start with an upper case letter.
Keys can also be evaluated on variables, including chaining:
$x.key1.key2
- The name of a niladic method of the data, preceded by a period,
such as
.Method
The result is the value of invoking the method with dot as the
receiver, dot.Method(). Such a method must have one return value (of
any type) or two return values, the second of which is an error.
If it has two and the returned error is non-nil, execution terminates
and an error is returned to the caller as the value of Execute.
Method invocations may be chained and combined with fields and keys
to any depth:
.Field1.Key1.Method1.Field2.Key2.Method2
Methods can also be evaluated on variables, including chaining:
$x.Method1.Field
- The name of a niladic function, such as
fun
The result is the value of invoking the function, fun(). The return
types and values behave as in methods. Functions and function
names are described below.
- A parenthesized instance of one of the above, for grouping. The result
may be accessed by a field or map key invocation.
print (.F1 arg1) (.F2 arg2)
(.StructValuedMethod "arg").Field
Arguments may evaluate to any type; if they are pointers the implementation
automatically indirects to the base type when required.
If an evaluation yields a function value, such as a function-valued
field of a struct, the function is not invoked automatically, but it
can be used as a truth value for an if action and the like. To invoke
it, use the call function, defined below.
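The argument forms above (field chains, map keys, a niladic method, and a func-valued field invoked via call) can be sketched as follows; the page type, its fields, and its Upper method are invented purely for illustration:
```go
package main

import (
	"os"
	"strings"

	"github.com/alecthomas/template"
)

// page is a hypothetical data type used only to illustrate argument forms.
type page struct {
	Title string
	Meta  map[string]string // map keys need not start with an upper case letter
	Add   func(a, b int) int
}

// Upper is a niladic method with a single return value.
func (p page) Upper() string { return strings.ToUpper(p.Title) }

func main() {
	const src = `{{.Title}} by {{.Meta.author}} / {{.Upper}} / {{call .Add 1 2}}
{{$p := .}}{{$p.Title}}
`
	p := page{
		Title: "intro",
		Meta:  map[string]string{"author": "ann"},
		Add:   func(a, b int) int { return a + b },
	}
	t := template.Must(template.New("args").Parse(src))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
	// Prints: intro by ann / INTRO / 3, then intro
}
```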
A pipeline is a possibly chained sequence of "commands". A command is a simple
value (argument) or a function or method call, possibly with multiple arguments:
Argument
The result is the value of evaluating the argument.
.Method [Argument...]
The method can be alone or the last element of a chain but,
unlike methods in the middle of a chain, it can take arguments.
The result is the value of calling the method with the
arguments:
dot.Method(Argument1, etc.)
functionName [Argument...]
The result is the value of calling the function associated
with the name:
function(Argument1, etc.)
Functions and function names are described below.
Pipelines
A pipeline may be "chained" by separating a sequence of commands with pipeline
characters '|'. In a chained pipeline, the result of each command is
passed as the last argument of the following command. The output of the final
command in the pipeline is the value of the pipeline.
The output of a command will be either one value or two values, the second of
which has type error. If that second value is present and evaluates to
non-nil, execution terminates and the error is returned to the caller of
Execute.
Variables
A pipeline inside an action may initialize a variable to capture the result.
The initialization has syntax
$variable := pipeline
where $variable is the name of the variable. An action that declares a
variable produces no output.
If a "range" action initializes a variable, the variable is set to the
successive elements of the iteration. Also, a "range" may declare two
variables, separated by a comma:
range $index, $element := pipeline
in which case $index and $element are set to the successive values of the
array/slice index or map key and element, respectively. Note that if there is
only one variable, it is assigned the element; this is opposite to the
convention in Go range clauses.
A variable's scope extends to the "end" action of the control structure ("if",
"with", or "range") in which it is declared, or to the end of the template if
there is no such control structure. A template invocation does not inherit
variables from the point of its invocation.
When execution begins, $ is set to the data argument passed to Execute, that is,
to the starting value of dot.
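For instance, the two-variable form of range and the root variable $ might be exercised like this; the Items and Name keys are invented for the example:
```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// $i is the index and $v the element; note the order relative to Go's
	// own range clause, as explained above.
	const src = `{{range $i, $v := .Items}}{{$i}}={{$v}} {{end}}$ is {{$.Name}}
`
	data := map[string]interface{}{ // hypothetical data
		"Items": []string{"x", "y"},
		"Name":  "root",
	}
	t := template.Must(template.New("vars").Parse(src))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: 0=x 1=y $ is root
}
```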
Examples
Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output":
{{"\"output\""}}
A string constant.
{{`"output"`}}
A raw string constant.
{{printf "%q" "output"}}
A function call.
{{"output" | printf "%q"}}
A function call whose final argument comes from the previous
command.
{{printf "%q" (print "out" "put")}}
A parenthesized argument.
{{"put" | printf "%s%s" "out" | printf "%q"}}
A more elaborate call.
{{"output" | printf "%s" | printf "%q"}}
A longer chain.
{{with "output"}}{{printf "%q" .}}{{end}}
A with action using dot.
{{with $x := "output" | printf "%q"}}{{$x}}{{end}}
A with action that creates and uses a variable.
{{with $x := "output"}}{{printf "%q" $x}}{{end}}
A with action that uses the variable in another action.
{{with $x := "output"}}{{$x | printf "%q"}}{{end}}
The same, but pipelined.
Functions
During execution functions are found in two function maps: first in the
template, then in the global function map. By default, no functions are defined
in the template but the Funcs method can be used to add them.
Predefined global functions are named as follows.
and
Returns the boolean AND of its arguments by returning the
first empty argument or the last argument, that is,
"and x y" behaves as "if x then y else x". All the
arguments are evaluated.
call
Returns the result of calling the first argument, which
must be a function, with the remaining arguments as parameters.
Thus "call .X.Y 1 2" is, in Go notation, dot.X.Y(1, 2) where
Y is a func-valued field, map entry, or the like.
The first argument must be the result of an evaluation
that yields a value of function type (as distinct from
a predefined function such as print). The function must
return either one or two result values, the second of which
is of type error. If the arguments don't match the function
or the returned error value is non-nil, execution stops.
html
Returns the escaped HTML equivalent of the textual
representation of its arguments.
index
Returns the result of indexing its first argument by the
following arguments. Thus "index x 1 2 3" is, in Go syntax,
x[1][2][3]. Each indexed item must be a map, slice, or array.
js
Returns the escaped JavaScript equivalent of the textual
representation of its arguments.
len
Returns the integer length of its argument.
not
Returns the boolean negation of its single argument.
or
Returns the boolean OR of its arguments by returning the
first non-empty argument or the last argument, that is,
"or x y" behaves as "if x then x else y". All the
arguments are evaluated.
print
An alias for fmt.Sprint
printf
An alias for fmt.Sprintf
println
An alias for fmt.Sprintln
urlquery
Returns the escaped value of the textual representation of
its arguments in a form suitable for embedding in a URL query.
The boolean functions take any zero value to be false and a non-zero
value to be true.
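A small sketch that strings together a few of the built-ins above (len, index, printf, and, not); the data keys are placeholders:
```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	const src = `{{printf "%d item(s), first is %q" (len .Xs) (index .Xs 0)}}
{{if and .On (not .Muted)}}audible{{end}}
`
	data := map[string]interface{}{ // hypothetical data
		"Xs":    []string{"alpha", "beta"},
		"On":    true,
		"Muted": false,
	}
	t := template.Must(template.New("builtins").Parse(src))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Prints: 2 item(s), first is "alpha"
	//         audible
}
```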
There is also a set of binary comparison operators defined as
functions:
eq
Returns the boolean truth of arg1 == arg2
ne
Returns the boolean truth of arg1 != arg2
lt
Returns the boolean truth of arg1 < arg2
le
Returns the boolean truth of arg1 <= arg2
gt
Returns the boolean truth of arg1 > arg2
ge
Returns the boolean truth of arg1 >= arg2
For simpler multi-way equality tests, eq (only) accepts two or more
arguments and compares the second and subsequent to the first,
returning in effect
arg1==arg2 || arg1==arg3 || arg1==arg4 ...
(Unlike with || in Go, however, eq is a function call and all the
arguments will be evaluated.)
The comparison functions work on basic types only (or named basic
types, such as "type Celsius float32"). They implement the Go rules
for comparison of values, except that size and exact type are
ignored, so any integer value, signed or unsigned, may be compared
with any other integer value. (The arithmetic value is compared,
not the bit pattern, so all negative integers are less than all
unsigned integers.) However, as usual, one may not compare an int
with a float32 and so on.
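For example, the multi-argument form of eq and a signed/unsigned lt comparison could look like the sketch below; the status struct is invented for illustration:
```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

// status is a hypothetical struct; Small is signed and Big is unsigned to show
// that the comparison functions ignore the exact integer type.
type status struct {
	Code  int
	Small int
	Big   uint
}

func main() {
	const src = `{{if eq .Code 200 201 204}}success{{else}}failure{{end}}
{{if lt .Small .Big}}small < big{{end}}
`
	t := template.Must(template.New("cmp").Parse(src))
	if err := t.Execute(os.Stdout, status{Code: 201, Small: -1, Big: 7}); err != nil {
		panic(err)
	}
	// Prints: success
	//         small < big
}
```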
Associated templates
Each template is named by a string specified when it is created. Also, each
template is associated with zero or more other templates that it may invoke by
name; such associations are transitive and form a name space of templates.
A template may use a template invocation to instantiate another associated
template; see the explanation of the "template" action above. The name must be
that of a template associated with the template that contains the invocation.
Nested template definitions
When parsing a template, another template may be defined and associated with the
template being parsed. Template definitions must appear at the top level of the
template, much like global variables in a Go program.
The syntax of such definitions is to surround each template declaration with a
"define" and "end" action.
The define action names the template being created by providing a string
constant. Here is a simple example:
`{{define "T1"}}ONE{{end}}
{{define "T2"}}TWO{{end}}
{{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}
{{template "T3"}}`
This defines two templates, T1 and T2, and a third T3 that invokes the other two
when it is executed. Finally it invokes T3. If executed this template will
produce the text
ONE TWO
By construction, a template may reside in only one association. If it's
necessary to have a template addressable from multiple associations, the
template definition must be parsed multiple times to create distinct *Template
values, or must be copied with the Clone or AddParseTree method.
Parse may be called multiple times to assemble the various associated templates;
see the ParseFiles and ParseGlob functions and methods for simple ways to parse
related templates stored in files.
A template may be executed directly or through ExecuteTemplate, which executes
an associated template identified by name. To invoke our example above, we
might write,
err := tmpl.Execute(os.Stdout, "no data needed")
if err != nil {
log.Fatalf("execution failed: %s", err)
}
or to invoke a particular template explicitly by name,
err := tmpl.ExecuteTemplate(os.Stdout, "T2", "no data needed")
if err != nil {
log.Fatalf("execution failed: %s", err)
}
*/
package template


@@ -1,845 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"bytes"
"fmt"
"io"
"reflect"
"runtime"
"sort"
"strings"
"github.com/alecthomas/template/parse"
)
// state represents the state of an execution. It's not part of the
// template so that multiple executions of the same template
// can execute in parallel.
type state struct {
tmpl *Template
wr io.Writer
node parse.Node // current node, for errors
vars []variable // push-down stack of variable values.
}
// variable holds the dynamic value of a variable such as $, $x etc.
type variable struct {
name string
value reflect.Value
}
// push pushes a new variable on the stack.
func (s *state) push(name string, value reflect.Value) {
s.vars = append(s.vars, variable{name, value})
}
// mark returns the length of the variable stack.
func (s *state) mark() int {
return len(s.vars)
}
// pop pops the variable stack up to the mark.
func (s *state) pop(mark int) {
s.vars = s.vars[0:mark]
}
// setVar overwrites the top-nth variable on the stack. Used by range iterations.
func (s *state) setVar(n int, value reflect.Value) {
s.vars[len(s.vars)-n].value = value
}
// varValue returns the value of the named variable.
func (s *state) varValue(name string) reflect.Value {
for i := s.mark() - 1; i >= 0; i-- {
if s.vars[i].name == name {
return s.vars[i].value
}
}
s.errorf("undefined variable: %s", name)
return zero
}
var zero reflect.Value
// at marks the state to be on node n, for error reporting.
func (s *state) at(node parse.Node) {
s.node = node
}
// doublePercent returns the string with %'s replaced by %%, if necessary,
// so it can be used safely inside a Printf format string.
func doublePercent(str string) string {
if strings.Contains(str, "%") {
str = strings.Replace(str, "%", "%%", -1)
}
return str
}
// errorf formats the error and terminates processing.
func (s *state) errorf(format string, args ...interface{}) {
name := doublePercent(s.tmpl.Name())
if s.node == nil {
format = fmt.Sprintf("template: %s: %s", name, format)
} else {
location, context := s.tmpl.ErrorContext(s.node)
format = fmt.Sprintf("template: %s: executing %q at <%s>: %s", location, name, doublePercent(context), format)
}
panic(fmt.Errorf(format, args...))
}
// errRecover is the handler that turns panics into returns from the top
// level of Execute.
func errRecover(errp *error) {
e := recover()
if e != nil {
switch err := e.(type) {
case runtime.Error:
panic(e)
case error:
*errp = err
default:
panic(e)
}
}
}
// ExecuteTemplate applies the template associated with t that has the given name
// to the specified data object and writes the output to wr.
// If an error occurs executing the template or writing its output,
// execution stops, but partial results may already have been written to
// the output writer.
// A template may be executed safely in parallel.
func (t *Template) ExecuteTemplate(wr io.Writer, name string, data interface{}) error {
tmpl := t.tmpl[name]
if tmpl == nil {
return fmt.Errorf("template: no template %q associated with template %q", name, t.name)
}
return tmpl.Execute(wr, data)
}
// Execute applies a parsed template to the specified data object,
// and writes the output to wr.
// If an error occurs executing the template or writing its output,
// execution stops, but partial results may already have been written to
// the output writer.
// A template may be executed safely in parallel.
func (t *Template) Execute(wr io.Writer, data interface{}) (err error) {
defer errRecover(&err)
value := reflect.ValueOf(data)
state := &state{
tmpl: t,
wr: wr,
vars: []variable{{"$", value}},
}
t.init()
if t.Tree == nil || t.Root == nil {
var b bytes.Buffer
for name, tmpl := range t.tmpl {
if tmpl.Tree == nil || tmpl.Root == nil {
continue
}
if b.Len() > 0 {
b.WriteString(", ")
}
fmt.Fprintf(&b, "%q", name)
}
var s string
if b.Len() > 0 {
s = "; defined templates are: " + b.String()
}
state.errorf("%q is an incomplete or empty template%s", t.Name(), s)
}
state.walk(value, t.Root)
return
}
// Walk functions step through the major pieces of the template structure,
// generating output as they go.
func (s *state) walk(dot reflect.Value, node parse.Node) {
s.at(node)
switch node := node.(type) {
case *parse.ActionNode:
// Do not pop variables so they persist until next end.
// Also, if the action declares variables, don't print the result.
val := s.evalPipeline(dot, node.Pipe)
if len(node.Pipe.Decl) == 0 {
s.printValue(node, val)
}
case *parse.IfNode:
s.walkIfOrWith(parse.NodeIf, dot, node.Pipe, node.List, node.ElseList)
case *parse.ListNode:
for _, node := range node.Nodes {
s.walk(dot, node)
}
case *parse.RangeNode:
s.walkRange(dot, node)
case *parse.TemplateNode:
s.walkTemplate(dot, node)
case *parse.TextNode:
if _, err := s.wr.Write(node.Text); err != nil {
s.errorf("%s", err)
}
case *parse.WithNode:
s.walkIfOrWith(parse.NodeWith, dot, node.Pipe, node.List, node.ElseList)
default:
s.errorf("unknown node: %s", node)
}
}
// walkIfOrWith walks an 'if' or 'with' node. The two control structures
// are identical in behavior except that 'with' sets dot.
func (s *state) walkIfOrWith(typ parse.NodeType, dot reflect.Value, pipe *parse.PipeNode, list, elseList *parse.ListNode) {
defer s.pop(s.mark())
val := s.evalPipeline(dot, pipe)
truth, ok := isTrue(val)
if !ok {
s.errorf("if/with can't use %v", val)
}
if truth {
if typ == parse.NodeWith {
s.walk(val, list)
} else {
s.walk(dot, list)
}
} else if elseList != nil {
s.walk(dot, elseList)
}
}
// isTrue reports whether the value is 'true', in the sense of not the zero of its type,
// and whether the value has a meaningful truth value.
func isTrue(val reflect.Value) (truth, ok bool) {
if !val.IsValid() {
// Something like var x interface{}, never set. It's a form of nil.
return false, true
}
switch val.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
truth = val.Len() > 0
case reflect.Bool:
truth = val.Bool()
case reflect.Complex64, reflect.Complex128:
truth = val.Complex() != 0
case reflect.Chan, reflect.Func, reflect.Ptr, reflect.Interface:
truth = !val.IsNil()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
truth = val.Int() != 0
case reflect.Float32, reflect.Float64:
truth = val.Float() != 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
truth = val.Uint() != 0
case reflect.Struct:
truth = true // Struct values are always true.
default:
return
}
return truth, true
}
func (s *state) walkRange(dot reflect.Value, r *parse.RangeNode) {
s.at(r)
defer s.pop(s.mark())
val, _ := indirect(s.evalPipeline(dot, r.Pipe))
// mark top of stack before any variables in the body are pushed.
mark := s.mark()
oneIteration := func(index, elem reflect.Value) {
// Set top var (lexically the second if there are two) to the element.
if len(r.Pipe.Decl) > 0 {
s.setVar(1, elem)
}
// Set next var (lexically the first if there are two) to the index.
if len(r.Pipe.Decl) > 1 {
s.setVar(2, index)
}
s.walk(elem, r.List)
s.pop(mark)
}
switch val.Kind() {
case reflect.Array, reflect.Slice:
if val.Len() == 0 {
break
}
for i := 0; i < val.Len(); i++ {
oneIteration(reflect.ValueOf(i), val.Index(i))
}
return
case reflect.Map:
if val.Len() == 0 {
break
}
for _, key := range sortKeys(val.MapKeys()) {
oneIteration(key, val.MapIndex(key))
}
return
case reflect.Chan:
if val.IsNil() {
break
}
i := 0
for ; ; i++ {
elem, ok := val.Recv()
if !ok {
break
}
oneIteration(reflect.ValueOf(i), elem)
}
if i == 0 {
break
}
return
case reflect.Invalid:
break // An invalid value is likely a nil map, etc. and acts like an empty map.
default:
s.errorf("range can't iterate over %v", val)
}
if r.ElseList != nil {
s.walk(dot, r.ElseList)
}
}
func (s *state) walkTemplate(dot reflect.Value, t *parse.TemplateNode) {
s.at(t)
tmpl := s.tmpl.tmpl[t.Name]
if tmpl == nil {
s.errorf("template %q not defined", t.Name)
}
// Variables declared by the pipeline persist.
dot = s.evalPipeline(dot, t.Pipe)
newState := *s
newState.tmpl = tmpl
// No dynamic scoping: template invocations inherit no variables.
newState.vars = []variable{{"$", dot}}
newState.walk(dot, tmpl.Root)
}
// Eval functions evaluate pipelines, commands, and their elements and extract
// values from the data structure by examining fields, calling methods, and so on.
// The printing of those values happens only through walk functions.
// evalPipeline returns the value acquired by evaluating a pipeline. If the
// pipeline has a variable declaration, the variable will be pushed on the
// stack. Callers should therefore pop the stack after they are finished
// executing commands depending on the pipeline value.
func (s *state) evalPipeline(dot reflect.Value, pipe *parse.PipeNode) (value reflect.Value) {
if pipe == nil {
return
}
s.at(pipe)
for _, cmd := range pipe.Cmds {
value = s.evalCommand(dot, cmd, value) // previous value is this one's final arg.
// If the object has type interface{}, dig down one level to the thing inside.
if value.Kind() == reflect.Interface && value.Type().NumMethod() == 0 {
value = reflect.ValueOf(value.Interface()) // lovely!
}
}
for _, variable := range pipe.Decl {
s.push(variable.Ident[0], value)
}
return value
}
func (s *state) notAFunction(args []parse.Node, final reflect.Value) {
if len(args) > 1 || final.IsValid() {
s.errorf("can't give argument to non-function %s", args[0])
}
}
func (s *state) evalCommand(dot reflect.Value, cmd *parse.CommandNode, final reflect.Value) reflect.Value {
firstWord := cmd.Args[0]
switch n := firstWord.(type) {
case *parse.FieldNode:
return s.evalFieldNode(dot, n, cmd.Args, final)
case *parse.ChainNode:
return s.evalChainNode(dot, n, cmd.Args, final)
case *parse.IdentifierNode:
// Must be a function.
return s.evalFunction(dot, n, cmd, cmd.Args, final)
case *parse.PipeNode:
// Parenthesized pipeline. The arguments are all inside the pipeline; final is ignored.
return s.evalPipeline(dot, n)
case *parse.VariableNode:
return s.evalVariableNode(dot, n, cmd.Args, final)
}
s.at(firstWord)
s.notAFunction(cmd.Args, final)
switch word := firstWord.(type) {
case *parse.BoolNode:
return reflect.ValueOf(word.True)
case *parse.DotNode:
return dot
case *parse.NilNode:
s.errorf("nil is not a command")
case *parse.NumberNode:
return s.idealConstant(word)
case *parse.StringNode:
return reflect.ValueOf(word.Text)
}
s.errorf("can't evaluate command %q", firstWord)
panic("not reached")
}
// idealConstant is called to return the value of a number in a context where
// we don't know the type. In that case, the syntax of the number tells us
// its type, and we use Go rules to resolve. Note there is no such thing as
// a uint ideal constant in this situation - the value must be of int type.
func (s *state) idealConstant(constant *parse.NumberNode) reflect.Value {
// These are ideal constants but we don't know the type
// and we have no context. (If it was a method argument,
// we'd know what we need.) The syntax guides us to some extent.
s.at(constant)
switch {
case constant.IsComplex:
return reflect.ValueOf(constant.Complex128) // incontrovertible.
case constant.IsFloat && !isHexConstant(constant.Text) && strings.IndexAny(constant.Text, ".eE") >= 0:
return reflect.ValueOf(constant.Float64)
case constant.IsInt:
n := int(constant.Int64)
if int64(n) != constant.Int64 {
s.errorf("%s overflows int", constant.Text)
}
return reflect.ValueOf(n)
case constant.IsUint:
s.errorf("%s overflows int", constant.Text)
}
return zero
}
func isHexConstant(s string) bool {
return len(s) > 2 && s[0] == '0' && (s[1] == 'x' || s[1] == 'X')
}
func (s *state) evalFieldNode(dot reflect.Value, field *parse.FieldNode, args []parse.Node, final reflect.Value) reflect.Value {
s.at(field)
return s.evalFieldChain(dot, dot, field, field.Ident, args, final)
}
func (s *state) evalChainNode(dot reflect.Value, chain *parse.ChainNode, args []parse.Node, final reflect.Value) reflect.Value {
s.at(chain)
// (pipe).Field1.Field2 has pipe as .Node, fields as .Field. Eval the pipeline, then the fields.
pipe := s.evalArg(dot, nil, chain.Node)
if len(chain.Field) == 0 {
s.errorf("internal error: no fields in evalChainNode")
}
return s.evalFieldChain(dot, pipe, chain, chain.Field, args, final)
}
func (s *state) evalVariableNode(dot reflect.Value, variable *parse.VariableNode, args []parse.Node, final reflect.Value) reflect.Value {
// $x.Field has $x as the first ident, Field as the second. Eval the var, then the fields.
s.at(variable)
value := s.varValue(variable.Ident[0])
if len(variable.Ident) == 1 {
s.notAFunction(args, final)
return value
}
return s.evalFieldChain(dot, value, variable, variable.Ident[1:], args, final)
}
// evalFieldChain evaluates .X.Y.Z possibly followed by arguments.
// dot is the environment in which to evaluate arguments, while
// receiver is the value being walked along the chain.
func (s *state) evalFieldChain(dot, receiver reflect.Value, node parse.Node, ident []string, args []parse.Node, final reflect.Value) reflect.Value {
n := len(ident)
for i := 0; i < n-1; i++ {
receiver = s.evalField(dot, ident[i], node, nil, zero, receiver)
}
// Now if it's a method, it gets the arguments.
return s.evalField(dot, ident[n-1], node, args, final, receiver)
}
func (s *state) evalFunction(dot reflect.Value, node *parse.IdentifierNode, cmd parse.Node, args []parse.Node, final reflect.Value) reflect.Value {
s.at(node)
name := node.Ident
function, ok := findFunction(name, s.tmpl)
if !ok {
s.errorf("%q is not a defined function", name)
}
return s.evalCall(dot, function, cmd, name, args, final)
}
// evalField evaluates an expression like (.Field) or (.Field arg1 arg2).
// The 'final' argument represents the return value from the preceding
// value of the pipeline, if any.
func (s *state) evalField(dot reflect.Value, fieldName string, node parse.Node, args []parse.Node, final, receiver reflect.Value) reflect.Value {
if !receiver.IsValid() {
return zero
}
typ := receiver.Type()
receiver, _ = indirect(receiver)
// Unless it's an interface, need to get to a value of type *T to guarantee
// we see all methods of T and *T.
ptr := receiver
if ptr.Kind() != reflect.Interface && ptr.CanAddr() {
ptr = ptr.Addr()
}
if method := ptr.MethodByName(fieldName); method.IsValid() {
return s.evalCall(dot, method, node, fieldName, args, final)
}
hasArgs := len(args) > 1 || final.IsValid()
// It's not a method; must be a field of a struct or an element of a map. The receiver must not be nil.
receiver, isNil := indirect(receiver)
if isNil {
s.errorf("nil pointer evaluating %s.%s", typ, fieldName)
}
switch receiver.Kind() {
case reflect.Struct:
tField, ok := receiver.Type().FieldByName(fieldName)
if ok {
field := receiver.FieldByIndex(tField.Index)
if tField.PkgPath != "" { // field is unexported
s.errorf("%s is an unexported field of struct type %s", fieldName, typ)
}
// If it's a function, we must call it.
if hasArgs {
s.errorf("%s has arguments but cannot be invoked as function", fieldName)
}
return field
}
s.errorf("%s is not a field of struct type %s", fieldName, typ)
case reflect.Map:
// If it's a map, attempt to use the field name as a key.
nameVal := reflect.ValueOf(fieldName)
if nameVal.Type().AssignableTo(receiver.Type().Key()) {
if hasArgs {
s.errorf("%s is not a method but has arguments", fieldName)
}
return receiver.MapIndex(nameVal)
}
}
s.errorf("can't evaluate field %s in type %s", fieldName, typ)
panic("not reached")
}
var (
errorType = reflect.TypeOf((*error)(nil)).Elem()
fmtStringerType = reflect.TypeOf((*fmt.Stringer)(nil)).Elem()
)
// evalCall executes a function or method call. If it's a method, fun already has the receiver bound, so
// it looks just like a function call. The arg list, if non-nil, includes (in the manner of the shell) arg[0]
// as the function itself.
func (s *state) evalCall(dot, fun reflect.Value, node parse.Node, name string, args []parse.Node, final reflect.Value) reflect.Value {
if args != nil {
args = args[1:] // Zeroth arg is function name/node; not passed to function.
}
typ := fun.Type()
numIn := len(args)
if final.IsValid() {
numIn++
}
numFixed := len(args)
if typ.IsVariadic() {
numFixed = typ.NumIn() - 1 // last arg is the variadic one.
if numIn < numFixed {
s.errorf("wrong number of args for %s: want at least %d got %d", name, typ.NumIn()-1, len(args))
}
} else if numIn < typ.NumIn()-1 || !typ.IsVariadic() && numIn != typ.NumIn() {
s.errorf("wrong number of args for %s: want %d got %d", name, typ.NumIn(), len(args))
}
if !goodFunc(typ) {
// TODO: This could still be a confusing error; maybe goodFunc should provide info.
s.errorf("can't call method/function %q with %d results", name, typ.NumOut())
}
// Build the arg list.
argv := make([]reflect.Value, numIn)
// Args must be evaluated. Fixed args first.
i := 0
for ; i < numFixed && i < len(args); i++ {
argv[i] = s.evalArg(dot, typ.In(i), args[i])
}
// Now the ... args.
if typ.IsVariadic() {
argType := typ.In(typ.NumIn() - 1).Elem() // Argument is a slice.
for ; i < len(args); i++ {
argv[i] = s.evalArg(dot, argType, args[i])
}
}
// Add final value if necessary.
if final.IsValid() {
t := typ.In(typ.NumIn() - 1)
if typ.IsVariadic() {
t = t.Elem()
}
argv[i] = s.validateType(final, t)
}
result := fun.Call(argv)
// If we have an error that is not nil, stop execution and return that error to the caller.
if len(result) == 2 && !result[1].IsNil() {
s.at(node)
s.errorf("error calling %s: %s", name, result[1].Interface().(error))
}
return result[0]
}
// canBeNil reports whether an untyped nil can be assigned to the type. See reflect.Zero.
func canBeNil(typ reflect.Type) bool {
switch typ.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return true
}
return false
}
// validateType guarantees that the value is valid and assignable to the type.
func (s *state) validateType(value reflect.Value, typ reflect.Type) reflect.Value {
if !value.IsValid() {
if typ == nil || canBeNil(typ) {
// An untyped nil interface{}. Accept as a proper nil value.
return reflect.Zero(typ)
}
s.errorf("invalid value; expected %s", typ)
}
if typ != nil && !value.Type().AssignableTo(typ) {
if value.Kind() == reflect.Interface && !value.IsNil() {
value = value.Elem()
if value.Type().AssignableTo(typ) {
return value
}
// fallthrough
}
// Does one dereference or indirection work? We could do more, as we
// do with method receivers, but that gets messy and method receivers
// are much more constrained, so it makes more sense there than here.
// Besides, one is almost always all you need.
switch {
case value.Kind() == reflect.Ptr && value.Type().Elem().AssignableTo(typ):
value = value.Elem()
if !value.IsValid() {
s.errorf("dereference of nil pointer of type %s", typ)
}
case reflect.PtrTo(value.Type()).AssignableTo(typ) && value.CanAddr():
value = value.Addr()
default:
s.errorf("wrong type for value; expected %s; got %s", typ, value.Type())
}
}
return value
}
func (s *state) evalArg(dot reflect.Value, typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
switch arg := n.(type) {
case *parse.DotNode:
return s.validateType(dot, typ)
case *parse.NilNode:
if canBeNil(typ) {
return reflect.Zero(typ)
}
s.errorf("cannot assign nil to %s", typ)
case *parse.FieldNode:
return s.validateType(s.evalFieldNode(dot, arg, []parse.Node{n}, zero), typ)
case *parse.VariableNode:
return s.validateType(s.evalVariableNode(dot, arg, nil, zero), typ)
case *parse.PipeNode:
return s.validateType(s.evalPipeline(dot, arg), typ)
case *parse.IdentifierNode:
return s.evalFunction(dot, arg, arg, nil, zero)
case *parse.ChainNode:
return s.validateType(s.evalChainNode(dot, arg, nil, zero), typ)
}
switch typ.Kind() {
case reflect.Bool:
return s.evalBool(typ, n)
case reflect.Complex64, reflect.Complex128:
return s.evalComplex(typ, n)
case reflect.Float32, reflect.Float64:
return s.evalFloat(typ, n)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return s.evalInteger(typ, n)
case reflect.Interface:
if typ.NumMethod() == 0 {
return s.evalEmptyInterface(dot, n)
}
case reflect.String:
return s.evalString(typ, n)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return s.evalUnsignedInteger(typ, n)
}
s.errorf("can't handle %s for arg of type %s", n, typ)
panic("not reached")
}
func (s *state) evalBool(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.BoolNode); ok {
value := reflect.New(typ).Elem()
value.SetBool(n.True)
return value
}
s.errorf("expected bool; found %s", n)
panic("not reached")
}
func (s *state) evalString(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.StringNode); ok {
value := reflect.New(typ).Elem()
value.SetString(n.Text)
return value
}
s.errorf("expected string; found %s", n)
panic("not reached")
}
func (s *state) evalInteger(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsInt {
value := reflect.New(typ).Elem()
value.SetInt(n.Int64)
return value
}
s.errorf("expected integer; found %s", n)
panic("not reached")
}
func (s *state) evalUnsignedInteger(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsUint {
value := reflect.New(typ).Elem()
value.SetUint(n.Uint64)
return value
}
s.errorf("expected unsigned integer; found %s", n)
panic("not reached")
}
func (s *state) evalFloat(typ reflect.Type, n parse.Node) reflect.Value {
s.at(n)
if n, ok := n.(*parse.NumberNode); ok && n.IsFloat {
value := reflect.New(typ).Elem()
value.SetFloat(n.Float64)
return value
}
s.errorf("expected float; found %s", n)
panic("not reached")
}
func (s *state) evalComplex(typ reflect.Type, n parse.Node) reflect.Value {
if n, ok := n.(*parse.NumberNode); ok && n.IsComplex {
value := reflect.New(typ).Elem()
value.SetComplex(n.Complex128)
return value
}
s.errorf("expected complex; found %s", n)
panic("not reached")
}
func (s *state) evalEmptyInterface(dot reflect.Value, n parse.Node) reflect.Value {
s.at(n)
switch n := n.(type) {
case *parse.BoolNode:
return reflect.ValueOf(n.True)
case *parse.DotNode:
return dot
case *parse.FieldNode:
return s.evalFieldNode(dot, n, nil, zero)
case *parse.IdentifierNode:
return s.evalFunction(dot, n, n, nil, zero)
case *parse.NilNode:
// NilNode is handled in evalArg, the only place that calls here.
s.errorf("evalEmptyInterface: nil (can't happen)")
case *parse.NumberNode:
return s.idealConstant(n)
case *parse.StringNode:
return reflect.ValueOf(n.Text)
case *parse.VariableNode:
return s.evalVariableNode(dot, n, nil, zero)
case *parse.PipeNode:
return s.evalPipeline(dot, n)
}
s.errorf("can't handle assignment of %s to empty interface argument", n)
panic("not reached")
}
// indirect returns the item at the end of indirection, and a bool to indicate if it's nil.
// We indirect through pointers and empty interfaces (only) because
// non-empty interfaces have methods we might need.
func indirect(v reflect.Value) (rv reflect.Value, isNil bool) {
for ; v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface; v = v.Elem() {
if v.IsNil() {
return v, true
}
if v.Kind() == reflect.Interface && v.NumMethod() > 0 {
break
}
}
return v, false
}
// printValue writes the textual representation of the value to the output of
// the template.
func (s *state) printValue(n parse.Node, v reflect.Value) {
s.at(n)
iface, ok := printableValue(v)
if !ok {
s.errorf("can't print %s of type %s", n, v.Type())
}
fmt.Fprint(s.wr, iface)
}
// printableValue returns the, possibly indirected, interface value inside v that
// is best for a call to a formatted printer.
func printableValue(v reflect.Value) (interface{}, bool) {
if v.Kind() == reflect.Ptr {
v, _ = indirect(v) // fmt.Fprint handles nil.
}
if !v.IsValid() {
return "<no value>", true
}
if !v.Type().Implements(errorType) && !v.Type().Implements(fmtStringerType) {
if v.CanAddr() && (reflect.PtrTo(v.Type()).Implements(errorType) || reflect.PtrTo(v.Type()).Implements(fmtStringerType)) {
v = v.Addr()
} else {
switch v.Kind() {
case reflect.Chan, reflect.Func:
return nil, false
}
}
}
return v.Interface(), true
}
// Types to help sort the keys in a map for reproducible output.
type rvs []reflect.Value
func (x rvs) Len() int { return len(x) }
func (x rvs) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
type rvInts struct{ rvs }
func (x rvInts) Less(i, j int) bool { return x.rvs[i].Int() < x.rvs[j].Int() }
type rvUints struct{ rvs }
func (x rvUints) Less(i, j int) bool { return x.rvs[i].Uint() < x.rvs[j].Uint() }
type rvFloats struct{ rvs }
func (x rvFloats) Less(i, j int) bool { return x.rvs[i].Float() < x.rvs[j].Float() }
type rvStrings struct{ rvs }
func (x rvStrings) Less(i, j int) bool { return x.rvs[i].String() < x.rvs[j].String() }
// sortKeys sorts (if it can) the slice of reflect.Values, which is a slice of map keys.
func sortKeys(v []reflect.Value) []reflect.Value {
if len(v) <= 1 {
return v
}
switch v[0].Kind() {
case reflect.Float32, reflect.Float64:
sort.Sort(rvFloats{v})
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
sort.Sort(rvInts{v})
case reflect.String:
sort.Sort(rvStrings{v})
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
sort.Sort(rvUints{v})
}
return v
}


@@ -1,598 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"bytes"
"errors"
"fmt"
"io"
"net/url"
"reflect"
"strings"
"unicode"
"unicode/utf8"
)
// FuncMap is the type of the map defining the mapping from names to functions.
// Each function must have either a single return value, or two return values of
// which the second has type error. In that case, if the second (error)
// return value evaluates to non-nil during execution, execution terminates and
// Execute returns that error.
type FuncMap map[string]interface{}
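As a rough sketch of how a FuncMap is installed, the example below uses the Funcs method that the package documentation refers to; the shout and head functions are invented for illustration:
```go
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/alecthomas/template"
)

func main() {
	// "shout" has one result; "head" has two, the second an error, so a
	// failure aborts execution and is returned from Execute.
	funcs := template.FuncMap{
		"shout": strings.ToUpper,
		"head": func(s string, n int) (string, error) {
			if n > len(s) {
				return "", fmt.Errorf("head: %d > len(%q)", n, s)
			}
			return s[:n], nil
		},
	}
	t := template.Must(template.New("f").Funcs(funcs).Parse(`{{shout .}} {{head . 3}}`))
	if err := t.Execute(os.Stdout, "hello"); err != nil {
		panic(err)
	}
	// Prints: HELLO hel
}
```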
var builtins = FuncMap{
"and": and,
"call": call,
"html": HTMLEscaper,
"index": index,
"js": JSEscaper,
"len": length,
"not": not,
"or": or,
"print": fmt.Sprint,
"printf": fmt.Sprintf,
"println": fmt.Sprintln,
"urlquery": URLQueryEscaper,
// Comparisons
"eq": eq, // ==
"ge": ge, // >=
"gt": gt, // >
"le": le, // <=
"lt": lt, // <
"ne": ne, // !=
}
var builtinFuncs = createValueFuncs(builtins)
// createValueFuncs turns a FuncMap into a map[string]reflect.Value
func createValueFuncs(funcMap FuncMap) map[string]reflect.Value {
m := make(map[string]reflect.Value)
addValueFuncs(m, funcMap)
return m
}
// addValueFuncs adds to values the functions in funcs, converting them to reflect.Values.
func addValueFuncs(out map[string]reflect.Value, in FuncMap) {
for name, fn := range in {
v := reflect.ValueOf(fn)
if v.Kind() != reflect.Func {
panic("value for " + name + " not a function")
}
if !goodFunc(v.Type()) {
panic(fmt.Errorf("can't install method/function %q with %d results", name, v.Type().NumOut()))
}
out[name] = v
}
}
// addFuncs adds to values the functions in funcs. It does no checking of the input -
// call addValueFuncs first.
func addFuncs(out, in FuncMap) {
for name, fn := range in {
out[name] = fn
}
}
// goodFunc checks that the function or method has the right result signature.
func goodFunc(typ reflect.Type) bool {
// We allow functions with 1 result or 2 results where the second is an error.
switch {
case typ.NumOut() == 1:
return true
case typ.NumOut() == 2 && typ.Out(1) == errorType:
return true
}
return false
}
// findFunction looks for a function in the template, and global map.
func findFunction(name string, tmpl *Template) (reflect.Value, bool) {
if tmpl != nil && tmpl.common != nil {
if fn := tmpl.execFuncs[name]; fn.IsValid() {
return fn, true
}
}
if fn := builtinFuncs[name]; fn.IsValid() {
return fn, true
}
return reflect.Value{}, false
}
// Indexing.
// index returns the result of indexing its first argument by the following
// arguments. Thus "index x 1 2 3" is, in Go syntax, x[1][2][3]. Each
// indexed item must be a map, slice, or array.
func index(item interface{}, indices ...interface{}) (interface{}, error) {
v := reflect.ValueOf(item)
for _, i := range indices {
index := reflect.ValueOf(i)
var isNil bool
if v, isNil = indirect(v); isNil {
return nil, fmt.Errorf("index of nil pointer")
}
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.String:
var x int64
switch index.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
x = index.Int()
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
x = int64(index.Uint())
default:
return nil, fmt.Errorf("cannot index slice/array with type %s", index.Type())
}
if x < 0 || x >= int64(v.Len()) {
return nil, fmt.Errorf("index out of range: %d", x)
}
v = v.Index(int(x))
case reflect.Map:
if !index.IsValid() {
index = reflect.Zero(v.Type().Key())
}
if !index.Type().AssignableTo(v.Type().Key()) {
return nil, fmt.Errorf("%s is not index type for %s", index.Type(), v.Type())
}
if x := v.MapIndex(index); x.IsValid() {
v = x
} else {
v = reflect.Zero(v.Type().Elem())
}
default:
return nil, fmt.Errorf("can't index item of type %s", v.Type())
}
}
return v.Interface(), nil
}
// Length
// length returns the length of the item, with an error if it has no defined length.
func length(item interface{}) (int, error) {
v, isNil := indirect(reflect.ValueOf(item))
if isNil {
return 0, fmt.Errorf("len of nil pointer")
}
switch v.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return v.Len(), nil
}
return 0, fmt.Errorf("len of type %s", v.Type())
}
// Function invocation
// call returns the result of evaluating the first argument as a function.
// The function must return 1 result, or 2 results, the second of which is an error.
func call(fn interface{}, args ...interface{}) (interface{}, error) {
v := reflect.ValueOf(fn)
typ := v.Type()
if typ.Kind() != reflect.Func {
return nil, fmt.Errorf("non-function of type %s", typ)
}
if !goodFunc(typ) {
return nil, fmt.Errorf("function called with %d args; should be 1 or 2", typ.NumOut())
}
numIn := typ.NumIn()
var dddType reflect.Type
if typ.IsVariadic() {
if len(args) < numIn-1 {
return nil, fmt.Errorf("wrong number of args: got %d want at least %d", len(args), numIn-1)
}
dddType = typ.In(numIn - 1).Elem()
} else {
if len(args) != numIn {
return nil, fmt.Errorf("wrong number of args: got %d want %d", len(args), numIn)
}
}
argv := make([]reflect.Value, len(args))
for i, arg := range args {
value := reflect.ValueOf(arg)
// Compute the expected type. Clumsy because of variadics.
var argType reflect.Type
if !typ.IsVariadic() || i < numIn-1 {
argType = typ.In(i)
} else {
argType = dddType
}
if !value.IsValid() && canBeNil(argType) {
value = reflect.Zero(argType)
}
if !value.Type().AssignableTo(argType) {
return nil, fmt.Errorf("arg %d has type %s; should be %s", i, value.Type(), argType)
}
argv[i] = value
}
result := v.Call(argv)
if len(result) == 2 && !result[1].IsNil() {
return result[0].Interface(), result[1].Interface().(error)
}
return result[0].Interface(), nil
}
// Boolean logic.
func truth(a interface{}) bool {
t, _ := isTrue(reflect.ValueOf(a))
return t
}
// and computes the Boolean AND of its arguments, returning
// the first false argument it encounters, or the last argument.
func and(arg0 interface{}, args ...interface{}) interface{} {
if !truth(arg0) {
return arg0
}
for i := range args {
arg0 = args[i]
if !truth(arg0) {
break
}
}
return arg0
}
// or computes the Boolean OR of its arguments, returning
// the first true argument it encounters, or the last argument.
func or(arg0 interface{}, args ...interface{}) interface{} {
if truth(arg0) {
return arg0
}
for i := range args {
arg0 = args[i]
if truth(arg0) {
break
}
}
return arg0
}
// not returns the Boolean negation of its argument.
func not(arg interface{}) (truth bool) {
truth, _ = isTrue(reflect.ValueOf(arg))
return !truth
}
// Comparison.
// TODO: Perhaps allow comparison between signed and unsigned integers.
var (
errBadComparisonType = errors.New("invalid type for comparison")
errBadComparison = errors.New("incompatible types for comparison")
errNoComparison = errors.New("missing argument for comparison")
)
type kind int
const (
invalidKind kind = iota
boolKind
complexKind
intKind
floatKind
integerKind
stringKind
uintKind
)
func basicKind(v reflect.Value) (kind, error) {
switch v.Kind() {
case reflect.Bool:
return boolKind, nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return intKind, nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return uintKind, nil
case reflect.Float32, reflect.Float64:
return floatKind, nil
case reflect.Complex64, reflect.Complex128:
return complexKind, nil
case reflect.String:
return stringKind, nil
}
return invalidKind, errBadComparisonType
}
// eq evaluates the comparison a == b || a == c || ...
func eq(arg1 interface{}, arg2 ...interface{}) (bool, error) {
v1 := reflect.ValueOf(arg1)
k1, err := basicKind(v1)
if err != nil {
return false, err
}
if len(arg2) == 0 {
return false, errNoComparison
}
for _, arg := range arg2 {
v2 := reflect.ValueOf(arg)
k2, err := basicKind(v2)
if err != nil {
return false, err
}
truth := false
if k1 != k2 {
// Special case: Can compare integer values regardless of type's sign.
switch {
case k1 == intKind && k2 == uintKind:
truth = v1.Int() >= 0 && uint64(v1.Int()) == v2.Uint()
case k1 == uintKind && k2 == intKind:
truth = v2.Int() >= 0 && v1.Uint() == uint64(v2.Int())
default:
return false, errBadComparison
}
} else {
switch k1 {
case boolKind:
truth = v1.Bool() == v2.Bool()
case complexKind:
truth = v1.Complex() == v2.Complex()
case floatKind:
truth = v1.Float() == v2.Float()
case intKind:
truth = v1.Int() == v2.Int()
case stringKind:
truth = v1.String() == v2.String()
case uintKind:
truth = v1.Uint() == v2.Uint()
default:
panic("invalid kind")
}
}
if truth {
return true, nil
}
}
return false, nil
}
// ne evaluates the comparison a != b.
func ne(arg1, arg2 interface{}) (bool, error) {
// != is the inverse of ==.
equal, err := eq(arg1, arg2)
return !equal, err
}
// lt evaluates the comparison a < b.
func lt(arg1, arg2 interface{}) (bool, error) {
v1 := reflect.ValueOf(arg1)
k1, err := basicKind(v1)
if err != nil {
return false, err
}
v2 := reflect.ValueOf(arg2)
k2, err := basicKind(v2)
if err != nil {
return false, err
}
truth := false
if k1 != k2 {
// Special case: Can compare integer values regardless of type's sign.
switch {
case k1 == intKind && k2 == uintKind:
truth = v1.Int() < 0 || uint64(v1.Int()) < v2.Uint()
case k1 == uintKind && k2 == intKind:
truth = v2.Int() >= 0 && v1.Uint() < uint64(v2.Int())
default:
return false, errBadComparison
}
} else {
switch k1 {
case boolKind, complexKind:
return false, errBadComparisonType
case floatKind:
truth = v1.Float() < v2.Float()
case intKind:
truth = v1.Int() < v2.Int()
case stringKind:
truth = v1.String() < v2.String()
case uintKind:
truth = v1.Uint() < v2.Uint()
default:
panic("invalid kind")
}
}
return truth, nil
}
// le evaluates the comparison a <= b.
func le(arg1, arg2 interface{}) (bool, error) {
// <= is < or ==.
lessThan, err := lt(arg1, arg2)
if lessThan || err != nil {
return lessThan, err
}
return eq(arg1, arg2)
}
// gt evaluates the comparison a > b.
func gt(arg1, arg2 interface{}) (bool, error) {
// > is the inverse of <=.
lessOrEqual, err := le(arg1, arg2)
if err != nil {
return false, err
}
return !lessOrEqual, nil
}
// ge evaluates the comparison a >= b.
func ge(arg1, arg2 interface{}) (bool, error) {
// >= is the inverse of <.
lessThan, err := lt(arg1, arg2)
if err != nil {
return false, err
}
return !lessThan, nil
}
// HTML escaping.
var (
htmlQuot = []byte("&#34;") // shorter than "&quot;"
htmlApos = []byte("&#39;") // shorter than "&apos;" and apos was not in HTML until HTML5
htmlAmp = []byte("&amp;")
htmlLt = []byte("&lt;")
htmlGt = []byte("&gt;")
)
// HTMLEscape writes to w the escaped HTML equivalent of the plain text data b.
func HTMLEscape(w io.Writer, b []byte) {
last := 0
for i, c := range b {
var html []byte
switch c {
case '"':
html = htmlQuot
case '\'':
html = htmlApos
case '&':
html = htmlAmp
case '<':
html = htmlLt
case '>':
html = htmlGt
default:
continue
}
w.Write(b[last:i])
w.Write(html)
last = i + 1
}
w.Write(b[last:])
}
// HTMLEscapeString returns the escaped HTML equivalent of the plain text data s.
func HTMLEscapeString(s string) string {
// Avoid allocation if we can.
if strings.IndexAny(s, `'"&<>`) < 0 {
return s
}
var b bytes.Buffer
HTMLEscape(&b, []byte(s))
return b.String()
}
// HTMLEscaper returns the escaped HTML equivalent of the textual
// representation of its arguments.
func HTMLEscaper(args ...interface{}) string {
return HTMLEscapeString(evalArgs(args))
}
// JavaScript escaping.
var (
jsLowUni = []byte(`\u00`)
hex = []byte("0123456789ABCDEF")
jsBackslash = []byte(`\\`)
jsApos = []byte(`\'`)
jsQuot = []byte(`\"`)
jsLt = []byte(`\x3C`)
jsGt = []byte(`\x3E`)
)
// JSEscape writes to w the escaped JavaScript equivalent of the plain text data b.
func JSEscape(w io.Writer, b []byte) {
last := 0
for i := 0; i < len(b); i++ {
c := b[i]
if !jsIsSpecial(rune(c)) {
// fast path: nothing to do
continue
}
w.Write(b[last:i])
if c < utf8.RuneSelf {
// Quotes, slashes and angle brackets get quoted.
// Control characters get written as \u00XX.
switch c {
case '\\':
w.Write(jsBackslash)
case '\'':
w.Write(jsApos)
case '"':
w.Write(jsQuot)
case '<':
w.Write(jsLt)
case '>':
w.Write(jsGt)
default:
w.Write(jsLowUni)
t, b := c>>4, c&0x0f
w.Write(hex[t : t+1])
w.Write(hex[b : b+1])
}
} else {
// Unicode rune.
r, size := utf8.DecodeRune(b[i:])
if unicode.IsPrint(r) {
w.Write(b[i : i+size])
} else {
fmt.Fprintf(w, "\\u%04X", r)
}
i += size - 1
}
last = i + 1
}
w.Write(b[last:])
}
// JSEscapeString returns the escaped JavaScript equivalent of the plain text data s.
func JSEscapeString(s string) string {
// Avoid allocation if we can.
if strings.IndexFunc(s, jsIsSpecial) < 0 {
return s
}
var b bytes.Buffer
JSEscape(&b, []byte(s))
return b.String()
}
func jsIsSpecial(r rune) bool {
switch r {
case '\\', '\'', '"', '<', '>':
return true
}
return r < ' ' || utf8.RuneSelf <= r
}
// JSEscaper returns the escaped JavaScript equivalent of the textual
// representation of its arguments.
func JSEscaper(args ...interface{}) string {
return JSEscapeString(evalArgs(args))
}
// URLQueryEscaper returns the escaped value of the textual representation of
// its arguments in a form suitable for embedding in a URL query.
func URLQueryEscaper(args ...interface{}) string {
return url.QueryEscape(evalArgs(args))
}
// evalArgs formats the list of arguments into a string. It is therefore equivalent to
// fmt.Sprint(args...)
// except that each argument is indirected (if a pointer), as required,
// using the same rules as the default string evaluation during template
// execution.
func evalArgs(args []interface{}) string {
ok := false
var s string
// Fast path for simple common case.
if len(args) == 1 {
s, ok = args[0].(string)
}
if !ok {
for i, arg := range args {
a, ok := printableValue(reflect.ValueOf(arg))
if ok {
args[i] = a
} // else let fmt do its thing
}
s = fmt.Sprint(args...)
}
return s
}
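
The exported escapers defined above can also be called directly from Go. A quick illustrative sketch, with the expected output noted in comments:
```go
package main

import (
	"fmt"

	"github.com/alecthomas/template"
)

func main() {
	fmt.Println(template.HTMLEscapeString(`<a href="x">`)) // &lt;a href=&#34;x&#34;&gt;
	fmt.Println(template.JSEscapeString(`'hi' <script>`))  // \'hi\' \x3Cscript\x3E
	fmt.Println(template.URLQueryEscaper("a b&c"))         // a+b%26c
}
```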


@@ -1 +0,0 @@
module github.com/alecthomas/template


@@ -1,108 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Helper functions to make constructing templates easier.
package template
import (
"fmt"
"io/ioutil"
"path/filepath"
)
// Functions and methods to parse templates.
// Must is a helper that wraps a call to a function returning (*Template, error)
// and panics if the error is non-nil. It is intended for use in variable
// initializations such as
// var t = template.Must(template.New("name").Parse("text"))
func Must(t *Template, err error) *Template {
if err != nil {
panic(err)
}
return t
}
// ParseFiles creates a new Template and parses the template definitions from
// the named files. The returned template's name will have the (base) name and
// (parsed) contents of the first file. There must be at least one file.
// If an error occurs, parsing stops and the returned *Template is nil.
func ParseFiles(filenames ...string) (*Template, error) {
return parseFiles(nil, filenames...)
}
// ParseFiles parses the named files and associates the resulting templates with
// t. If an error occurs, parsing stops and the returned template is nil;
// otherwise it is t. There must be at least one file.
func (t *Template) ParseFiles(filenames ...string) (*Template, error) {
return parseFiles(t, filenames...)
}
// parseFiles is the helper for the method and function. If the argument
// template is nil, it is created from the first file.
func parseFiles(t *Template, filenames ...string) (*Template, error) {
if len(filenames) == 0 {
// Not really a problem, but be consistent.
return nil, fmt.Errorf("template: no files named in call to ParseFiles")
}
for _, filename := range filenames {
b, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
s := string(b)
name := filepath.Base(filename)
// First template becomes return value if not already defined,
// and we use that one for subsequent New calls to associate
// all the templates together. Also, if this file has the same name
// as t, this file becomes the contents of t, so
// t, err := New(name).Funcs(xxx).ParseFiles(name)
// works. Otherwise we create a new template associated with t.
var tmpl *Template
if t == nil {
t = New(name)
}
if name == t.Name() {
tmpl = t
} else {
tmpl = t.New(name)
}
_, err = tmpl.Parse(s)
if err != nil {
return nil, err
}
}
return t, nil
}
// ParseGlob creates a new Template and parses the template definitions from the
// files identified by the pattern, which must match at least one file. The
// returned template will have the (base) name and (parsed) contents of the
// first file matched by the pattern. ParseGlob is equivalent to calling
// ParseFiles with the list of files matched by the pattern.
func ParseGlob(pattern string) (*Template, error) {
return parseGlob(nil, pattern)
}
// ParseGlob parses the template definitions in the files identified by the
// pattern and associates the resulting templates with t. The pattern is
// processed by filepath.Glob and must match at least one file. ParseGlob is
// equivalent to calling t.ParseFiles with the list of files matched by the
// pattern.
func (t *Template) ParseGlob(pattern string) (*Template, error) {
return parseGlob(t, pattern)
}
// parseGlob is the implementation of the function and method ParseGlob.
func parseGlob(t *Template, pattern string) (*Template, error) {
filenames, err := filepath.Glob(pattern)
if err != nil {
return nil, err
}
if len(filenames) == 0 {
return nil, fmt.Errorf("template: pattern matches no files: %#q", pattern)
}
return parseFiles(t, filenames...)
}
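
A hedged sketch of how these helpers are commonly combined: ParseGlob gathers the templates, Must panics on a parse error, and ExecuteTemplate runs one of the parsed files by its base name. The directory and file names here are placeholders:
```go
package main

import (
	"os"

	"github.com/alecthomas/template"
)

func main() {
	// "templates/*.tmpl" and "index.tmpl" are placeholder names; the glob
	// must match at least one file or ParseGlob returns an error and Must panics.
	t := template.Must(template.ParseGlob("templates/*.tmpl"))

	// Each parsed file is associated under its base name.
	data := map[string]string{"Title": "home"} // hypothetical data
	if err := t.ExecuteTemplate(os.Stdout, "index.tmpl", data); err != nil {
		panic(err)
	}
}
```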


@@ -1,556 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package parse
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
// item represents a token or text string returned from the scanner.
type item struct {
typ itemType // The type of this item.
pos Pos // The starting position, in bytes, of this item in the input string.
val string // The value of this item.
}
func (i item) String() string {
switch {
case i.typ == itemEOF:
return "EOF"
case i.typ == itemError:
return i.val
case i.typ > itemKeyword:
return fmt.Sprintf("<%s>", i.val)
case len(i.val) > 10:
return fmt.Sprintf("%.10q...", i.val)
}
return fmt.Sprintf("%q", i.val)
}
// itemType identifies the type of lex items.
type itemType int
const (
itemError itemType = iota // error occurred; value is text of error
itemBool // boolean constant
itemChar // printable ASCII character; grab bag for comma etc.
itemCharConstant // character constant
itemComplex // complex constant (1+2i); imaginary is just a number
itemColonEquals // colon-equals (':=') introducing a declaration
itemEOF
itemField // alphanumeric identifier starting with '.'
itemIdentifier // alphanumeric identifier not starting with '.'
itemLeftDelim // left action delimiter
itemLeftParen // '(' inside action
itemNumber // simple number, including imaginary
itemPipe // pipe symbol
itemRawString // raw quoted string (includes quotes)
itemRightDelim // right action delimiter
itemElideNewline // elide newline after right delim
itemRightParen // ')' inside action
itemSpace // run of spaces separating arguments
itemString // quoted string (includes quotes)
itemText // plain text
itemVariable // variable starting with '$', such as '$' or '$1' or '$hello'
// Keywords appear after all the rest.
itemKeyword // used only to delimit the keywords
itemDot // the cursor, spelled '.'
itemDefine // define keyword
itemElse // else keyword
itemEnd // end keyword
itemIf // if keyword
itemNil // the untyped nil constant, easiest to treat as a keyword
itemRange // range keyword
itemTemplate // template keyword
itemWith // with keyword
)
var key = map[string]itemType{
".": itemDot,
"define": itemDefine,
"else": itemElse,
"end": itemEnd,
"if": itemIf,
"range": itemRange,
"nil": itemNil,
"template": itemTemplate,
"with": itemWith,
}
const eof = -1
// stateFn represents the state of the scanner as a function that returns the next state.
type stateFn func(*lexer) stateFn
// lexer holds the state of the scanner.
type lexer struct {
name string // the name of the input; used only for error reports
input string // the string being scanned
leftDelim string // start of action
rightDelim string // end of action
state stateFn // the next lexing function to enter
pos Pos // current position in the input
start Pos // start position of this item
width Pos // width of last rune read from input
lastPos Pos // position of most recent item returned by nextItem
items chan item // channel of scanned items
parenDepth int // nesting depth of ( ) exprs
}
// next returns the next rune in the input.
func (l *lexer) next() rune {
if int(l.pos) >= len(l.input) {
l.width = 0
return eof
}
r, w := utf8.DecodeRuneInString(l.input[l.pos:])
l.width = Pos(w)
l.pos += l.width
return r
}
// peek returns but does not consume the next rune in the input.
func (l *lexer) peek() rune {
r := l.next()
l.backup()
return r
}
// backup steps back one rune. Can only be called once per call of next.
func (l *lexer) backup() {
l.pos -= l.width
}
// emit passes an item back to the client.
func (l *lexer) emit(t itemType) {
l.items <- item{t, l.start, l.input[l.start:l.pos]}
l.start = l.pos
}
// ignore skips over the pending input before this point.
func (l *lexer) ignore() {
l.start = l.pos
}
// accept consumes the next rune if it's from the valid set.
func (l *lexer) accept(valid string) bool {
if strings.IndexRune(valid, l.next()) >= 0 {
return true
}
l.backup()
return false
}
// acceptRun consumes a run of runes from the valid set.
func (l *lexer) acceptRun(valid string) {
for strings.IndexRune(valid, l.next()) >= 0 {
}
l.backup()
}
// lineNumber reports which line we're on, based on the position of
// the previous item returned by nextItem. Doing it this way
// means we don't have to worry about peek double counting.
func (l *lexer) lineNumber() int {
return 1 + strings.Count(l.input[:l.lastPos], "\n")
}
// errorf returns an error token and terminates the scan by passing
// back a nil pointer that will be the next state, terminating l.nextItem.
func (l *lexer) errorf(format string, args ...interface{}) stateFn {
l.items <- item{itemError, l.start, fmt.Sprintf(format, args...)}
return nil
}
// nextItem returns the next item from the input.
func (l *lexer) nextItem() item {
item := <-l.items
l.lastPos = item.pos
return item
}
// lex creates a new scanner for the input string.
func lex(name, input, left, right string) *lexer {
if left == "" {
left = leftDelim
}
if right == "" {
right = rightDelim
}
l := &lexer{
name: name,
input: input,
leftDelim: left,
rightDelim: right,
items: make(chan item),
}
go l.run()
return l
}
// run runs the state machine for the lexer.
func (l *lexer) run() {
for l.state = lexText; l.state != nil; {
l.state = l.state(l)
}
}
// state functions
const (
leftDelim = "{{"
rightDelim = "}}"
leftComment = "/*"
rightComment = "*/"
)
// lexText scans until an opening action delimiter, "{{".
func lexText(l *lexer) stateFn {
for {
if strings.HasPrefix(l.input[l.pos:], l.leftDelim) {
if l.pos > l.start {
l.emit(itemText)
}
return lexLeftDelim
}
if l.next() == eof {
break
}
}
// Correctly reached EOF.
if l.pos > l.start {
l.emit(itemText)
}
l.emit(itemEOF)
return nil
}
// lexLeftDelim scans the left delimiter, which is known to be present.
func lexLeftDelim(l *lexer) stateFn {
l.pos += Pos(len(l.leftDelim))
if strings.HasPrefix(l.input[l.pos:], leftComment) {
return lexComment
}
l.emit(itemLeftDelim)
l.parenDepth = 0
return lexInsideAction
}
// lexComment scans a comment. The left comment marker is known to be present.
func lexComment(l *lexer) stateFn {
l.pos += Pos(len(leftComment))
i := strings.Index(l.input[l.pos:], rightComment)
if i < 0 {
return l.errorf("unclosed comment")
}
l.pos += Pos(i + len(rightComment))
if !strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
return l.errorf("comment ends before closing delimiter")
}
l.pos += Pos(len(l.rightDelim))
l.ignore()
return lexText
}
// lexRightDelim scans the right delimiter, which is known to be present.
func lexRightDelim(l *lexer) stateFn {
l.pos += Pos(len(l.rightDelim))
l.emit(itemRightDelim)
if l.peek() == '\\' {
l.pos++
l.emit(itemElideNewline)
}
return lexText
}
// lexInsideAction scans the elements inside action delimiters.
func lexInsideAction(l *lexer) stateFn {
// Either number, quoted string, or identifier.
// Spaces separate arguments; runs of spaces turn into itemSpace.
// Pipe symbols separate and are emitted.
if strings.HasPrefix(l.input[l.pos:], l.rightDelim+"\\") || strings.HasPrefix(l.input[l.pos:], l.rightDelim) {
if l.parenDepth == 0 {
return lexRightDelim
}
return l.errorf("unclosed left paren")
}
switch r := l.next(); {
case r == eof || isEndOfLine(r):
return l.errorf("unclosed action")
case isSpace(r):
return lexSpace
case r == ':':
if l.next() != '=' {
return l.errorf("expected :=")
}
l.emit(itemColonEquals)
case r == '|':
l.emit(itemPipe)
case r == '"':
return lexQuote
case r == '`':
return lexRawQuote
case r == '$':
return lexVariable
case r == '\'':
return lexChar
case r == '.':
// special look-ahead for ".field" so we don't break l.backup().
if l.pos < Pos(len(l.input)) {
r := l.input[l.pos]
if r < '0' || '9' < r {
return lexField
}
}
fallthrough // '.' can start a number.
case r == '+' || r == '-' || ('0' <= r && r <= '9'):
l.backup()
return lexNumber
case isAlphaNumeric(r):
l.backup()
return lexIdentifier
case r == '(':
l.emit(itemLeftParen)
l.parenDepth++
return lexInsideAction
case r == ')':
l.emit(itemRightParen)
l.parenDepth--
if l.parenDepth < 0 {
return l.errorf("unexpected right paren %#U", r)
}
return lexInsideAction
case r <= unicode.MaxASCII && unicode.IsPrint(r):
l.emit(itemChar)
return lexInsideAction
default:
return l.errorf("unrecognized character in action: %#U", r)
}
return lexInsideAction
}
// lexSpace scans a run of space characters.
// One space has already been seen.
func lexSpace(l *lexer) stateFn {
for isSpace(l.peek()) {
l.next()
}
l.emit(itemSpace)
return lexInsideAction
}
// lexIdentifier scans an alphanumeric.
func lexIdentifier(l *lexer) stateFn {
Loop:
for {
switch r := l.next(); {
case isAlphaNumeric(r):
// absorb.
default:
l.backup()
word := l.input[l.start:l.pos]
if !l.atTerminator() {
return l.errorf("bad character %#U", r)
}
switch {
case key[word] > itemKeyword:
l.emit(key[word])
case word[0] == '.':
l.emit(itemField)
case word == "true", word == "false":
l.emit(itemBool)
default:
l.emit(itemIdentifier)
}
break Loop
}
}
return lexInsideAction
}
// lexField scans a field: .Alphanumeric.
// The . has been scanned.
func lexField(l *lexer) stateFn {
return lexFieldOrVariable(l, itemField)
}
// lexVariable scans a Variable: $Alphanumeric.
// The $ has been scanned.
func lexVariable(l *lexer) stateFn {
if l.atTerminator() { // Nothing interesting follows -> "$".
l.emit(itemVariable)
return lexInsideAction
}
return lexFieldOrVariable(l, itemVariable)
}
// lexFieldOrVariable scans a field or variable: [.$]Alphanumeric.
// The . or $ has been scanned.
func lexFieldOrVariable(l *lexer, typ itemType) stateFn {
if l.atTerminator() { // Nothing interesting follows -> "." or "$".
if typ == itemVariable {
l.emit(itemVariable)
} else {
l.emit(itemDot)
}
return lexInsideAction
}
var r rune
for {
r = l.next()
if !isAlphaNumeric(r) {
l.backup()
break
}
}
if !l.atTerminator() {
return l.errorf("bad character %#U", r)
}
l.emit(typ)
return lexInsideAction
}
// atTerminator reports whether the input is at a valid termination character to
// appear after an identifier. Breaks .X.Y into two pieces. Also catches cases
// like "$x+2" not being acceptable without a space, in case we decide one
// day to implement arithmetic.
func (l *lexer) atTerminator() bool {
r := l.peek()
if isSpace(r) || isEndOfLine(r) {
return true
}
switch r {
case eof, '.', ',', '|', ':', ')', '(':
return true
}
// Does r start the delimiter? This can be ambiguous (with delim=="//", $x/2 will
// succeed but should fail) but only in extremely rare cases caused by willfully
// bad choice of delimiter.
if rd, _ := utf8.DecodeRuneInString(l.rightDelim); rd == r {
return true
}
return false
}
// lexChar scans a character constant. The initial quote is already
// scanned. Syntax checking is done by the parser.
func lexChar(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case '\\':
if r := l.next(); r != eof && r != '\n' {
break
}
fallthrough
case eof, '\n':
return l.errorf("unterminated character constant")
case '\'':
break Loop
}
}
l.emit(itemCharConstant)
return lexInsideAction
}
// lexNumber scans a number: decimal, octal, hex, float, or imaginary. This
// isn't a perfect number scanner - for instance it accepts "." and "0x0.2"
// and "089" - but when it's wrong the input is invalid and the parser (via
// strconv) will notice.
func lexNumber(l *lexer) stateFn {
if !l.scanNumber() {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
if sign := l.peek(); sign == '+' || sign == '-' {
// Complex: 1+2i. No spaces, must end in 'i'.
if !l.scanNumber() || l.input[l.pos-1] != 'i' {
return l.errorf("bad number syntax: %q", l.input[l.start:l.pos])
}
l.emit(itemComplex)
} else {
l.emit(itemNumber)
}
return lexInsideAction
}
func (l *lexer) scanNumber() bool {
// Optional leading sign.
l.accept("+-")
// Is it hex?
digits := "0123456789"
if l.accept("0") && l.accept("xX") {
digits = "0123456789abcdefABCDEF"
}
l.acceptRun(digits)
if l.accept(".") {
l.acceptRun(digits)
}
if l.accept("eE") {
l.accept("+-")
l.acceptRun("0123456789")
}
// Is it imaginary?
l.accept("i")
// Next thing mustn't be alphanumeric.
if isAlphaNumeric(l.peek()) {
l.next()
return false
}
return true
}
// lexQuote scans a quoted string.
func lexQuote(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case '\\':
if r := l.next(); r != eof && r != '\n' {
break
}
fallthrough
case eof, '\n':
return l.errorf("unterminated quoted string")
case '"':
break Loop
}
}
l.emit(itemString)
return lexInsideAction
}
// lexRawQuote scans a raw quoted string.
func lexRawQuote(l *lexer) stateFn {
Loop:
for {
switch l.next() {
case eof, '\n':
return l.errorf("unterminated raw quoted string")
case '`':
break Loop
}
}
l.emit(itemRawString)
return lexInsideAction
}
// isSpace reports whether r is a space character.
func isSpace(r rune) bool {
return r == ' ' || r == '\t'
}
// isEndOfLine reports whether r is an end-of-line character.
func isEndOfLine(r rune) bool {
return r == '\r' || r == '\n'
}
// isAlphaNumeric reports whether r is an alphabetic, digit, or underscore.
func isAlphaNumeric(r rune) bool {
return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
}
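
The stateFn/run machinery above is the classic state-function lexer pattern. The following toy scanner is an illustration only, not this package's API: each state does some work and returns the next state, and the driving loop stops when a state returns nil.

```go
package main

import (
	"fmt"
	"unicode"
)

type toyScanner struct {
	input string
	pos   int
	words []string
}

// toyState mirrors stateFn: a state returns the next state, or nil at EOF.
type toyState func(*toyScanner) toyState

func scanWord(s *toyScanner) toyState {
	start := s.pos
	for s.pos < len(s.input) && !unicode.IsSpace(rune(s.input[s.pos])) {
		s.pos++
	}
	if s.pos > start {
		s.words = append(s.words, s.input[start:s.pos])
	}
	if s.pos >= len(s.input) {
		return nil // end of input terminates the machine
	}
	return scanSpace
}

func scanSpace(s *toyScanner) toyState {
	for s.pos < len(s.input) && unicode.IsSpace(rune(s.input[s.pos])) {
		s.pos++
	}
	return scanWord
}

func main() {
	s := &toyScanner{input: "run the state machine"}
	for state := toyState(scanWord); state != nil; {
		state = state(s)
	}
	fmt.Println(s.words) // [run the state machine]
}
```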


@@ -1,834 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Parse nodes.
package parse
import (
"bytes"
"fmt"
"strconv"
"strings"
)
var textFormat = "%s" // Changed to "%q" in tests for better error messages.
// A Node is an element in the parse tree. The interface is trivial.
// The interface contains an unexported method so that only
// types local to this package can satisfy it.
type Node interface {
Type() NodeType
String() string
// Copy does a deep copy of the Node and all its components.
// To avoid type assertions, some XxxNodes also have specialized
// CopyXxx methods that return *XxxNode.
Copy() Node
Position() Pos // byte position of start of node in full original input string
// tree returns the containing *Tree.
// It is unexported so all implementations of Node are in this package.
tree() *Tree
}
// NodeType identifies the type of a parse tree node.
type NodeType int
// Pos represents a byte position in the original input text from which
// this template was parsed.
type Pos int
func (p Pos) Position() Pos {
return p
}
// Type returns itself and provides an easy default implementation
// for embedding in a Node. Embedded in all non-trivial Nodes.
func (t NodeType) Type() NodeType {
return t
}
const (
NodeText NodeType = iota // Plain text.
NodeAction // A non-control action such as a field evaluation.
NodeBool // A boolean constant.
NodeChain // A sequence of field accesses.
NodeCommand // An element of a pipeline.
NodeDot // The cursor, dot.
nodeElse // An else action. Not added to tree.
nodeEnd // An end action. Not added to tree.
NodeField // A field or method name.
NodeIdentifier // An identifier; always a function name.
NodeIf // An if action.
NodeList // A list of Nodes.
NodeNil // An untyped nil constant.
NodeNumber // A numerical constant.
NodePipe // A pipeline of commands.
NodeRange // A range action.
NodeString // A string constant.
NodeTemplate // A template invocation action.
NodeVariable // A $ variable.
NodeWith // A with action.
)
// Nodes.
// ListNode holds a sequence of nodes.
type ListNode struct {
NodeType
Pos
tr *Tree
Nodes []Node // The element nodes in lexical order.
}
func (t *Tree) newList(pos Pos) *ListNode {
return &ListNode{tr: t, NodeType: NodeList, Pos: pos}
}
func (l *ListNode) append(n Node) {
l.Nodes = append(l.Nodes, n)
}
func (l *ListNode) tree() *Tree {
return l.tr
}
func (l *ListNode) String() string {
b := new(bytes.Buffer)
for _, n := range l.Nodes {
fmt.Fprint(b, n)
}
return b.String()
}
func (l *ListNode) CopyList() *ListNode {
if l == nil {
return l
}
n := l.tr.newList(l.Pos)
for _, elem := range l.Nodes {
n.append(elem.Copy())
}
return n
}
func (l *ListNode) Copy() Node {
return l.CopyList()
}
// TextNode holds plain text.
type TextNode struct {
NodeType
Pos
tr *Tree
Text []byte // The text; may span newlines.
}
func (t *Tree) newText(pos Pos, text string) *TextNode {
return &TextNode{tr: t, NodeType: NodeText, Pos: pos, Text: []byte(text)}
}
func (t *TextNode) String() string {
return fmt.Sprintf(textFormat, t.Text)
}
func (t *TextNode) tree() *Tree {
return t.tr
}
func (t *TextNode) Copy() Node {
return &TextNode{tr: t.tr, NodeType: NodeText, Pos: t.Pos, Text: append([]byte{}, t.Text...)}
}
// PipeNode holds a pipeline with optional declaration
type PipeNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Decl []*VariableNode // Variable declarations in lexical order.
Cmds []*CommandNode // The commands in lexical order.
}
func (t *Tree) newPipeline(pos Pos, line int, decl []*VariableNode) *PipeNode {
return &PipeNode{tr: t, NodeType: NodePipe, Pos: pos, Line: line, Decl: decl}
}
func (p *PipeNode) append(command *CommandNode) {
p.Cmds = append(p.Cmds, command)
}
func (p *PipeNode) String() string {
s := ""
if len(p.Decl) > 0 {
for i, v := range p.Decl {
if i > 0 {
s += ", "
}
s += v.String()
}
s += " := "
}
for i, c := range p.Cmds {
if i > 0 {
s += " | "
}
s += c.String()
}
return s
}
func (p *PipeNode) tree() *Tree {
return p.tr
}
func (p *PipeNode) CopyPipe() *PipeNode {
if p == nil {
return p
}
var decl []*VariableNode
for _, d := range p.Decl {
decl = append(decl, d.Copy().(*VariableNode))
}
n := p.tr.newPipeline(p.Pos, p.Line, decl)
for _, c := range p.Cmds {
n.append(c.Copy().(*CommandNode))
}
return n
}
func (p *PipeNode) Copy() Node {
return p.CopyPipe()
}
// ActionNode holds an action (something bounded by delimiters).
// Control actions have their own nodes; ActionNode represents simple
// ones such as field evaluations and parenthesized pipelines.
type ActionNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Pipe *PipeNode // The pipeline in the action.
}
func (t *Tree) newAction(pos Pos, line int, pipe *PipeNode) *ActionNode {
return &ActionNode{tr: t, NodeType: NodeAction, Pos: pos, Line: line, Pipe: pipe}
}
func (a *ActionNode) String() string {
return fmt.Sprintf("{{%s}}", a.Pipe)
}
func (a *ActionNode) tree() *Tree {
return a.tr
}
func (a *ActionNode) Copy() Node {
return a.tr.newAction(a.Pos, a.Line, a.Pipe.CopyPipe())
}
// CommandNode holds a command (a pipeline inside an evaluating action).
type CommandNode struct {
NodeType
Pos
tr *Tree
Args []Node // Arguments in lexical order: Identifier, field, or constant.
}
func (t *Tree) newCommand(pos Pos) *CommandNode {
return &CommandNode{tr: t, NodeType: NodeCommand, Pos: pos}
}
func (c *CommandNode) append(arg Node) {
c.Args = append(c.Args, arg)
}
func (c *CommandNode) String() string {
s := ""
for i, arg := range c.Args {
if i > 0 {
s += " "
}
if arg, ok := arg.(*PipeNode); ok {
s += "(" + arg.String() + ")"
continue
}
s += arg.String()
}
return s
}
func (c *CommandNode) tree() *Tree {
return c.tr
}
func (c *CommandNode) Copy() Node {
if c == nil {
return c
}
n := c.tr.newCommand(c.Pos)
for _, c := range c.Args {
n.append(c.Copy())
}
return n
}
// IdentifierNode holds an identifier.
type IdentifierNode struct {
NodeType
Pos
tr *Tree
Ident string // The identifier's name.
}
// NewIdentifier returns a new IdentifierNode with the given identifier name.
func NewIdentifier(ident string) *IdentifierNode {
return &IdentifierNode{NodeType: NodeIdentifier, Ident: ident}
}
// SetPos sets the position. NewIdentifier is a public method so we can't modify its signature.
// Chained for convenience.
// TODO: fix one day?
func (i *IdentifierNode) SetPos(pos Pos) *IdentifierNode {
i.Pos = pos
return i
}
// SetTree sets the parent tree for the node. NewIdentifier is a public method so we can't modify its signature.
// Chained for convenience.
// TODO: fix one day?
func (i *IdentifierNode) SetTree(t *Tree) *IdentifierNode {
i.tr = t
return i
}
func (i *IdentifierNode) String() string {
return i.Ident
}
func (i *IdentifierNode) tree() *Tree {
return i.tr
}
func (i *IdentifierNode) Copy() Node {
return NewIdentifier(i.Ident).SetTree(i.tr).SetPos(i.Pos)
}
// VariableNode holds a list of variable names, possibly with chained field
// accesses. The dollar sign is part of the (first) name.
type VariableNode struct {
NodeType
Pos
tr *Tree
Ident []string // Variable name and fields in lexical order.
}
func (t *Tree) newVariable(pos Pos, ident string) *VariableNode {
return &VariableNode{tr: t, NodeType: NodeVariable, Pos: pos, Ident: strings.Split(ident, ".")}
}
func (v *VariableNode) String() string {
s := ""
for i, id := range v.Ident {
if i > 0 {
s += "."
}
s += id
}
return s
}
func (v *VariableNode) tree() *Tree {
return v.tr
}
func (v *VariableNode) Copy() Node {
return &VariableNode{tr: v.tr, NodeType: NodeVariable, Pos: v.Pos, Ident: append([]string{}, v.Ident...)}
}
// DotNode holds the special identifier '.'.
type DotNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newDot(pos Pos) *DotNode {
return &DotNode{tr: t, NodeType: NodeDot, Pos: pos}
}
func (d *DotNode) Type() NodeType {
// Override method on embedded NodeType for API compatibility.
// TODO: Not really a problem; could change API without effect but
// api tool complains.
return NodeDot
}
func (d *DotNode) String() string {
return "."
}
func (d *DotNode) tree() *Tree {
return d.tr
}
func (d *DotNode) Copy() Node {
return d.tr.newDot(d.Pos)
}
// NilNode holds the special identifier 'nil' representing an untyped nil constant.
type NilNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newNil(pos Pos) *NilNode {
return &NilNode{tr: t, NodeType: NodeNil, Pos: pos}
}
func (n *NilNode) Type() NodeType {
// Override method on embedded NodeType for API compatibility.
// TODO: Not really a problem; could change API without effect but
// api tool complains.
return NodeNil
}
func (n *NilNode) String() string {
return "nil"
}
func (n *NilNode) tree() *Tree {
return n.tr
}
func (n *NilNode) Copy() Node {
return n.tr.newNil(n.Pos)
}
// FieldNode holds a field (identifier starting with '.').
// The names may be chained ('.x.y').
// The period is dropped from each ident.
type FieldNode struct {
NodeType
Pos
tr *Tree
Ident []string // The identifiers in lexical order.
}
func (t *Tree) newField(pos Pos, ident string) *FieldNode {
return &FieldNode{tr: t, NodeType: NodeField, Pos: pos, Ident: strings.Split(ident[1:], ".")} // [1:] to drop leading period
}
func (f *FieldNode) String() string {
s := ""
for _, id := range f.Ident {
s += "." + id
}
return s
}
func (f *FieldNode) tree() *Tree {
return f.tr
}
func (f *FieldNode) Copy() Node {
return &FieldNode{tr: f.tr, NodeType: NodeField, Pos: f.Pos, Ident: append([]string{}, f.Ident...)}
}
// ChainNode holds a term followed by a chain of field accesses (identifier starting with '.').
// The names may be chained ('.x.y').
// The periods are dropped from each ident.
type ChainNode struct {
NodeType
Pos
tr *Tree
Node Node
Field []string // The identifiers in lexical order.
}
func (t *Tree) newChain(pos Pos, node Node) *ChainNode {
return &ChainNode{tr: t, NodeType: NodeChain, Pos: pos, Node: node}
}
// Add adds the named field (which should start with a period) to the end of the chain.
func (c *ChainNode) Add(field string) {
if len(field) == 0 || field[0] != '.' {
panic("no dot in field")
}
field = field[1:] // Remove leading dot.
if field == "" {
panic("empty field")
}
c.Field = append(c.Field, field)
}
func (c *ChainNode) String() string {
s := c.Node.String()
if _, ok := c.Node.(*PipeNode); ok {
s = "(" + s + ")"
}
for _, field := range c.Field {
s += "." + field
}
return s
}
func (c *ChainNode) tree() *Tree {
return c.tr
}
func (c *ChainNode) Copy() Node {
return &ChainNode{tr: c.tr, NodeType: NodeChain, Pos: c.Pos, Node: c.Node, Field: append([]string{}, c.Field...)}
}
// BoolNode holds a boolean constant.
type BoolNode struct {
NodeType
Pos
tr *Tree
True bool // The value of the boolean constant.
}
func (t *Tree) newBool(pos Pos, true bool) *BoolNode {
return &BoolNode{tr: t, NodeType: NodeBool, Pos: pos, True: true}
}
func (b *BoolNode) String() string {
if b.True {
return "true"
}
return "false"
}
func (b *BoolNode) tree() *Tree {
return b.tr
}
func (b *BoolNode) Copy() Node {
return b.tr.newBool(b.Pos, b.True)
}
// NumberNode holds a number: signed or unsigned integer, float, or complex.
// The value is parsed and stored under all the types that can represent the value.
// This simulates in a small amount of code the behavior of Go's ideal constants.
type NumberNode struct {
NodeType
Pos
tr *Tree
IsInt bool // Number has an integral value.
IsUint bool // Number has an unsigned integral value.
IsFloat bool // Number has a floating-point value.
IsComplex bool // Number is complex.
Int64 int64 // The signed integer value.
Uint64 uint64 // The unsigned integer value.
Float64 float64 // The floating-point value.
Complex128 complex128 // The complex value.
Text string // The original textual representation from the input.
}
func (t *Tree) newNumber(pos Pos, text string, typ itemType) (*NumberNode, error) {
n := &NumberNode{tr: t, NodeType: NodeNumber, Pos: pos, Text: text}
switch typ {
case itemCharConstant:
rune, _, tail, err := strconv.UnquoteChar(text[1:], text[0])
if err != nil {
return nil, err
}
if tail != "'" {
return nil, fmt.Errorf("malformed character constant: %s", text)
}
n.Int64 = int64(rune)
n.IsInt = true
n.Uint64 = uint64(rune)
n.IsUint = true
n.Float64 = float64(rune) // odd but those are the rules.
n.IsFloat = true
return n, nil
case itemComplex:
// fmt.Sscan can parse the pair, so let it do the work.
if _, err := fmt.Sscan(text, &n.Complex128); err != nil {
return nil, err
}
n.IsComplex = true
n.simplifyComplex()
return n, nil
}
// Imaginary constants can only be complex unless they are zero.
if len(text) > 0 && text[len(text)-1] == 'i' {
f, err := strconv.ParseFloat(text[:len(text)-1], 64)
if err == nil {
n.IsComplex = true
n.Complex128 = complex(0, f)
n.simplifyComplex()
return n, nil
}
}
// Do integer test first so we get 0x123 etc.
u, err := strconv.ParseUint(text, 0, 64) // will fail for -0; fixed below.
if err == nil {
n.IsUint = true
n.Uint64 = u
}
i, err := strconv.ParseInt(text, 0, 64)
if err == nil {
n.IsInt = true
n.Int64 = i
if i == 0 {
n.IsUint = true // in case of -0.
n.Uint64 = u
}
}
// If an integer extraction succeeded, promote the float.
if n.IsInt {
n.IsFloat = true
n.Float64 = float64(n.Int64)
} else if n.IsUint {
n.IsFloat = true
n.Float64 = float64(n.Uint64)
} else {
f, err := strconv.ParseFloat(text, 64)
if err == nil {
n.IsFloat = true
n.Float64 = f
// If a floating-point extraction succeeded, extract the int if needed.
if !n.IsInt && float64(int64(f)) == f {
n.IsInt = true
n.Int64 = int64(f)
}
if !n.IsUint && float64(uint64(f)) == f {
n.IsUint = true
n.Uint64 = uint64(f)
}
}
}
if !n.IsInt && !n.IsUint && !n.IsFloat {
return nil, fmt.Errorf("illegal number syntax: %q", text)
}
return n, nil
}
// simplifyComplex pulls out any other types that are represented by the complex number.
// These all require that the imaginary part be zero.
func (n *NumberNode) simplifyComplex() {
n.IsFloat = imag(n.Complex128) == 0
if n.IsFloat {
n.Float64 = real(n.Complex128)
n.IsInt = float64(int64(n.Float64)) == n.Float64
if n.IsInt {
n.Int64 = int64(n.Float64)
}
n.IsUint = float64(uint64(n.Float64)) == n.Float64
if n.IsUint {
n.Uint64 = uint64(n.Float64)
}
}
}
func (n *NumberNode) String() string {
return n.Text
}
func (n *NumberNode) tree() *Tree {
return n.tr
}
func (n *NumberNode) Copy() Node {
nn := new(NumberNode)
*nn = *n // Easy, fast, correct.
return nn
}
// StringNode holds a string constant. The value has been "unquoted".
type StringNode struct {
NodeType
Pos
tr *Tree
Quoted string // The original text of the string, with quotes.
Text string // The string, after quote processing.
}
func (t *Tree) newString(pos Pos, orig, text string) *StringNode {
return &StringNode{tr: t, NodeType: NodeString, Pos: pos, Quoted: orig, Text: text}
}
func (s *StringNode) String() string {
return s.Quoted
}
func (s *StringNode) tree() *Tree {
return s.tr
}
func (s *StringNode) Copy() Node {
return s.tr.newString(s.Pos, s.Quoted, s.Text)
}
// endNode represents an {{end}} action.
// It does not appear in the final parse tree.
type endNode struct {
NodeType
Pos
tr *Tree
}
func (t *Tree) newEnd(pos Pos) *endNode {
return &endNode{tr: t, NodeType: nodeEnd, Pos: pos}
}
func (e *endNode) String() string {
return "{{end}}"
}
func (e *endNode) tree() *Tree {
return e.tr
}
func (e *endNode) Copy() Node {
return e.tr.newEnd(e.Pos)
}
// elseNode represents an {{else}} action. Does not appear in the final tree.
type elseNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
}
func (t *Tree) newElse(pos Pos, line int) *elseNode {
return &elseNode{tr: t, NodeType: nodeElse, Pos: pos, Line: line}
}
func (e *elseNode) Type() NodeType {
return nodeElse
}
func (e *elseNode) String() string {
return "{{else}}"
}
func (e *elseNode) tree() *Tree {
return e.tr
}
func (e *elseNode) Copy() Node {
return e.tr.newElse(e.Pos, e.Line)
}
// BranchNode is the common representation of if, range, and with.
type BranchNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Pipe *PipeNode // The pipeline to be evaluated.
List *ListNode // What to execute if the value is non-empty.
ElseList *ListNode // What to execute if the value is empty (nil if absent).
}
func (b *BranchNode) String() string {
name := ""
switch b.NodeType {
case NodeIf:
name = "if"
case NodeRange:
name = "range"
case NodeWith:
name = "with"
default:
panic("unknown branch type")
}
if b.ElseList != nil {
return fmt.Sprintf("{{%s %s}}%s{{else}}%s{{end}}", name, b.Pipe, b.List, b.ElseList)
}
return fmt.Sprintf("{{%s %s}}%s{{end}}", name, b.Pipe, b.List)
}
func (b *BranchNode) tree() *Tree {
return b.tr
}
func (b *BranchNode) Copy() Node {
switch b.NodeType {
case NodeIf:
return b.tr.newIf(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
case NodeRange:
return b.tr.newRange(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
case NodeWith:
return b.tr.newWith(b.Pos, b.Line, b.Pipe, b.List, b.ElseList)
default:
panic("unknown branch type")
}
}
// IfNode represents an {{if}} action and its commands.
type IfNode struct {
BranchNode
}
func (t *Tree) newIf(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *IfNode {
return &IfNode{BranchNode{tr: t, NodeType: NodeIf, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (i *IfNode) Copy() Node {
return i.tr.newIf(i.Pos, i.Line, i.Pipe.CopyPipe(), i.List.CopyList(), i.ElseList.CopyList())
}
// RangeNode represents a {{range}} action and its commands.
type RangeNode struct {
BranchNode
}
func (t *Tree) newRange(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *RangeNode {
return &RangeNode{BranchNode{tr: t, NodeType: NodeRange, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (r *RangeNode) Copy() Node {
return r.tr.newRange(r.Pos, r.Line, r.Pipe.CopyPipe(), r.List.CopyList(), r.ElseList.CopyList())
}
// WithNode represents a {{with}} action and its commands.
type WithNode struct {
BranchNode
}
func (t *Tree) newWith(pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) *WithNode {
return &WithNode{BranchNode{tr: t, NodeType: NodeWith, Pos: pos, Line: line, Pipe: pipe, List: list, ElseList: elseList}}
}
func (w *WithNode) Copy() Node {
return w.tr.newWith(w.Pos, w.Line, w.Pipe.CopyPipe(), w.List.CopyList(), w.ElseList.CopyList())
}
// TemplateNode represents a {{template}} action.
type TemplateNode struct {
NodeType
Pos
tr *Tree
Line int // The line number in the input (deprecated; kept for compatibility)
Name string // The name of the template (unquoted).
Pipe *PipeNode // The command to evaluate as dot for the template.
}
func (t *Tree) newTemplate(pos Pos, line int, name string, pipe *PipeNode) *TemplateNode {
return &TemplateNode{tr: t, NodeType: NodeTemplate, Pos: pos, Line: line, Name: name, Pipe: pipe}
}
func (t *TemplateNode) String() string {
if t.Pipe == nil {
return fmt.Sprintf("{{template %q}}", t.Name)
}
return fmt.Sprintf("{{template %q %s}}", t.Name, t.Pipe)
}
func (t *TemplateNode) tree() *Tree {
return t.tr
}
func (t *TemplateNode) Copy() Node {
return t.tr.newTemplate(t.Pos, t.Line, t.Name, t.Pipe.CopyPipe())
}
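
To illustrate the NumberNode behaviour described above (a constant is stored under every type that can represent it), here is a small sketch against the standard library's text/template/parse package, which this vendored copy tracks closely; the template text is arbitrary.

```go
package main

import (
	"fmt"
	"text/template/parse"
)

func main() {
	trees, err := parse.Parse("t", "{{23}}", "{{", "}}")
	if err != nil {
		panic(err)
	}
	action := trees["t"].Root.Nodes[0].(*parse.ActionNode)
	num := action.Pipe.Cmds[0].Args[0].(*parse.NumberNode)
	// A small integer constant is simultaneously int, uint, and float.
	fmt.Println(num.IsInt, num.Int64)     // true 23
	fmt.Println(num.IsUint, num.Uint64)   // true 23
	fmt.Println(num.IsFloat, num.Float64) // true 23
}
```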


@@ -1,700 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package parse builds parse trees for templates as defined by text/template
// and html/template. Clients should use those packages to construct templates
// rather than this one, which provides shared internal data structures not
// intended for general use.
package parse
import (
"bytes"
"fmt"
"runtime"
"strconv"
"strings"
)
// Tree is the representation of a single parsed template.
type Tree struct {
Name string // name of the template represented by the tree.
ParseName string // name of the top-level template during parsing, for error messages.
Root *ListNode // top-level root of the tree.
text string // text parsed to create the template (or its parent)
// Parsing only; cleared after parse.
funcs []map[string]interface{}
lex *lexer
token [3]item // three-token lookahead for parser.
peekCount int
vars []string // variables defined at the moment.
}
// Copy returns a copy of the Tree. Any parsing state is discarded.
func (t *Tree) Copy() *Tree {
if t == nil {
return nil
}
return &Tree{
Name: t.Name,
ParseName: t.ParseName,
Root: t.Root.CopyList(),
text: t.text,
}
}
// Parse returns a map from template name to parse.Tree, created by parsing the
// templates described in the argument string. The top-level template will be
// given the specified name. If an error is encountered, parsing stops and an
// empty map is returned with the error.
func Parse(name, text, leftDelim, rightDelim string, funcs ...map[string]interface{}) (treeSet map[string]*Tree, err error) {
treeSet = make(map[string]*Tree)
t := New(name)
t.text = text
_, err = t.Parse(text, leftDelim, rightDelim, treeSet, funcs...)
return
}
// next returns the next token.
func (t *Tree) next() item {
if t.peekCount > 0 {
t.peekCount--
} else {
t.token[0] = t.lex.nextItem()
}
return t.token[t.peekCount]
}
// backup backs the input stream up one token.
func (t *Tree) backup() {
t.peekCount++
}
// backup2 backs the input stream up two tokens.
// The zeroth token is already there.
func (t *Tree) backup2(t1 item) {
t.token[1] = t1
t.peekCount = 2
}
// backup3 backs the input stream up three tokens
// The zeroth token is already there.
func (t *Tree) backup3(t2, t1 item) { // Reverse order: we're pushing back.
t.token[1] = t1
t.token[2] = t2
t.peekCount = 3
}
// peek returns but does not consume the next token.
func (t *Tree) peek() item {
if t.peekCount > 0 {
return t.token[t.peekCount-1]
}
t.peekCount = 1
t.token[0] = t.lex.nextItem()
return t.token[0]
}
// nextNonSpace returns the next non-space token.
func (t *Tree) nextNonSpace() (token item) {
for {
token = t.next()
if token.typ != itemSpace {
break
}
}
return token
}
// peekNonSpace returns but does not consume the next non-space token.
func (t *Tree) peekNonSpace() (token item) {
for {
token = t.next()
if token.typ != itemSpace {
break
}
}
t.backup()
return token
}
// Parsing.
// New allocates a new parse tree with the given name.
func New(name string, funcs ...map[string]interface{}) *Tree {
return &Tree{
Name: name,
funcs: funcs,
}
}
// ErrorContext returns a textual representation of the location of the node in the input text.
// The receiver is only used when the node does not have a pointer to the tree inside,
// which can occur in old code.
func (t *Tree) ErrorContext(n Node) (location, context string) {
pos := int(n.Position())
tree := n.tree()
if tree == nil {
tree = t
}
text := tree.text[:pos]
byteNum := strings.LastIndex(text, "\n")
if byteNum == -1 {
byteNum = pos // On first line.
} else {
byteNum++ // After the newline.
byteNum = pos - byteNum
}
lineNum := 1 + strings.Count(text, "\n")
context = n.String()
if len(context) > 20 {
context = fmt.Sprintf("%.20s...", context)
}
return fmt.Sprintf("%s:%d:%d", tree.ParseName, lineNum, byteNum), context
}
// errorf formats the error and terminates processing.
func (t *Tree) errorf(format string, args ...interface{}) {
t.Root = nil
format = fmt.Sprintf("template: %s:%d: %s", t.ParseName, t.lex.lineNumber(), format)
panic(fmt.Errorf(format, args...))
}
// error terminates processing.
func (t *Tree) error(err error) {
t.errorf("%s", err)
}
// expect consumes the next token and guarantees it has the required type.
func (t *Tree) expect(expected itemType, context string) item {
token := t.nextNonSpace()
if token.typ != expected {
t.unexpected(token, context)
}
return token
}
// expectOneOf consumes the next token and guarantees it has one of the required types.
func (t *Tree) expectOneOf(expected1, expected2 itemType, context string) item {
token := t.nextNonSpace()
if token.typ != expected1 && token.typ != expected2 {
t.unexpected(token, context)
}
return token
}
// unexpected complains about the token and terminates processing.
func (t *Tree) unexpected(token item, context string) {
t.errorf("unexpected %s in %s", token, context)
}
// recover is the handler that turns panics into returns from the top level of Parse.
func (t *Tree) recover(errp *error) {
e := recover()
if e != nil {
if _, ok := e.(runtime.Error); ok {
panic(e)
}
if t != nil {
t.stopParse()
}
*errp = e.(error)
}
return
}
// startParse initializes the parser, using the lexer.
func (t *Tree) startParse(funcs []map[string]interface{}, lex *lexer) {
t.Root = nil
t.lex = lex
t.vars = []string{"$"}
t.funcs = funcs
}
// stopParse terminates parsing.
func (t *Tree) stopParse() {
t.lex = nil
t.vars = nil
t.funcs = nil
}
// Parse parses the template definition string to construct a representation of
// the template for execution. If either action delimiter string is empty, the
// default ("{{" or "}}") is used. Embedded template definitions are added to
// the treeSet map.
func (t *Tree) Parse(text, leftDelim, rightDelim string, treeSet map[string]*Tree, funcs ...map[string]interface{}) (tree *Tree, err error) {
defer t.recover(&err)
t.ParseName = t.Name
t.startParse(funcs, lex(t.Name, text, leftDelim, rightDelim))
t.text = text
t.parse(treeSet)
t.add(treeSet)
t.stopParse()
return t, nil
}
// add adds tree to the treeSet.
func (t *Tree) add(treeSet map[string]*Tree) {
tree := treeSet[t.Name]
if tree == nil || IsEmptyTree(tree.Root) {
treeSet[t.Name] = t
return
}
if !IsEmptyTree(t.Root) {
t.errorf("template: multiple definition of template %q", t.Name)
}
}
// IsEmptyTree reports whether this tree (node) is empty of everything but space.
func IsEmptyTree(n Node) bool {
switch n := n.(type) {
case nil:
return true
case *ActionNode:
case *IfNode:
case *ListNode:
for _, node := range n.Nodes {
if !IsEmptyTree(node) {
return false
}
}
return true
case *RangeNode:
case *TemplateNode:
case *TextNode:
return len(bytes.TrimSpace(n.Text)) == 0
case *WithNode:
default:
panic("unknown node: " + n.String())
}
return false
}
// parse is the top-level parser for a template, essentially the same
// as itemList except it also parses {{define}} actions.
// It runs to EOF.
func (t *Tree) parse(treeSet map[string]*Tree) (next Node) {
t.Root = t.newList(t.peek().pos)
for t.peek().typ != itemEOF {
if t.peek().typ == itemLeftDelim {
delim := t.next()
if t.nextNonSpace().typ == itemDefine {
newT := New("definition") // name will be updated once we know it.
newT.text = t.text
newT.ParseName = t.ParseName
newT.startParse(t.funcs, t.lex)
newT.parseDefinition(treeSet)
continue
}
t.backup2(delim)
}
n := t.textOrAction()
if n.Type() == nodeEnd {
t.errorf("unexpected %s", n)
}
t.Root.append(n)
}
return nil
}
// parseDefinition parses a {{define}} ... {{end}} template definition and
// installs the definition in the treeSet map. The "define" keyword has already
// been scanned.
func (t *Tree) parseDefinition(treeSet map[string]*Tree) {
const context = "define clause"
name := t.expectOneOf(itemString, itemRawString, context)
var err error
t.Name, err = strconv.Unquote(name.val)
if err != nil {
t.error(err)
}
t.expect(itemRightDelim, context)
var end Node
t.Root, end = t.itemList()
if end.Type() != nodeEnd {
t.errorf("unexpected %s in %s", end, context)
}
t.add(treeSet)
t.stopParse()
}
// itemList:
// textOrAction*
// Terminates at {{end}} or {{else}}, returned separately.
func (t *Tree) itemList() (list *ListNode, next Node) {
list = t.newList(t.peekNonSpace().pos)
for t.peekNonSpace().typ != itemEOF {
n := t.textOrAction()
switch n.Type() {
case nodeEnd, nodeElse:
return list, n
}
list.append(n)
}
t.errorf("unexpected EOF")
return
}
// textOrAction:
// text | action
func (t *Tree) textOrAction() Node {
switch token := t.nextNonSpace(); token.typ {
case itemElideNewline:
return t.elideNewline()
case itemText:
return t.newText(token.pos, token.val)
case itemLeftDelim:
return t.action()
default:
t.unexpected(token, "input")
}
return nil
}
// elideNewline:
// Remove newlines trailing rightDelim if \\ is present.
func (t *Tree) elideNewline() Node {
token := t.peek()
if token.typ != itemText {
t.unexpected(token, "input")
return nil
}
t.next()
stripped := strings.TrimLeft(token.val, "\n\r")
diff := len(token.val) - len(stripped)
if diff > 0 {
// This is a bit nasty. We mutate the token in-place to remove
// preceding newlines.
token.pos += Pos(diff)
token.val = stripped
}
return t.newText(token.pos, token.val)
}
// Action:
// control
// command ("|" command)*
// Left delim is past. Now get actions.
// First word could be a keyword such as range.
func (t *Tree) action() (n Node) {
switch token := t.nextNonSpace(); token.typ {
case itemElse:
return t.elseControl()
case itemEnd:
return t.endControl()
case itemIf:
return t.ifControl()
case itemRange:
return t.rangeControl()
case itemTemplate:
return t.templateControl()
case itemWith:
return t.withControl()
}
t.backup()
// Do not pop variables; they persist until "end".
return t.newAction(t.peek().pos, t.lex.lineNumber(), t.pipeline("command"))
}
// Pipeline:
// declarations? command ('|' command)*
func (t *Tree) pipeline(context string) (pipe *PipeNode) {
var decl []*VariableNode
pos := t.peekNonSpace().pos
// Are there declarations?
for {
if v := t.peekNonSpace(); v.typ == itemVariable {
t.next()
// Since space is a token, we need 3-token look-ahead here in the worst case:
// in "$x foo" we need to read "foo" (as opposed to ":=") to know that $x is an
// argument variable rather than a declaration. So remember the token
// adjacent to the variable so we can push it back if necessary.
tokenAfterVariable := t.peek()
if next := t.peekNonSpace(); next.typ == itemColonEquals || (next.typ == itemChar && next.val == ",") {
t.nextNonSpace()
variable := t.newVariable(v.pos, v.val)
decl = append(decl, variable)
t.vars = append(t.vars, v.val)
if next.typ == itemChar && next.val == "," {
if context == "range" && len(decl) < 2 {
continue
}
t.errorf("too many declarations in %s", context)
}
} else if tokenAfterVariable.typ == itemSpace {
t.backup3(v, tokenAfterVariable)
} else {
t.backup2(v)
}
}
break
}
pipe = t.newPipeline(pos, t.lex.lineNumber(), decl)
for {
switch token := t.nextNonSpace(); token.typ {
case itemRightDelim, itemRightParen:
if len(pipe.Cmds) == 0 {
t.errorf("missing value for %s", context)
}
if token.typ == itemRightParen {
t.backup()
}
return
case itemBool, itemCharConstant, itemComplex, itemDot, itemField, itemIdentifier,
itemNumber, itemNil, itemRawString, itemString, itemVariable, itemLeftParen:
t.backup()
pipe.append(t.command())
default:
t.unexpected(token, context)
}
}
}
func (t *Tree) parseControl(allowElseIf bool, context string) (pos Pos, line int, pipe *PipeNode, list, elseList *ListNode) {
defer t.popVars(len(t.vars))
line = t.lex.lineNumber()
pipe = t.pipeline(context)
var next Node
list, next = t.itemList()
switch next.Type() {
case nodeEnd: //done
case nodeElse:
if allowElseIf {
// Special case for "else if". If the "else" is followed immediately by an "if",
// the elseControl will have left the "if" token pending. Treat
// {{if a}}_{{else if b}}_{{end}}
// as
// {{if a}}_{{else}}{{if b}}_{{end}}{{end}}.
// To do this, parse the if as usual and stop at its {{end}}; the subsequent {{end}}
// is assumed. This technique works even for long if-else-if chains.
// TODO: Should we allow else-if in with and range?
if t.peek().typ == itemIf {
t.next() // Consume the "if" token.
elseList = t.newList(next.Position())
elseList.append(t.ifControl())
// Do not consume the next item - only one {{end}} required.
break
}
}
elseList, next = t.itemList()
if next.Type() != nodeEnd {
t.errorf("expected end; found %s", next)
}
}
return pipe.Position(), line, pipe, list, elseList
}
// If:
// {{if pipeline}} itemList {{end}}
// {{if pipeline}} itemList {{else}} itemList {{end}}
// If keyword is past.
func (t *Tree) ifControl() Node {
return t.newIf(t.parseControl(true, "if"))
}
// Range:
// {{range pipeline}} itemList {{end}}
// {{range pipeline}} itemList {{else}} itemList {{end}}
// Range keyword is past.
func (t *Tree) rangeControl() Node {
return t.newRange(t.parseControl(false, "range"))
}
// With:
// {{with pipeline}} itemList {{end}}
// {{with pipeline}} itemList {{else}} itemList {{end}}
// With keyword is past.
func (t *Tree) withControl() Node {
return t.newWith(t.parseControl(false, "with"))
}
// End:
// {{end}}
// End keyword is past.
func (t *Tree) endControl() Node {
return t.newEnd(t.expect(itemRightDelim, "end").pos)
}
// Else:
// {{else}}
// Else keyword is past.
func (t *Tree) elseControl() Node {
// Special case for "else if".
peek := t.peekNonSpace()
if peek.typ == itemIf {
// We see "{{else if ... " but in effect rewrite it to {{else}}{{if ... ".
return t.newElse(peek.pos, t.lex.lineNumber())
}
return t.newElse(t.expect(itemRightDelim, "else").pos, t.lex.lineNumber())
}
// Template:
// {{template stringValue pipeline}}
// Template keyword is past. The name must be something that can evaluate
// to a string.
func (t *Tree) templateControl() Node {
var name string
token := t.nextNonSpace()
switch token.typ {
case itemString, itemRawString:
s, err := strconv.Unquote(token.val)
if err != nil {
t.error(err)
}
name = s
default:
t.unexpected(token, "template invocation")
}
var pipe *PipeNode
if t.nextNonSpace().typ != itemRightDelim {
t.backup()
// Do not pop variables; they persist until "end".
pipe = t.pipeline("template")
}
return t.newTemplate(token.pos, t.lex.lineNumber(), name, pipe)
}
// command:
// operand (space operand)*
// Space-separated arguments up to a pipeline character or right delimiter.
// We consume the pipe character but leave the right delim to terminate the action.
func (t *Tree) command() *CommandNode {
cmd := t.newCommand(t.peekNonSpace().pos)
for {
t.peekNonSpace() // skip leading spaces.
operand := t.operand()
if operand != nil {
cmd.append(operand)
}
switch token := t.next(); token.typ {
case itemSpace:
continue
case itemError:
t.errorf("%s", token.val)
case itemRightDelim, itemRightParen:
t.backup()
case itemPipe:
default:
t.errorf("unexpected %s in operand; missing space?", token)
}
break
}
if len(cmd.Args) == 0 {
t.errorf("empty command")
}
return cmd
}
// operand:
// term .Field*
// An operand is a space-separated component of a command,
// a term possibly followed by field accesses.
// A nil return means the next item is not an operand.
func (t *Tree) operand() Node {
node := t.term()
if node == nil {
return nil
}
if t.peek().typ == itemField {
chain := t.newChain(t.peek().pos, node)
for t.peek().typ == itemField {
chain.Add(t.next().val)
}
// Compatibility with original API: If the term is of type NodeField
// or NodeVariable, just put more fields on the original.
// Otherwise, keep the Chain node.
// TODO: Switch to Chains always when we can.
switch node.Type() {
case NodeField:
node = t.newField(chain.Position(), chain.String())
case NodeVariable:
node = t.newVariable(chain.Position(), chain.String())
default:
node = chain
}
}
return node
}
// term:
// literal (number, string, nil, boolean)
// function (identifier)
// .
// .Field
// $
// '(' pipeline ')'
// A term is a simple "expression".
// A nil return means the next item is not a term.
func (t *Tree) term() Node {
switch token := t.nextNonSpace(); token.typ {
case itemError:
t.errorf("%s", token.val)
case itemIdentifier:
if !t.hasFunction(token.val) {
t.errorf("function %q not defined", token.val)
}
return NewIdentifier(token.val).SetTree(t).SetPos(token.pos)
case itemDot:
return t.newDot(token.pos)
case itemNil:
return t.newNil(token.pos)
case itemVariable:
return t.useVar(token.pos, token.val)
case itemField:
return t.newField(token.pos, token.val)
case itemBool:
return t.newBool(token.pos, token.val == "true")
case itemCharConstant, itemComplex, itemNumber:
number, err := t.newNumber(token.pos, token.val, token.typ)
if err != nil {
t.error(err)
}
return number
case itemLeftParen:
pipe := t.pipeline("parenthesized pipeline")
if token := t.next(); token.typ != itemRightParen {
t.errorf("unclosed right paren: unexpected %s", token)
}
return pipe
case itemString, itemRawString:
s, err := strconv.Unquote(token.val)
if err != nil {
t.error(err)
}
return t.newString(token.pos, token.val, s)
}
t.backup()
return nil
}
// hasFunction reports if a function name exists in the Tree's maps.
func (t *Tree) hasFunction(name string) bool {
for _, funcMap := range t.funcs {
if funcMap == nil {
continue
}
if funcMap[name] != nil {
return true
}
}
return false
}
// popVars trims the variable list to the specified length
func (t *Tree) popVars(n int) {
t.vars = t.vars[:n]
}
// useVar returns a node for a variable reference. It errors if the
// variable is not defined.
func (t *Tree) useVar(pos Pos, name string) Node {
v := t.newVariable(pos, name)
for _, varName := range t.vars {
if varName == v.Ident[0] {
return v
}
}
t.errorf("undefined variable %q", v.Ident[0])
return nil
}
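
As a sketch of how Parse fills the treeSet map described above, the standard library's text/template/parse package (which this vendored copy mirrors) can be used directly; the template name "page" and the body text are made up for the example.

```go
package main

import (
	"fmt"
	"text/template/parse"
)

func main() {
	const text = `body {{template "extra"}}{{define "extra"}}extra body{{end}}`
	// The result maps template names to their parse trees: the top-level
	// "page" tree plus one tree per embedded {{define}}.
	trees, err := parse.Parse("page", text, "{{", "}}")
	if err != nil {
		panic(err)
	}
	for name, tree := range trees {
		fmt.Printf("%s: %s\n", name, tree.Root)
	}
}
```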


@@ -1,218 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package template
import (
"fmt"
"reflect"
"github.com/alecthomas/template/parse"
)
// common holds the information shared by related templates.
type common struct {
tmpl map[string]*Template
// We use two maps, one for parsing and one for execution.
// This separation makes the API cleaner since it doesn't
// expose reflection to the client.
parseFuncs FuncMap
execFuncs map[string]reflect.Value
}
// Template is the representation of a parsed template. The *parse.Tree
// field is exported only for use by html/template and should be treated
// as unexported by all other clients.
type Template struct {
name string
*parse.Tree
*common
leftDelim string
rightDelim string
}
// New allocates a new template with the given name.
func New(name string) *Template {
return &Template{
name: name,
}
}
// Name returns the name of the template.
func (t *Template) Name() string {
return t.name
}
// New allocates a new template associated with the given one and with the same
// delimiters. The association, which is transitive, allows one template to
// invoke another with a {{template}} action.
func (t *Template) New(name string) *Template {
t.init()
return &Template{
name: name,
common: t.common,
leftDelim: t.leftDelim,
rightDelim: t.rightDelim,
}
}
func (t *Template) init() {
if t.common == nil {
t.common = new(common)
t.tmpl = make(map[string]*Template)
t.parseFuncs = make(FuncMap)
t.execFuncs = make(map[string]reflect.Value)
}
}
// Clone returns a duplicate of the template, including all associated
// templates. The actual representation is not copied, but the name space of
// associated templates is, so further calls to Parse in the copy will add
// templates to the copy but not to the original. Clone can be used to prepare
// common templates and use them with variant definitions for other templates
// by adding the variants after the clone is made.
func (t *Template) Clone() (*Template, error) {
nt := t.copy(nil)
nt.init()
nt.tmpl[t.name] = nt
for k, v := range t.tmpl {
if k == t.name { // Already installed.
continue
}
// The associated templates share nt's common structure.
tmpl := v.copy(nt.common)
nt.tmpl[k] = tmpl
}
for k, v := range t.parseFuncs {
nt.parseFuncs[k] = v
}
for k, v := range t.execFuncs {
nt.execFuncs[k] = v
}
return nt, nil
}
// copy returns a shallow copy of t, with common set to the argument.
func (t *Template) copy(c *common) *Template {
nt := New(t.name)
nt.Tree = t.Tree
nt.common = c
nt.leftDelim = t.leftDelim
nt.rightDelim = t.rightDelim
return nt
}
// AddParseTree creates a new template with the name and parse tree
// and associates it with t.
func (t *Template) AddParseTree(name string, tree *parse.Tree) (*Template, error) {
if t.common != nil && t.tmpl[name] != nil {
return nil, fmt.Errorf("template: redefinition of template %q", name)
}
nt := t.New(name)
nt.Tree = tree
t.tmpl[name] = nt
return nt, nil
}
// Templates returns a slice of the templates associated with t, including t
// itself.
func (t *Template) Templates() []*Template {
if t.common == nil {
return nil
}
// Return a slice so we don't expose the map.
m := make([]*Template, 0, len(t.tmpl))
for _, v := range t.tmpl {
m = append(m, v)
}
return m
}
// Delims sets the action delimiters to the specified strings, to be used in
// subsequent calls to Parse, ParseFiles, or ParseGlob. Nested template
// definitions will inherit the settings. An empty delimiter stands for the
// corresponding default: {{ or }}.
// The return value is the template, so calls can be chained.
func (t *Template) Delims(left, right string) *Template {
t.leftDelim = left
t.rightDelim = right
return t
}
// Funcs adds the elements of the argument map to the template's function map.
// It panics if a value in the map is not a function with appropriate return
// type. However, it is legal to overwrite elements of the map. The return
// value is the template, so calls can be chained.
func (t *Template) Funcs(funcMap FuncMap) *Template {
t.init()
addValueFuncs(t.execFuncs, funcMap)
addFuncs(t.parseFuncs, funcMap)
return t
}
// Lookup returns the template with the given name that is associated with t,
// or nil if there is no such template.
func (t *Template) Lookup(name string) *Template {
if t.common == nil {
return nil
}
return t.tmpl[name]
}
// Parse parses a string into a template. Nested template definitions will be
// associated with the top-level template t. Parse may be called multiple times
// to parse definitions of templates to associate with t. It is an error if a
// resulting template is non-empty (contains content other than template
// definitions) and would replace a non-empty template with the same name.
// (In multiple calls to Parse with the same receiver template, only one call
// can contain text other than space, comments, and template definitions.)
func (t *Template) Parse(text string) (*Template, error) {
t.init()
trees, err := parse.Parse(t.name, text, t.leftDelim, t.rightDelim, t.parseFuncs, builtins)
if err != nil {
return nil, err
}
// Add the newly parsed trees, including the one for t, into our common structure.
for name, tree := range trees {
// If the name we parsed is the name of this template, overwrite this template.
// The associate method checks it's not a redefinition.
tmpl := t
if name != t.name {
tmpl = t.New(name)
}
// Even if t == tmpl, we need to install it in the common.tmpl map.
if replace, err := t.associate(tmpl, tree); err != nil {
return nil, err
} else if replace {
tmpl.Tree = tree
}
tmpl.leftDelim = t.leftDelim
tmpl.rightDelim = t.rightDelim
}
return t, nil
}
// associate installs the new template into the group of templates associated
// with t. It is an error to reuse a name except to overwrite an empty
// template. The two are already known to share the common structure.
// The boolean return value reports whether to store this tree as t.Tree.
func (t *Template) associate(new *Template, tree *parse.Tree) (bool, error) {
if new.common != t.common {
panic("internal error: associate not common")
}
name := new.name
if old := t.tmpl[name]; old != nil {
oldIsEmpty := parse.IsEmptyTree(old.Root)
newIsEmpty := parse.IsEmptyTree(tree.Root)
if newIsEmpty {
// Whether old is empty or not, new is empty; no reason to replace old.
return false, nil
}
if !oldIsEmpty {
return false, fmt.Errorf("template: redefinition of template %q", name)
}
}
t.tmpl[name] = new
return true, nil
}
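
A minimal usage sketch of the New/Funcs/Parse flow documented above, written against the standard library's text/template, whose API this vendored fork follows; the "upper" function name is an arbitrary example.

```go
package main

import (
	"os"
	"strings"
	"text/template"
)

func main() {
	// Funcs must be installed before Parse so the parser can resolve
	// the function names used inside actions.
	t := template.Must(template.New("greet").
		Funcs(template.FuncMap{"upper": strings.ToUpper}).
		Parse(`Hello, {{upper .}}!`))
	if err := t.Execute(os.Stdout, "world"); err != nil { // prints "Hello, WORLD!"
		panic(err)
	}
}
```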


@@ -1,19 +0,0 @@
Copyright (C) 2014 Alec Thomas
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,11 +0,0 @@
# Units - Helpful unit multipliers and functions for Go
The goal of this package is to have functionality similar to the [time](http://golang.org/pkg/time/) package.
It allows for code like this:
```go
n, err := ParseBase2Bytes("1KB")
// n == 1024
n = units.Mebibyte * 512
```


@@ -1,85 +0,0 @@
package units
// Base2Bytes is the old non-SI power-of-2 byte scale (1024 bytes in a kilobyte,
// etc.).
type Base2Bytes int64
// Base-2 byte units.
const (
Kibibyte Base2Bytes = 1024
KiB = Kibibyte
Mebibyte = Kibibyte * 1024
MiB = Mebibyte
Gibibyte = Mebibyte * 1024
GiB = Gibibyte
Tebibyte = Gibibyte * 1024
TiB = Tebibyte
Pebibyte = Tebibyte * 1024
PiB = Pebibyte
Exbibyte = Pebibyte * 1024
EiB = Exbibyte
)
var (
bytesUnitMap = MakeUnitMap("iB", "B", 1024)
oldBytesUnitMap = MakeUnitMap("B", "B", 1024)
)
// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
// and KiB are both 1024.
// However "kB", which is the correct SI spelling of 1000 Bytes, is rejected.
func ParseBase2Bytes(s string) (Base2Bytes, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, oldBytesUnitMap)
}
return Base2Bytes(n), err
}
func (b Base2Bytes) String() string {
return ToString(int64(b), 1024, "iB", "B")
}
var (
metricBytesUnitMap = MakeUnitMap("B", "B", 1000)
)
// MetricBytes are SI byte units (1000 bytes in a kilobyte).
type MetricBytes SI
// SI base-10 byte units.
const (
Kilobyte MetricBytes = 1000
KB = Kilobyte
Megabyte = Kilobyte * 1000
MB = Megabyte
Gigabyte = Megabyte * 1000
GB = Gigabyte
Terabyte = Gigabyte * 1000
TB = Terabyte
Petabyte = Terabyte * 1000
PB = Petabyte
Exabyte = Petabyte * 1000
EB = Exabyte
)
// ParseMetricBytes parses base-10 metric byte units. That is, KB is 1000 bytes.
func ParseMetricBytes(s string) (MetricBytes, error) {
n, err := ParseUnit(s, metricBytesUnitMap)
return MetricBytes(n), err
}
// TODO: String represents 1000 B as uppercase "KB", while the SI standard requires "kB".
func (m MetricBytes) String() string {
return ToString(int64(m), 1000, "B", "B")
}
// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
// respectively. That is, KiB represents 1024 and kB, KB represent 1000.
func ParseStrictBytes(s string) (int64, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
n, err = ParseUnit(s, metricBytesUnitMap)
}
return int64(n), err
}
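
A short sketch of the parsing rules above, assuming the package is imported from its upstream path github.com/alecthomas/units:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	b2, _ := units.ParseBase2Bytes("1KB") // legacy "KB" is accepted as base-2
	fmt.Println(int64(b2))                // 1024

	mb, _ := units.ParseMetricBytes("1KB") // SI mode: "KB" is 1000
	fmt.Println(int64(mb))                 // 1000

	n, _ := units.ParseStrictBytes("1KiB") // strict: "KiB" is 1024, "kB"/"KB" are 1000
	fmt.Println(n)                         // 1024

	fmt.Println(units.Base2Bytes(1<<20 + 5)) // 1MiB5B
	fmt.Println(units.MetricBytes(1500))     // 1KB500B
}
```
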


@@ -1,13 +0,0 @@
// Package units provides helpful unit multipliers and functions for Go.
//
// The goal of this package is to have functionality similar to the time [1] package.
//
//
// [1] http://golang.org/pkg/time/
//
// It allows for code like this:
//
// n, err := ParseBase2Bytes("1KB")
// // n == 1024
// n = units.Mebibyte * 512
package units


@@ -1,3 +0,0 @@
module github.com/alecthomas/units
require github.com/stretchr/testify v1.4.0


@@ -1,11 +0,0 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=


@@ -1,50 +0,0 @@
package units
// SI units.
type SI int64
// SI unit multiples.
const (
Kilo SI = 1000
Mega = Kilo * 1000
Giga = Mega * 1000
Tera = Giga * 1000
Peta = Tera * 1000
Exa = Peta * 1000
)
func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
res := map[string]float64{
shortSuffix: 1,
// see below for "k" / "K"
"M" + suffix: float64(scale * scale),
"G" + suffix: float64(scale * scale * scale),
"T" + suffix: float64(scale * scale * scale * scale),
"P" + suffix: float64(scale * scale * scale * scale * scale),
"E" + suffix: float64(scale * scale * scale * scale * scale * scale),
}
// Standard SI prefixes use lowercase "k" for kilo = 1000.
// For compatibility, and to be fool-proof, we accept both "k" and "K" in metric mode.
//
// However, official binary prefixes are always capitalized - "KiB" -
// and we specifically never parse "kB" as 1024B because:
//
// (1) people pedantic enough to use lowercase according to SI are unlikely to abuse "k" to mean 1024 :-)
//
// (2) Use of capital K for 1024 was an informal tradition predating IEC prefixes:
// "The binary meaning of the kilobyte for 1024 bytes typically uses the symbol KB, with an
// uppercase letter K."
// -- https://en.wikipedia.org/wiki/Kilobyte#Base_2_(1024_bytes)
// "Capitalization of the letter K became the de facto standard for binary notation, although this
// could not be extended to higher powers, and use of the lowercase k did persist.[13][14][15]"
// -- https://en.wikipedia.org/wiki/Binary_prefix#History
// See also the extensive https://en.wikipedia.org/wiki/Timeline_of_binary_prefixes.
if scale == 1024 {
res["K"+suffix] = float64(scale)
} else {
res["k"+suffix] = float64(scale)
res["K"+suffix] = float64(scale)
}
return res
}
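
MakeUnitMap pairs with ParseUnit (defined in the next file) for unit families beyond bytes. A sketch with a hypothetical frequency unit, illustrating that base-10 maps accept both "k" and "K":

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	// Hypothetical frequency units: base-10 scaling, so both "kHz" and "KHz" parse.
	hz := units.MakeUnitMap("Hz", "Hz", 1000)

	n, err := units.ParseUnit("2.5MHz", hz)
	fmt.Println(n, err) // 2500000 <nil>

	n, err = units.ParseUnit("750kHz", hz)
	fmt.Println(n, err) // 750000 <nil>
}
```
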


@@ -1,138 +0,0 @@
package units
import (
"errors"
"fmt"
"strings"
)
var (
siUnits = []string{"", "K", "M", "G", "T", "P", "E"}
)
func ToString(n int64, scale int64, suffix, baseSuffix string) string {
mn := len(siUnits)
out := make([]string, mn)
for i, m := range siUnits {
if n%scale != 0 || i == 0 && n == 0 {
s := suffix
if i == 0 {
s = baseSuffix
}
out[mn-1-i] = fmt.Sprintf("%d%s%s", n%scale, m, s)
}
n /= scale
if n == 0 {
break
}
}
return strings.Join(out, "")
}
// Below code ripped straight from http://golang.org/src/pkg/time/format.go?s=33392:33438#L1123
var errLeadingInt = errors.New("units: bad [0-9]*") // never printed
// leadingInt consumes the leading [0-9]* from s.
func leadingInt(s string) (x int64, rem string, err error) {
i := 0
for ; i < len(s); i++ {
c := s[i]
if c < '0' || c > '9' {
break
}
if x >= (1<<63-10)/10 {
// overflow
return 0, "", errLeadingInt
}
x = x*10 + int64(c) - '0'
}
return x, s[i:], nil
}
func ParseUnit(s string, unitMap map[string]float64) (int64, error) {
// [-+]?([0-9]*(\.[0-9]*)?[a-z]+)+
orig := s
f := float64(0)
neg := false
// Consume [-+]?
if s != "" {
c := s[0]
if c == '-' || c == '+' {
neg = c == '-'
s = s[1:]
}
}
// Special case: if all that is left is "0", this is zero.
if s == "0" {
return 0, nil
}
if s == "" {
return 0, errors.New("units: invalid " + orig)
}
for s != "" {
g := float64(0) // this element of the sequence
var x int64
var err error
// The next character must be [0-9.]
if !(s[0] == '.' || ('0' <= s[0] && s[0] <= '9')) {
return 0, errors.New("units: invalid " + orig)
}
// Consume [0-9]*
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
g = float64(x)
pre := pl != len(s) // whether we consumed anything before a period
// Consume (\.[0-9]*)?
post := false
if s != "" && s[0] == '.' {
s = s[1:]
pl := len(s)
x, s, err = leadingInt(s)
if err != nil {
return 0, errors.New("units: invalid " + orig)
}
scale := 1.0
for n := pl - len(s); n > 0; n-- {
scale *= 10
}
g += float64(x) / scale
post = pl != len(s)
}
if !pre && !post {
// no digits (e.g. ".s" or "-.s")
return 0, errors.New("units: invalid " + orig)
}
// Consume unit.
i := 0
for ; i < len(s); i++ {
c := s[i]
if c == '.' || ('0' <= c && c <= '9') {
break
}
}
u := s[:i]
s = s[i:]
unit, ok := unitMap[u]
if !ok {
return 0, errors.New("units: unknown unit " + u + " in " + orig)
}
f += g * unit
}
if neg {
f = -f
}
if f < float64(-1<<63) || f > float64(1<<63-1) {
return 0, errors.New("units: overflow parsing unit")
}
return int64(f), nil
}


@@ -1,20 +0,0 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

File diff suppressed because it is too large Load Diff


@@ -1,316 +0,0 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targetMap map[float64]float64) *Stream {
// Convert map to slice to avoid slow iterations on a map.
// ƒ is called on the hot path, so converting the map to a slice
// beforehand results in significant CPU savings.
targets := targetMapToSlice(targetMap)
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for _, t := range targets {
if t.quantile*s.n <= r {
f = (2 * t.epsilon * r) / t.quantile
} else {
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
type target struct {
quantile float64
epsilon float64
}
func targetMapToSlice(targetMap map[float64]float64) []target {
targets := make([]target, 0, len(targetMap))
for quantile, epsilon := range targetMap {
t := target{
quantile: quantile,
epsilon: epsilon,
}
targets = append(targets, t)
}
return targets
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed qth percentiles value. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying streams samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unittests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}
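
A minimal usage sketch, assuming the upstream import path github.com/beorn7/perks/quantile suggested by the TODO(beorn7) comments above:

```go
package main

import (
	"fmt"
	"math/rand"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the median and the 99th percentile with absolute errors 0.05 and 0.001.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.05,
		0.99: 0.001,
	})

	for i := 0; i < 10000; i++ {
		q.Insert(rand.Float64() * 100)
	}

	fmt.Printf("p50=%.1f p99=%.1f count=%d\n", q.Query(0.50), q.Query(0.99), q.Count())
}
```
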


@@ -1,8 +0,0 @@
language: go
go:
- "1.x"
- master
env:
- TAGS=""
- TAGS="-tags purego"
script: go test $TAGS -v ./...


@@ -1,22 +0,0 @@
Copyright (c) 2016 Caleb Spare
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,67 +0,0 @@
# xxhash
[![GoDoc](https://godoc.org/github.com/cespare/xxhash?status.svg)](https://godoc.org/github.com/cespare/xxhash)
[![Build Status](https://travis-ci.org/cespare/xxhash.svg?branch=master)](https://travis-ci.org/cespare/xxhash)
xxhash is a Go implementation of the 64-bit
[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
This package provides a straightforward API:
```
func Sum64(b []byte) uint64
func Sum64String(s string) uint64
type Digest struct{ ... }
func New() *Digest
```
The `Digest` type implements hash.Hash64. Its key methods are:
```
func (*Digest) Write([]byte) (int, error)
func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
This implementation provides a fast pure-Go implementation and an even faster
assembly implementation for amd64.
## Compatibility
This package is in a module and the latest code is in version 2 of the module.
You need a version of Go with at least "minimal module compatibility" to use
github.com/cespare/xxhash/v2:
* 1.9.7+ for Go 1.9
* 1.10.3+ for Go 1.10
* Go 1.11 or later
I recommend using the latest release of Go.
## Benchmarks
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
| input size | purego | asm |
| --- | --- | --- |
| 5 B | 979.66 MB/s | 1291.17 MB/s |
| 100 B | 7475.26 MB/s | 7973.40 MB/s |
| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
the following commands under Go 1.11.2:
```
$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
```
## Projects using this package
- [InfluxDB](https://github.com/influxdata/influxdb)
- [Prometheus](https://github.com/prometheus/prometheus)
- [FreeCache](https://github.com/coocood/freecache)
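
For completeness, a short sketch of the one-shot and streaming APIs listed above:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a string.
	fmt.Printf("%x\n", xxhash.Sum64String("hello"))

	// Streaming via the Digest type, which implements hash.Hash64.
	d := xxhash.New()
	d.WriteString("hello, ")
	d.Write([]byte("world"))
	fmt.Printf("%x\n", d.Sum64())
}
```
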


@@ -1,3 +0,0 @@
module github.com/cespare/xxhash/v2
go 1.11


@@ -1,236 +0,0 @@
// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described
// at http://cyan4973.github.io/xxHash/.
package xxhash
import (
"encoding/binary"
"errors"
"math/bits"
)
const (
prime1 uint64 = 11400714785074694791
prime2 uint64 = 14029467366897019727
prime3 uint64 = 1609587929392839161
prime4 uint64 = 9650029242287828579
prime5 uint64 = 2870177450012600261
)
// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
// possible in the Go code is worth a small (but measurable) performance boost
// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
// convenience in the Go code in a few places where we need to intentionally
// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
// result overflows a uint64).
var (
prime1v = prime1
prime2v = prime2
prime3v = prime3
prime4v = prime4
prime5v = prime5
)
// Digest implements hash.Hash64.
type Digest struct {
v1 uint64
v2 uint64
v3 uint64
v4 uint64
total uint64
mem [32]byte
n int // how much of mem is used
}
// New creates a new Digest that computes the 64-bit xxHash algorithm.
func New() *Digest {
var d Digest
d.Reset()
return &d
}
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
d.v1 = prime1v + prime2
d.v2 = prime2
d.v3 = 0
d.v4 = -prime1v
d.total = 0
d.n = 0
}
// Size always returns 8 bytes.
func (d *Digest) Size() int { return 8 }
// BlockSize always returns 32 bytes.
func (d *Digest) BlockSize() int { return 32 }
// Write adds more data to d. It always returns len(b), nil.
func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
if d.n+n < 32 {
// This new data doesn't even fill the current block.
copy(d.mem[d.n:], b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
copy(d.mem[d.n:], b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
b = b[32-d.n:]
d.n = 0
}
if len(b) >= 32 {
// One or more full blocks left.
nw := writeBlocks(d, b)
b = b[nw:]
}
// Store any remaining partial block.
copy(d.mem[:], b)
d.n = len(b)
return
}
// Sum appends the current hash to b and returns the resulting slice.
func (d *Digest) Sum(b []byte) []byte {
s := d.Sum64()
return append(
b,
byte(s>>56),
byte(s>>48),
byte(s>>40),
byte(s>>32),
byte(s>>24),
byte(s>>16),
byte(s>>8),
byte(s),
)
}
// Sum64 returns the current hash.
func (d *Digest) Sum64() uint64 {
var h uint64
if d.total >= 32 {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = d.v3 + prime5
}
h += d.total
i, end := 0, d.n
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(d.mem[i:i+8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(d.mem[i:i+4])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for i < end {
h ^= uint64(d.mem[i]) * prime5
h = rol11(h) * prime1
i++
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
const (
magic = "xxh\x06"
marshaledSize = len(magic) + 8*5 + 32
)
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (d *Digest) MarshalBinary() ([]byte, error) {
b := make([]byte, 0, marshaledSize)
b = append(b, magic...)
b = appendUint64(b, d.v1)
b = appendUint64(b, d.v2)
b = appendUint64(b, d.v3)
b = appendUint64(b, d.v4)
b = appendUint64(b, d.total)
b = append(b, d.mem[:d.n]...)
b = b[:len(b)+len(d.mem)-d.n]
return b, nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (d *Digest) UnmarshalBinary(b []byte) error {
if len(b) < len(magic) || string(b[:len(magic)]) != magic {
return errors.New("xxhash: invalid hash state identifier")
}
if len(b) != marshaledSize {
return errors.New("xxhash: invalid hash state size")
}
b = b[len(magic):]
b, d.v1 = consumeUint64(b)
b, d.v2 = consumeUint64(b)
b, d.v3 = consumeUint64(b)
b, d.v4 = consumeUint64(b)
b, d.total = consumeUint64(b)
copy(d.mem[:], b)
b = b[len(d.mem):]
d.n = int(d.total % uint64(len(d.mem)))
return nil
}
func appendUint64(b []byte, x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return append(b, a[:]...)
}
func consumeUint64(b []byte) ([]byte, uint64) {
x := u64(b)
return b[8:], x
}
func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) }
func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) }
func round(acc, input uint64) uint64 {
acc += input * prime2
acc = rol31(acc)
acc *= prime1
return acc
}
func mergeRound(acc, val uint64) uint64 {
val = round(0, val)
acc ^= val
acc = acc*prime1 + prime4
return acc
}
func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }


@@ -1,13 +0,0 @@
// +build !appengine
// +build gc
// +build !purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
//
//go:noescape
func Sum64(b []byte) uint64
//go:noescape
func writeBlocks(d *Digest, b []byte) int


@@ -1,215 +0,0 @@
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
// Register allocation:
// AX h
// CX pointer to advance through b
// DX n
// BX loop end
// R8 v1, k1
// R9 v2
// R10 v3
// R11 v4
// R12 tmp
// R13 prime1v
// R14 prime2v
// R15 prime4v
// round reads from and advances the buffer pointer in CX.
// It assumes that R13 has prime1v and R14 has prime2v.
#define round(r) \
MOVQ (CX), R12 \
ADDQ $8, CX \
IMULQ R14, R12 \
ADDQ R12, r \
ROLQ $31, r \
IMULQ R13, r
// mergeRound applies a merge round on the two registers acc and val.
// It assumes that R13 has prime1v, R14 has prime2v, and R15 has prime4v.
#define mergeRound(acc, val) \
IMULQ R14, val \
ROLQ $31, val \
IMULQ R13, val \
XORQ val, acc \
IMULQ R13, acc \
ADDQ R15, acc
// func Sum64(b []byte) uint64
TEXT ·Sum64(SB), NOSPLIT, $0-32
// Load fixed primes.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
MOVQ ·prime4v(SB), R15
// Load slice.
MOVQ b_base+0(FP), CX
MOVQ b_len+8(FP), DX
LEAQ (CX)(DX*1), BX
// The first loop limit will be len(b)-32.
SUBQ $32, BX
// Check whether we have at least one block.
CMPQ DX, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
MOVQ R13, R8
ADDQ R14, R8
MOVQ R14, R9
XORQ R10, R10
XORQ R11, R11
SUBQ R13, R11
// Loop until CX > BX.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
MOVQ R8, AX
ROLQ $1, AX
MOVQ R9, R12
ROLQ $7, R12
ADDQ R12, AX
MOVQ R10, R12
ROLQ $12, R12
ADDQ R12, AX
MOVQ R11, R12
ROLQ $18, R12
ADDQ R12, AX
mergeRound(AX, R8)
mergeRound(AX, R9)
mergeRound(AX, R10)
mergeRound(AX, R11)
JMP afterBlocks
noBlocks:
MOVQ ·prime5v(SB), AX
afterBlocks:
ADDQ DX, AX
// Right now BX has len(b)-32, and we want to loop until CX > len(b)-8.
ADDQ $24, BX
CMPQ CX, BX
JG fourByte
wordLoop:
// Calculate k1.
MOVQ (CX), R8
ADDQ $8, CX
IMULQ R14, R8
ROLQ $31, R8
IMULQ R13, R8
XORQ R8, AX
ROLQ $27, AX
IMULQ R13, AX
ADDQ R15, AX
CMPQ CX, BX
JLE wordLoop
fourByte:
ADDQ $4, BX
CMPQ CX, BX
JG singles
MOVL (CX), R8
ADDQ $4, CX
IMULQ R13, R8
XORQ R8, AX
ROLQ $23, AX
IMULQ R14, AX
ADDQ ·prime3v(SB), AX
singles:
ADDQ $4, BX
CMPQ CX, BX
JGE finalize
singlesLoop:
MOVBQZX (CX), R12
ADDQ $1, CX
IMULQ ·prime5v(SB), R12
XORQ R12, AX
ROLQ $11, AX
IMULQ R13, AX
CMPQ CX, BX
JL singlesLoop
finalize:
MOVQ AX, R12
SHRQ $33, R12
XORQ R12, AX
IMULQ R14, AX
MOVQ AX, R12
SHRQ $29, R12
XORQ R12, AX
IMULQ ·prime3v(SB), AX
MOVQ AX, R12
SHRQ $32, R12
XORQ R12, AX
MOVQ AX, ret+24(FP)
RET
// writeBlocks uses the same registers as above except that it uses AX to store
// the d pointer.
// func writeBlocks(d *Digest, b []byte) int
TEXT ·writeBlocks(SB), NOSPLIT, $0-40
// Load fixed primes needed for round.
MOVQ ·prime1v(SB), R13
MOVQ ·prime2v(SB), R14
// Load slice.
MOVQ b_base+8(FP), CX
MOVQ b_len+16(FP), DX
LEAQ (CX)(DX*1), BX
SUBQ $32, BX
// Load vN from d.
MOVQ d+0(FP), AX
MOVQ 0(AX), R8 // v1
MOVQ 8(AX), R9 // v2
MOVQ 16(AX), R10 // v3
MOVQ 24(AX), R11 // v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
blockLoop:
round(R8)
round(R9)
round(R10)
round(R11)
CMPQ CX, BX
JLE blockLoop
// Copy vN back to d.
MOVQ R8, 0(AX)
MOVQ R9, 8(AX)
MOVQ R10, 16(AX)
MOVQ R11, 24(AX)
// The number of bytes written is CX minus the old base pointer.
SUBQ b_base+8(FP), CX
MOVQ CX, ret+32(FP)
RET


@@ -1,76 +0,0 @@
// +build !amd64 appengine !gc purego
package xxhash
// Sum64 computes the 64-bit xxHash digest of b.
func Sum64(b []byte) uint64 {
// A simpler version would be
// d := New()
// d.Write(b)
// return d.Sum64()
// but this is faster, particularly for small inputs.
n := len(b)
var h uint64
if n >= 32 {
v1 := prime1v + prime2
v2 := prime2
v3 := uint64(0)
v4 := -prime1v
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
h = mergeRound(h, v1)
h = mergeRound(h, v2)
h = mergeRound(h, v3)
h = mergeRound(h, v4)
} else {
h = prime5
}
h += uint64(n)
i, end := 0, len(b)
for ; i+8 <= end; i += 8 {
k1 := round(0, u64(b[i:i+8:len(b)]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
if i+4 <= end {
h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
h = rol23(h)*prime2 + prime3
i += 4
}
for ; i < end; i++ {
h ^= uint64(b[i]) * prime5
h = rol11(h) * prime1
}
h ^= h >> 33
h *= prime2
h ^= h >> 29
h *= prime3
h ^= h >> 32
return h
}
func writeBlocks(d *Digest, b []byte) int {
v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
n := len(b)
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
v3 = round(v3, u64(b[16:24:len(b)]))
v4 = round(v4, u64(b[24:32:len(b)]))
b = b[32:len(b):len(b)]
}
d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4
return n - len(b)
}


@@ -1,15 +0,0 @@
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
package xxhash
// Sum64String computes the 64-bit xxHash digest of s.
func Sum64String(s string) uint64 {
return Sum64([]byte(s))
}
// WriteString adds more data to d. It always returns len(s), nil.
func (d *Digest) WriteString(s string) (n int, err error) {
return d.Write([]byte(s))
}


@@ -1,46 +0,0 @@
// +build !appengine
// This file encapsulates usage of unsafe.
// xxhash_safe.go contains the safe implementations.
package xxhash
import (
"reflect"
"unsafe"
)
// Notes:
//
// See https://groups.google.com/d/msg/golang-nuts/dcjzJy-bSpw/tcZYBzQqAQAJ
// for some discussion about these unsafe conversions.
//
// In the future it's possible that compiler optimizations will make these
// unsafe operations unnecessary: https://golang.org/issue/2205.
//
// Both of these wrapper functions still incur function call overhead since they
// will not be inlined. We could write Go/asm copies of Sum64 and Digest.Write
// for strings to squeeze out a bit more speed. Mid-stack inlining should
// eventually fix this.
// Sum64String computes the 64-bit xxHash digest of s.
// It may be faster than Sum64([]byte(s)) by avoiding a copy.
func Sum64String(s string) uint64 {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return Sum64(b)
}
// WriteString adds more data to d. It always returns len(s), nil.
// It may be faster than Write([]byte(s)) by avoiding a copy.
func (d *Digest) WriteString(s string) (n int, err error) {
var b []byte
bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
bh.Len = len(s)
bh.Cap = len(s)
return d.Write(b)
}


@@ -1,3 +0,0 @@
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.


@@ -1,3 +0,0 @@
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.


@@ -1,28 +0,0 @@
Copyright 2010 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,324 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"errors"
"fmt"
"google.golang.org/protobuf/encoding/prototext"
"google.golang.org/protobuf/encoding/protowire"
"google.golang.org/protobuf/runtime/protoimpl"
)
const (
WireVarint = 0
WireFixed32 = 5
WireFixed64 = 1
WireBytes = 2
WireStartGroup = 3
WireEndGroup = 4
)
// EncodeVarint returns the varint encoded bytes of v.
func EncodeVarint(v uint64) []byte {
return protowire.AppendVarint(nil, v)
}
// SizeVarint returns the length of the varint encoded bytes of v.
// This is equal to len(EncodeVarint(v)).
func SizeVarint(v uint64) int {
return protowire.SizeVarint(v)
}
// DecodeVarint parses a varint encoded integer from b,
// returning the integer value and the length of the varint.
// It returns (0, 0) if there is a parse error.
func DecodeVarint(b []byte) (uint64, int) {
v, n := protowire.ConsumeVarint(b)
if n < 0 {
return 0, 0
}
return v, n
}
// Buffer is a buffer for encoding and decoding the protobuf wire format.
// It may be reused between invocations to reduce memory usage.
type Buffer struct {
buf []byte
idx int
deterministic bool
}
// NewBuffer allocates a new Buffer initialized with buf,
// where the contents of buf are considered the unread portion of the buffer.
func NewBuffer(buf []byte) *Buffer {
return &Buffer{buf: buf}
}
// SetDeterministic specifies whether to use deterministic serialization.
//
// Deterministic serialization guarantees that for a given binary, equal
// messages will always be serialized to the same bytes. This implies:
//
// - Repeated serialization of a message will return the same bytes.
// - Different processes of the same binary (which may be executing on
// different machines) will serialize equal messages to the same bytes.
//
// Note that the deterministic serialization is NOT canonical across
// languages. It is not guaranteed to remain stable over time. It is unstable
// across different builds with schema changes due to unknown fields.
// Users who need canonical serialization (e.g., persistent storage in a
// canonical form, fingerprinting, etc.) should define their own
// canonicalization specification and implement their own serializer rather
// than relying on this API.
//
// If deterministic serialization is requested, map entries will be sorted
// by keys in lexicographical order. This is an implementation detail and
// subject to change.
func (b *Buffer) SetDeterministic(deterministic bool) {
b.deterministic = deterministic
}
// SetBuf sets buf as the internal buffer,
// where the contents of buf are considered the unread portion of the buffer.
func (b *Buffer) SetBuf(buf []byte) {
b.buf = buf
b.idx = 0
}
// Reset clears the internal buffer of all written and unread data.
func (b *Buffer) Reset() {
b.buf = b.buf[:0]
b.idx = 0
}
// Bytes returns the internal buffer.
func (b *Buffer) Bytes() []byte {
return b.buf
}
// Unread returns the unread portion of the buffer.
func (b *Buffer) Unread() []byte {
return b.buf[b.idx:]
}
// Marshal appends the wire-format encoding of m to the buffer.
func (b *Buffer) Marshal(m Message) error {
var err error
b.buf, err = marshalAppend(b.buf, m, b.deterministic)
return err
}
// Unmarshal parses the wire-format message in the buffer and
// places the decoded results in m.
// It does not reset m before unmarshaling.
func (b *Buffer) Unmarshal(m Message) error {
err := UnmarshalMerge(b.Unread(), m)
b.idx = len(b.buf)
return err
}
type unknownFields struct{ XXX_unrecognized protoimpl.UnknownFields }
func (m *unknownFields) String() string { panic("not implemented") }
func (m *unknownFields) Reset() { panic("not implemented") }
func (m *unknownFields) ProtoMessage() { panic("not implemented") }
// DebugPrint dumps the encoded bytes of b with a header and footer including s
// to stdout. This is only intended for debugging.
func (*Buffer) DebugPrint(s string, b []byte) {
m := MessageReflect(new(unknownFields))
m.SetUnknown(b)
b, _ = prototext.MarshalOptions{AllowPartial: true, Indent: "\t"}.Marshal(m.Interface())
fmt.Printf("==== %s ====\n%s==== %s ====\n", s, b, s)
}
// EncodeVarint appends an unsigned varint encoding to the buffer.
func (b *Buffer) EncodeVarint(v uint64) error {
b.buf = protowire.AppendVarint(b.buf, v)
return nil
}
// EncodeZigzag32 appends a 32-bit zig-zag varint encoding to the buffer.
func (b *Buffer) EncodeZigzag32(v uint64) error {
return b.EncodeVarint(uint64((uint32(v) << 1) ^ uint32((int32(v) >> 31))))
}
// EncodeZigzag64 appends a 64-bit zig-zag varint encoding to the buffer.
func (b *Buffer) EncodeZigzag64(v uint64) error {
return b.EncodeVarint(uint64((uint64(v) << 1) ^ uint64((int64(v) >> 63))))
}
// EncodeFixed32 appends a 32-bit little-endian integer to the buffer.
func (b *Buffer) EncodeFixed32(v uint64) error {
b.buf = protowire.AppendFixed32(b.buf, uint32(v))
return nil
}
// EncodeFixed64 appends a 64-bit little-endian integer to the buffer.
func (b *Buffer) EncodeFixed64(v uint64) error {
b.buf = protowire.AppendFixed64(b.buf, uint64(v))
return nil
}
// EncodeRawBytes appends a length-prefixed raw bytes to the buffer.
func (b *Buffer) EncodeRawBytes(v []byte) error {
b.buf = protowire.AppendBytes(b.buf, v)
return nil
}
// EncodeStringBytes appends a length-prefixed raw bytes to the buffer.
// It does not validate whether v contains valid UTF-8.
func (b *Buffer) EncodeStringBytes(v string) error {
b.buf = protowire.AppendString(b.buf, v)
return nil
}
// EncodeMessage appends a length-prefixed encoded message to the buffer.
func (b *Buffer) EncodeMessage(m Message) error {
var err error
b.buf = protowire.AppendVarint(b.buf, uint64(Size(m)))
b.buf, err = marshalAppend(b.buf, m, b.deterministic)
return err
}
// DecodeVarint consumes an encoded unsigned varint from the buffer.
func (b *Buffer) DecodeVarint() (uint64, error) {
v, n := protowire.ConsumeVarint(b.buf[b.idx:])
if n < 0 {
return 0, protowire.ParseError(n)
}
b.idx += n
return uint64(v), nil
}
// DecodeZigzag32 consumes an encoded 32-bit zig-zag varint from the buffer.
func (b *Buffer) DecodeZigzag32() (uint64, error) {
v, err := b.DecodeVarint()
if err != nil {
return 0, err
}
return uint64((uint32(v) >> 1) ^ uint32((int32(v&1)<<31)>>31)), nil
}
// DecodeZigzag64 consumes an encoded 64-bit zig-zag varint from the buffer.
func (b *Buffer) DecodeZigzag64() (uint64, error) {
v, err := b.DecodeVarint()
if err != nil {
return 0, err
}
return uint64((uint64(v) >> 1) ^ uint64((int64(v&1)<<63)>>63)), nil
}
// DecodeFixed32 consumes a 32-bit little-endian integer from the buffer.
func (b *Buffer) DecodeFixed32() (uint64, error) {
v, n := protowire.ConsumeFixed32(b.buf[b.idx:])
if n < 0 {
return 0, protowire.ParseError(n)
}
b.idx += n
return uint64(v), nil
}
// DecodeFixed64 consumes a 64-bit little-endian integer from the buffer.
func (b *Buffer) DecodeFixed64() (uint64, error) {
v, n := protowire.ConsumeFixed64(b.buf[b.idx:])
if n < 0 {
return 0, protowire.ParseError(n)
}
b.idx += n
return uint64(v), nil
}
// DecodeRawBytes consumes a length-prefixed raw bytes from the buffer.
// If alloc is specified, it returns a copy of the raw bytes
// rather than a sub-slice of the buffer.
func (b *Buffer) DecodeRawBytes(alloc bool) ([]byte, error) {
v, n := protowire.ConsumeBytes(b.buf[b.idx:])
if n < 0 {
return nil, protowire.ParseError(n)
}
b.idx += n
if alloc {
v = append([]byte(nil), v...)
}
return v, nil
}
// DecodeStringBytes consumes a length-prefixed raw bytes from the buffer.
// It does not validate whether the raw bytes contain valid UTF-8.
func (b *Buffer) DecodeStringBytes() (string, error) {
v, n := protowire.ConsumeString(b.buf[b.idx:])
if n < 0 {
return "", protowire.ParseError(n)
}
b.idx += n
return v, nil
}
// DecodeMessage consumes a length-prefixed message from the buffer.
// It does not reset m before unmarshaling.
func (b *Buffer) DecodeMessage(m Message) error {
v, err := b.DecodeRawBytes(false)
if err != nil {
return err
}
return UnmarshalMerge(v, m)
}
// DecodeGroup consumes a message group from the buffer.
// It assumes that the start group marker has already been consumed and
// consumes all bytes until (and including) the end group marker.
// It does not reset m before unmarshaling.
func (b *Buffer) DecodeGroup(m Message) error {
v, n, err := consumeGroup(b.buf[b.idx:])
if err != nil {
return err
}
b.idx += n
return UnmarshalMerge(v, m)
}
// consumeGroup parses b until it finds an end group marker, returning
// the raw bytes of the message (excluding the end group marker) and
// the total length of the message (including the end group marker).
func consumeGroup(b []byte) ([]byte, int, error) {
b0 := b
depth := 1 // assume this follows a start group marker
for {
_, wtyp, tagLen := protowire.ConsumeTag(b)
if tagLen < 0 {
return nil, 0, protowire.ParseError(tagLen)
}
b = b[tagLen:]
var valLen int
switch wtyp {
case protowire.VarintType:
_, valLen = protowire.ConsumeVarint(b)
case protowire.Fixed32Type:
_, valLen = protowire.ConsumeFixed32(b)
case protowire.Fixed64Type:
_, valLen = protowire.ConsumeFixed64(b)
case protowire.BytesType:
_, valLen = protowire.ConsumeBytes(b)
case protowire.StartGroupType:
depth++
case protowire.EndGroupType:
depth--
default:
return nil, 0, errors.New("proto: cannot parse reserved wire type")
}
if valLen < 0 {
return nil, 0, protowire.ParseError(valLen)
}
b = b[valLen:]
if depth == 0 {
return b0[:len(b0)-len(b)-tagLen], len(b0) - len(b), nil
}
}
}
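
A small sketch of round-tripping primitive values through Buffer, assuming the github.com/golang/protobuf/proto import path of this vendored copy:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	var b proto.Buffer

	// Encode a varint, a length-prefixed string, and a zig-zag signed value.
	b.EncodeVarint(42)
	b.EncodeStringBytes("hello")
	neg := int64(-3)
	b.EncodeZigzag64(uint64(neg))

	// Decode them back in the same order.
	v, _ := b.DecodeVarint()
	s, _ := b.DecodeStringBytes()
	z, _ := b.DecodeZigzag64()
	fmt.Println(v, s, int64(z)) // 42 hello -3
}
```
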


@@ -1,63 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"google.golang.org/protobuf/reflect/protoreflect"
)
// SetDefaults sets unpopulated scalar fields to their default values.
// Fields within a oneof are not set even if they have a default value.
// SetDefaults is recursively called upon any populated message fields.
func SetDefaults(m Message) {
if m != nil {
setDefaults(MessageReflect(m))
}
}
func setDefaults(m protoreflect.Message) {
fds := m.Descriptor().Fields()
for i := 0; i < fds.Len(); i++ {
fd := fds.Get(i)
if !m.Has(fd) {
if fd.HasDefault() && fd.ContainingOneof() == nil {
v := fd.Default()
if fd.Kind() == protoreflect.BytesKind {
v = protoreflect.ValueOf(append([]byte(nil), v.Bytes()...)) // copy the default bytes
}
m.Set(fd, v)
}
continue
}
}
m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
switch {
// Handle singular message.
case fd.Cardinality() != protoreflect.Repeated:
if fd.Message() != nil {
setDefaults(m.Get(fd).Message())
}
// Handle list of messages.
case fd.IsList():
if fd.Message() != nil {
ls := m.Get(fd).List()
for i := 0; i < ls.Len(); i++ {
setDefaults(ls.Get(i).Message())
}
}
// Handle map of messages.
case fd.IsMap():
if fd.MapValue().Message() != nil {
ms := m.Get(fd).Map()
ms.Range(func(_ protoreflect.MapKey, v protoreflect.Value) bool {
setDefaults(v.Message())
return true
})
}
}
return true
})
}


@@ -1,113 +0,0 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"encoding/json"
"errors"
"fmt"
"strconv"
protoV2 "google.golang.org/protobuf/proto"
)
var (
// Deprecated: No longer returned.
ErrNil = errors.New("proto: Marshal called with nil")
// Deprecated: No longer returned.
ErrTooLarge = errors.New("proto: message encodes to over 2 GB")
// Deprecated: No longer returned.
ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
)
// Deprecated: Do not use.
type Stats struct{ Emalloc, Dmalloc, Encode, Decode, Chit, Cmiss, Size uint64 }
// Deprecated: Do not use.
func GetStats() Stats { return Stats{} }
// Deprecated: Do not use.
func MarshalMessageSet(interface{}) ([]byte, error) {
return nil, errors.New("proto: not implemented")
}
// Deprecated: Do not use.
func UnmarshalMessageSet([]byte, interface{}) error {
return errors.New("proto: not implemented")
}
// Deprecated: Do not use.
func MarshalMessageSetJSON(interface{}) ([]byte, error) {
return nil, errors.New("proto: not implemented")
}
// Deprecated: Do not use.
func UnmarshalMessageSetJSON([]byte, interface{}) error {
return errors.New("proto: not implemented")
}
// Deprecated: Do not use.
func RegisterMessageSetType(Message, int32, string) {}
// Deprecated: Do not use.
func EnumName(m map[int32]string, v int32) string {
s, ok := m[v]
if ok {
return s
}
return strconv.Itoa(int(v))
}
// Deprecated: Do not use.
func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32, error) {
if data[0] == '"' {
// New style: enums are strings.
var repr string
if err := json.Unmarshal(data, &repr); err != nil {
return -1, err
}
val, ok := m[repr]
if !ok {
return 0, fmt.Errorf("unrecognized enum %s value %q", enumName, repr)
}
return val, nil
}
// Old style: enums are ints.
var val int32
if err := json.Unmarshal(data, &val); err != nil {
return 0, fmt.Errorf("cannot unmarshal %#q into enum %s", data, enumName)
}
return val, nil
}
// Deprecated: Do not use; this type existed for internal use only.
type InternalMessageInfo struct{}
// Deprecated: Do not use; this method existed for internal use only.
func (*InternalMessageInfo) DiscardUnknown(m Message) {
DiscardUnknown(m)
}
// Deprecated: Do not use; this method existed for internal use only.
func (*InternalMessageInfo) Marshal(b []byte, m Message, deterministic bool) ([]byte, error) {
return protoV2.MarshalOptions{Deterministic: deterministic}.MarshalAppend(b, MessageV2(m))
}
// Deprecated: Do not use; this method existed for internal use only.
func (*InternalMessageInfo) Merge(dst, src Message) {
protoV2.Merge(MessageV2(dst), MessageV2(src))
}
// Deprecated: Do not use; this method existed for internal use only.
func (*InternalMessageInfo) Size(m Message) int {
return protoV2.Size(MessageV2(m))
}
// Deprecated: Do not use; this method existed for internal use only.
func (*InternalMessageInfo) Unmarshal(m Message, b []byte) error {
return protoV2.UnmarshalOptions{Merge: true}.Unmarshal(b, MessageV2(m))
}
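
To illustrate the two encodings accepted by UnmarshalJSONEnum above, a sketch with a hypothetical enum value map of the kind protoc-gen-go generates:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	// Hypothetical value map for an enum named "Color", of the kind
	// protoc-gen-go generates for each enum type.
	colorValue := map[string]int32{"RED": 0, "GREEN": 1, "BLUE": 2}

	// New style: the enum is encoded as a JSON string.
	v, err := proto.UnmarshalJSONEnum(colorValue, []byte(`"GREEN"`), "Color")
	fmt.Println(v, err) // 1 <nil>

	// Old style: the enum is encoded as a JSON number.
	v, err = proto.UnmarshalJSONEnum(colorValue, []byte(`2`), "Color")
	fmt.Println(v, err) // 2 <nil>
}
```
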


@@ -1,58 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"google.golang.org/protobuf/reflect/protoreflect"
)
// DiscardUnknown recursively discards all unknown fields from this message
// and all embedded messages.
//
// When unmarshaling a message with unrecognized fields, the tags and values
// of such fields are preserved in the Message. This allows a later call to
// marshal to be able to produce a message that continues to have those
// unrecognized fields. To avoid this, DiscardUnknown is used to
// explicitly clear the unknown fields after unmarshaling.
func DiscardUnknown(m Message) {
if m != nil {
discardUnknown(MessageReflect(m))
}
}
func discardUnknown(m protoreflect.Message) {
m.Range(func(fd protoreflect.FieldDescriptor, val protoreflect.Value) bool {
switch {
// Handle singular message.
case fd.Cardinality() != protoreflect.Repeated:
if fd.Message() != nil {
discardUnknown(m.Get(fd).Message())
}
// Handle list of messages.
case fd.IsList():
if fd.Message() != nil {
ls := m.Get(fd).List()
for i := 0; i < ls.Len(); i++ {
discardUnknown(ls.Get(i).Message())
}
}
// Handle map of messages.
case fd.IsMap():
if fd.MapValue().Message() != nil {
ms := m.Get(fd).Map()
ms.Range(func(_ protoreflect.MapKey, v protoreflect.Value) bool {
discardUnknown(v.Message())
return true
})
}
}
return true
})
// Discard unknown fields.
if len(m.GetUnknown()) > 0 {
m.SetUnknown(nil)
}
}


@@ -1,356 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"errors"
"fmt"
"reflect"
"google.golang.org/protobuf/encoding/protowire"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
"google.golang.org/protobuf/runtime/protoiface"
"google.golang.org/protobuf/runtime/protoimpl"
)
type (
// ExtensionDesc represents an extension descriptor and
// is used to interact with an extension field in a message.
//
// Variables of this type are generated in code by protoc-gen-go.
ExtensionDesc = protoimpl.ExtensionInfo
// ExtensionRange represents a range of message extensions.
// Used in code generated by protoc-gen-go.
ExtensionRange = protoiface.ExtensionRangeV1
// Deprecated: Do not use; this is an internal type.
Extension = protoimpl.ExtensionFieldV1
// Deprecated: Do not use; this is an internal type.
XXX_InternalExtensions = protoimpl.ExtensionFields
)
// ErrMissingExtension is the error returned by GetExtension when the extension is not present.
var ErrMissingExtension = errors.New("proto: missing extension")
var errNotExtendable = errors.New("proto: not an extendable proto.Message")
// HasExtension reports whether the extension field is present in m
// either as an explicitly populated field or as an unknown field.
func HasExtension(m Message, xt *ExtensionDesc) (has bool) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return false
}
// Check whether any populated known field matches the field number.
xtd := xt.TypeDescriptor()
if isValidExtension(mr.Descriptor(), xtd) {
has = mr.Has(xtd)
} else {
mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
has = int32(fd.Number()) == xt.Field
return !has
})
}
// Check whether any unknown field matches the field number.
for b := mr.GetUnknown(); !has && len(b) > 0; {
num, _, n := protowire.ConsumeField(b)
has = int32(num) == xt.Field
b = b[n:]
}
return has
}
// ClearExtension removes the extension field from m
// either as an explicitly populated field or as an unknown field.
func ClearExtension(m Message, xt *ExtensionDesc) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return
}
xtd := xt.TypeDescriptor()
if isValidExtension(mr.Descriptor(), xtd) {
mr.Clear(xtd)
} else {
mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
if int32(fd.Number()) == xt.Field {
mr.Clear(fd)
return false
}
return true
})
}
clearUnknown(mr, fieldNum(xt.Field))
}
// ClearAllExtensions clears all extensions from m.
// This includes populated fields and unknown fields in the extension range.
func ClearAllExtensions(m Message) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return
}
mr.Range(func(fd protoreflect.FieldDescriptor, _ protoreflect.Value) bool {
if fd.IsExtension() {
mr.Clear(fd)
}
return true
})
clearUnknown(mr, mr.Descriptor().ExtensionRanges())
}
// GetExtension retrieves a proto2 extended field from m.
//
// If the descriptor is type complete (i.e., ExtensionDesc.ExtensionType is non-nil),
// then GetExtension parses the encoded field and returns a Go value of the specified type.
// If the field is not present, then the default value is returned (if one is specified),
// otherwise ErrMissingExtension is reported.
//
// If the descriptor is type incomplete (i.e., ExtensionDesc.ExtensionType is nil),
// then GetExtension returns the raw encoded bytes for the extension field.
func GetExtension(m Message, xt *ExtensionDesc) (interface{}, error) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
return nil, errNotExtendable
}
// Retrieve the unknown fields for this extension field.
var bo protoreflect.RawFields
for bi := mr.GetUnknown(); len(bi) > 0; {
num, _, n := protowire.ConsumeField(bi)
if int32(num) == xt.Field {
bo = append(bo, bi[:n]...)
}
bi = bi[n:]
}
// For type incomplete descriptors, only retrieve the unknown fields.
if xt.ExtensionType == nil {
return []byte(bo), nil
}
// If the extension field only exists as unknown fields, unmarshal it.
// This is rarely done since proto.Unmarshal eagerly unmarshals extensions.
xtd := xt.TypeDescriptor()
if !isValidExtension(mr.Descriptor(), xtd) {
return nil, fmt.Errorf("proto: bad extended type; %T does not extend %T", xt.ExtendedType, m)
}
if !mr.Has(xtd) && len(bo) > 0 {
m2 := mr.New()
if err := (proto.UnmarshalOptions{
Resolver: extensionResolver{xt},
}.Unmarshal(bo, m2.Interface())); err != nil {
return nil, err
}
if m2.Has(xtd) {
mr.Set(xtd, m2.Get(xtd))
clearUnknown(mr, fieldNum(xt.Field))
}
}
// Check whether the message has the extension field set or a default.
var pv protoreflect.Value
switch {
case mr.Has(xtd):
pv = mr.Get(xtd)
case xtd.HasDefault():
pv = xtd.Default()
default:
return nil, ErrMissingExtension
}
v := xt.InterfaceOf(pv)
rv := reflect.ValueOf(v)
if isScalarKind(rv.Kind()) {
rv2 := reflect.New(rv.Type())
rv2.Elem().Set(rv)
v = rv2.Interface()
}
return v, nil
}
// extensionResolver is a custom extension resolver that stores a single
// extension type that takes precedence over the global registry.
type extensionResolver struct{ xt protoreflect.ExtensionType }
func (r extensionResolver) FindExtensionByName(field protoreflect.FullName) (protoreflect.ExtensionType, error) {
if xtd := r.xt.TypeDescriptor(); xtd.FullName() == field {
return r.xt, nil
}
return protoregistry.GlobalTypes.FindExtensionByName(field)
}
func (r extensionResolver) FindExtensionByNumber(message protoreflect.FullName, field protoreflect.FieldNumber) (protoreflect.ExtensionType, error) {
if xtd := r.xt.TypeDescriptor(); xtd.ContainingMessage().FullName() == message && xtd.Number() == field {
return r.xt, nil
}
return protoregistry.GlobalTypes.FindExtensionByNumber(message, field)
}
// GetExtensions returns a list of the extensions values present in m,
// corresponding with the provided list of extension descriptors, xts.
// If an extension is missing in m, the corresponding value is nil.
func GetExtensions(m Message, xts []*ExtensionDesc) ([]interface{}, error) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return nil, errNotExtendable
}
vs := make([]interface{}, len(xts))
for i, xt := range xts {
v, err := GetExtension(m, xt)
if err != nil {
if err == ErrMissingExtension {
continue
}
return vs, err
}
vs[i] = v
}
return vs, nil
}
// SetExtension sets an extension field in m to the provided value.
func SetExtension(m Message, xt *ExtensionDesc, v interface{}) error {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
return errNotExtendable
}
rv := reflect.ValueOf(v)
if reflect.TypeOf(v) != reflect.TypeOf(xt.ExtensionType) {
return fmt.Errorf("proto: bad extension value type. got: %T, want: %T", v, xt.ExtensionType)
}
if rv.Kind() == reflect.Ptr {
if rv.IsNil() {
return fmt.Errorf("proto: SetExtension called with nil value of type %T", v)
}
if isScalarKind(rv.Elem().Kind()) {
v = rv.Elem().Interface()
}
}
xtd := xt.TypeDescriptor()
if !isValidExtension(mr.Descriptor(), xtd) {
return fmt.Errorf("proto: bad extended type; %T does not extend %T", xt.ExtendedType, m)
}
mr.Set(xtd, xt.ValueOf(v))
clearUnknown(mr, fieldNum(xt.Field))
return nil
}
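// Illustrative usage sketch: setting and reading back a string extension with
// this legacy API. pb.MyMessage and pb.E_MyExtension are hypothetical
// generated identifiers, not part of this package.
//
//	msg := new(pb.MyMessage)
//	s := "hello"
//	if err := SetExtension(msg, pb.E_MyExtension, &s); err != nil {
//		// handle err
//	}
//	if v, err := GetExtension(msg, pb.E_MyExtension); err == nil {
//		fmt.Println(*v.(*string)) // scalar extensions are returned as *T
//	}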
// SetRawExtension inserts b into the unknown fields of m.
//
// Deprecated: Use Message.ProtoReflect.SetUnknown instead.
func SetRawExtension(m Message, fnum int32, b []byte) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return
}
// Verify that the raw field is valid.
for b0 := b; len(b0) > 0; {
num, _, n := protowire.ConsumeField(b0)
if int32(num) != fnum {
panic(fmt.Sprintf("mismatching field number: got %d, want %d", num, fnum))
}
b0 = b0[n:]
}
ClearExtension(m, &ExtensionDesc{Field: fnum})
mr.SetUnknown(append(mr.GetUnknown(), b...))
}
// ExtensionDescs returns a list of extension descriptors found in m,
// containing descriptors for both populated extension fields in m and
// also unknown fields of m that are in the extension range.
// For the latter case, a type-incomplete descriptor is provided where only
// the ExtensionDesc.Field field is populated.
// The order of the extension descriptors is undefined.
func ExtensionDescs(m Message) ([]*ExtensionDesc, error) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() || mr.Descriptor().ExtensionRanges().Len() == 0 {
return nil, errNotExtendable
}
// Collect a set of known extension descriptors.
extDescs := make(map[protoreflect.FieldNumber]*ExtensionDesc)
mr.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
if fd.IsExtension() {
xt := fd.(protoreflect.ExtensionTypeDescriptor)
if xd, ok := xt.Type().(*ExtensionDesc); ok {
extDescs[fd.Number()] = xd
}
}
return true
})
// Collect a set of unknown extension descriptors.
extRanges := mr.Descriptor().ExtensionRanges()
for b := mr.GetUnknown(); len(b) > 0; {
num, _, n := protowire.ConsumeField(b)
if extRanges.Has(num) && extDescs[num] == nil {
extDescs[num] = nil
}
b = b[n:]
}
// Transpose the set of descriptors into a list.
var xts []*ExtensionDesc
for num, xt := range extDescs {
if xt == nil {
xt = &ExtensionDesc{Field: int32(num)}
}
xts = append(xts, xt)
}
return xts, nil
}
// isValidExtension reports whether xtd is a valid extension descriptor for md.
func isValidExtension(md protoreflect.MessageDescriptor, xtd protoreflect.ExtensionTypeDescriptor) bool {
return xtd.ContainingMessage() == md && md.ExtensionRanges().Has(xtd.Number())
}
// isScalarKind reports whether k is a protobuf scalar kind (except bytes).
// This function exists for historical reasons since the representation of
// scalars differs between v1 and v2, where v1 uses *T and v2 uses T.
func isScalarKind(k reflect.Kind) bool {
switch k {
case reflect.Bool, reflect.Int32, reflect.Int64, reflect.Uint32, reflect.Uint64, reflect.Float32, reflect.Float64, reflect.String:
return true
default:
return false
}
}
// clearUnknown removes unknown fields from m where remover.Has reports true.
func clearUnknown(m protoreflect.Message, remover interface {
Has(protoreflect.FieldNumber) bool
}) {
var bo protoreflect.RawFields
for bi := m.GetUnknown(); len(bi) > 0; {
num, _, n := protowire.ConsumeField(bi)
if !remover.Has(num) {
bo = append(bo, bi[:n]...)
}
bi = bi[n:]
}
if bi := m.GetUnknown(); len(bi) != len(bo) {
m.SetUnknown(bo)
}
}
type fieldNum protoreflect.FieldNumber
func (n1 fieldNum) Has(n2 protoreflect.FieldNumber) bool {
return protoreflect.FieldNumber(n1) == n2
}


@@ -1,306 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"fmt"
"reflect"
"strconv"
"strings"
"sync"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/runtime/protoimpl"
)
// StructProperties represents protocol buffer type information for a
// generated protobuf message in the open-struct API.
//
// Deprecated: Do not use.
type StructProperties struct {
// Prop are the properties for each field.
//
// Fields belonging to a oneof are stored in OneofTypes instead, with a
// single Properties representing the parent oneof held here.
//
// The order of Prop matches the order of fields in the Go struct.
// Struct fields that are not related to protobufs have a "XXX_" prefix
// in the Properties.Name and must be ignored by the user.
Prop []*Properties
// OneofTypes contains information about the oneof fields in this message.
// It is keyed by the protobuf field name.
OneofTypes map[string]*OneofProperties
}
// Properties represents the type information for a protobuf message field.
//
// Deprecated: Do not use.
type Properties struct {
// Name is a placeholder name with little meaningful semantic value.
// If the name has an "XXX_" prefix, the entire Properties must be ignored.
Name string
// OrigName is the protobuf field name or oneof name.
OrigName string
// JSONName is the JSON name for the protobuf field.
JSONName string
// Enum is a placeholder name for enums.
// For historical reasons, this is neither the Go name for the enum,
// nor the protobuf name for the enum.
Enum string // Deprecated: Do not use.
// Weak contains the full name of the weakly referenced message.
Weak string
// Wire is a string representation of the wire type.
Wire string
// WireType is the protobuf wire type for the field.
WireType int
// Tag is the protobuf field number.
Tag int
// Required reports whether this is a required field.
Required bool
// Optional reports whether this is an optional field.
Optional bool
// Repeated reports whether this is a repeated field.
Repeated bool
// Packed reports whether this is a packed repeated field of scalars.
Packed bool
// Proto3 reports whether this field operates under the proto3 syntax.
Proto3 bool
// Oneof reports whether this field belongs within a oneof.
Oneof bool
// Default is the default value in string form.
Default string
// HasDefault reports whether the field has a default value.
HasDefault bool
// MapKeyProp is the properties for the key field for a map field.
MapKeyProp *Properties
// MapValProp is the properties for the value field for a map field.
MapValProp *Properties
}
// OneofProperties represents the type information for a protobuf oneof.
//
// Deprecated: Do not use.
type OneofProperties struct {
// Type is a pointer to the generated wrapper type for the field value.
// This is nil for messages that are not in the open-struct API.
Type reflect.Type
// Field is the index into StructProperties.Prop for the containing oneof.
Field int
// Prop is the properties for the field.
Prop *Properties
}
// String formats the properties in the protobuf struct field tag style.
func (p *Properties) String() string {
s := p.Wire
s += "," + strconv.Itoa(p.Tag)
if p.Required {
s += ",req"
}
if p.Optional {
s += ",opt"
}
if p.Repeated {
s += ",rep"
}
if p.Packed {
s += ",packed"
}
s += ",name=" + p.OrigName
if p.JSONName != "" {
s += ",json=" + p.JSONName
}
if len(p.Enum) > 0 {
s += ",enum=" + p.Enum
}
if len(p.Weak) > 0 {
s += ",weak=" + p.Weak
}
if p.Proto3 {
s += ",proto3"
}
if p.Oneof {
s += ",oneof"
}
if p.HasDefault {
s += ",def=" + p.Default
}
return s
}
// Parse populates p by parsing a string in the protobuf struct field tag style.
func (p *Properties) Parse(tag string) {
// For example: "bytes,49,opt,name=foo,def=hello!"
for len(tag) > 0 {
i := strings.IndexByte(tag, ',')
if i < 0 {
i = len(tag)
}
switch s := tag[:i]; {
case strings.HasPrefix(s, "name="):
p.OrigName = s[len("name="):]
case strings.HasPrefix(s, "json="):
p.JSONName = s[len("json="):]
case strings.HasPrefix(s, "enum="):
p.Enum = s[len("enum="):]
case strings.HasPrefix(s, "weak="):
p.Weak = s[len("weak="):]
case strings.Trim(s, "0123456789") == "":
n, _ := strconv.ParseUint(s, 10, 32)
p.Tag = int(n)
case s == "opt":
p.Optional = true
case s == "req":
p.Required = true
case s == "rep":
p.Repeated = true
case s == "varint" || s == "zigzag32" || s == "zigzag64":
p.Wire = s
p.WireType = WireVarint
case s == "fixed32":
p.Wire = s
p.WireType = WireFixed32
case s == "fixed64":
p.Wire = s
p.WireType = WireFixed64
case s == "bytes":
p.Wire = s
p.WireType = WireBytes
case s == "group":
p.Wire = s
p.WireType = WireStartGroup
case s == "packed":
p.Packed = true
case s == "proto3":
p.Proto3 = true
case s == "oneof":
p.Oneof = true
case strings.HasPrefix(s, "def="):
// The default tag is special in that everything afterwards is the
// default regardless of the presence of commas.
p.HasDefault = true
p.Default, i = tag[len("def="):], len(tag)
}
tag = strings.TrimPrefix(tag[i:], ",")
}
}
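// For reference, a sketch of the tag syntax Parse consumes; the tag below is
// hypothetical but uses only the keys handled above:
//
//	var p Properties
//	p.Parse("bytes,49,opt,name=foo,json=fooBar,def=hello!")
//	// p.Wire == "bytes", p.WireType == WireBytes, p.Tag == 49,
//	// p.Optional == true, p.OrigName == "foo", p.JSONName == "fooBar",
//	// p.HasDefault == true, p.Default == "hello!"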
// Init populates the properties from a protocol buffer struct tag.
//
// Deprecated: Do not use.
func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) {
p.Name = name
p.OrigName = name
if tag == "" {
return
}
p.Parse(tag)
if typ != nil && typ.Kind() == reflect.Map {
p.MapKeyProp = new(Properties)
p.MapKeyProp.Init(nil, "Key", f.Tag.Get("protobuf_key"), nil)
p.MapValProp = new(Properties)
p.MapValProp.Init(nil, "Value", f.Tag.Get("protobuf_val"), nil)
}
}
var propertiesCache sync.Map // map[reflect.Type]*StructProperties
// GetProperties returns the list of properties for the type represented by t,
// which must be a generated protocol buffer message in the open-struct API,
// where protobuf message fields are represented by exported Go struct fields.
//
// Deprecated: Use protobuf reflection instead.
func GetProperties(t reflect.Type) *StructProperties {
if p, ok := propertiesCache.Load(t); ok {
return p.(*StructProperties)
}
p, _ := propertiesCache.LoadOrStore(t, newProperties(t))
return p.(*StructProperties)
}
func newProperties(t reflect.Type) *StructProperties {
if t.Kind() != reflect.Struct {
panic(fmt.Sprintf("%v is not a generated message in the open-struct API", t))
}
var hasOneof bool
prop := new(StructProperties)
// Construct a list of properties for each field in the struct.
for i := 0; i < t.NumField(); i++ {
p := new(Properties)
f := t.Field(i)
tagField := f.Tag.Get("protobuf")
p.Init(f.Type, f.Name, tagField, &f)
tagOneof := f.Tag.Get("protobuf_oneof")
if tagOneof != "" {
hasOneof = true
p.OrigName = tagOneof
}
// Rename unrelated struct fields with the "XXX_" prefix since so much
// user code simply checks for this to exclude special fields.
if tagField == "" && tagOneof == "" && !strings.HasPrefix(p.Name, "XXX_") {
p.Name = "XXX_" + p.Name
p.OrigName = "XXX_" + p.OrigName
} else if p.Weak != "" {
p.Name = p.OrigName // avoid possible "XXX_" prefix on weak field
}
prop.Prop = append(prop.Prop, p)
}
// Construct a mapping of oneof field names to properties.
if hasOneof {
var oneofWrappers []interface{}
if fn, ok := reflect.PtrTo(t).MethodByName("XXX_OneofFuncs"); ok {
oneofWrappers = fn.Func.Call([]reflect.Value{reflect.Zero(fn.Type.In(0))})[3].Interface().([]interface{})
}
if fn, ok := reflect.PtrTo(t).MethodByName("XXX_OneofWrappers"); ok {
oneofWrappers = fn.Func.Call([]reflect.Value{reflect.Zero(fn.Type.In(0))})[0].Interface().([]interface{})
}
if m, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(protoreflect.ProtoMessage); ok {
if m, ok := m.ProtoReflect().(interface{ ProtoMessageInfo() *protoimpl.MessageInfo }); ok {
oneofWrappers = m.ProtoMessageInfo().OneofWrappers
}
}
prop.OneofTypes = make(map[string]*OneofProperties)
for _, wrapper := range oneofWrappers {
p := &OneofProperties{
Type: reflect.ValueOf(wrapper).Type(), // *T
Prop: new(Properties),
}
f := p.Type.Elem().Field(0)
p.Prop.Name = f.Name
p.Prop.Parse(f.Tag.Get("protobuf"))
// Determine the struct field that contains this oneof.
// Each wrapper is assignable to exactly one parent field.
var foundOneof bool
for i := 0; i < t.NumField() && !foundOneof; i++ {
if p.Type.AssignableTo(t.Field(i).Type) {
p.Field = i
foundOneof = true
}
}
if !foundOneof {
panic(fmt.Sprintf("%v is not a generated message in the open-struct API", t))
}
prop.OneofTypes[p.Prop.OrigName] = p
}
}
return prop
}
func (sp *StructProperties) Len() int { return len(sp.Prop) }
func (sp *StructProperties) Less(i, j int) bool { return false }
func (sp *StructProperties) Swap(i, j int) { return }


@@ -1,167 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package proto provides functionality for handling protocol buffer messages.
// In particular, it provides marshaling and unmarshaling between a protobuf
// message and the binary wire format.
//
// See https://developers.google.com/protocol-buffers/docs/gotutorial for
// more information.
//
// Deprecated: Use the "google.golang.org/protobuf/proto" package instead.
package proto
import (
protoV2 "google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/runtime/protoiface"
"google.golang.org/protobuf/runtime/protoimpl"
)
const (
ProtoPackageIsVersion1 = true
ProtoPackageIsVersion2 = true
ProtoPackageIsVersion3 = true
ProtoPackageIsVersion4 = true
)
// GeneratedEnum is any enum type generated by protoc-gen-go
// which is a named int32 kind.
// This type exists for documentation purposes.
type GeneratedEnum interface{}
// GeneratedMessage is any message type generated by protoc-gen-go
// which is a pointer to a named struct kind.
// This type exists for documentation purposes.
type GeneratedMessage interface{}
// Message is a protocol buffer message.
//
// This is the v1 version of the message interface and is marginally better
// than an empty interface as it lacks any method to programmatically interact
// with the contents of the message.
//
// A v2 message is declared in "google.golang.org/protobuf/proto".Message and
// exposes protobuf reflection as a first-class feature of the interface.
//
// To convert a v1 message to a v2 message, use the MessageV2 function.
// To convert a v2 message to a v1 message, use the MessageV1 function.
type Message = protoiface.MessageV1
// MessageV1 converts either a v1 or v2 message to a v1 message.
// It returns nil if m is nil.
func MessageV1(m GeneratedMessage) protoiface.MessageV1 {
return protoimpl.X.ProtoMessageV1Of(m)
}
// MessageV2 converts either a v1 or v2 message to a v2 message.
// It returns nil if m is nil.
func MessageV2(m GeneratedMessage) protoV2.Message {
return protoimpl.X.ProtoMessageV2Of(m)
}
// MessageReflect returns a reflective view for a message.
// It returns nil if m is nil.
func MessageReflect(m Message) protoreflect.Message {
return protoimpl.X.MessageOf(m)
}
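// Illustrative sketch of moving between the v1 and v2 message worlds;
// pb.MyMessage is a hypothetical generated type:
//
//	m1 := new(pb.MyMessage)  // v1-style *T message
//	m2 := MessageV2(m1)      // use it with google.golang.org/protobuf APIs
//	_ = MessageReflect(m1)   // or obtain the protoreflect view directly
//	back := MessageV1(m2)    // and convert a v2 message back when needed
//	_ = back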
// Marshaler is implemented by messages that can marshal themselves.
// This interface is used by the following functions: Size, Marshal,
// Buffer.Marshal, and Buffer.EncodeMessage.
//
// Deprecated: Do not implement.
type Marshaler interface {
// Marshal formats the encoded bytes of the message.
// It should be deterministic and emit valid protobuf wire data.
// The caller takes ownership of the returned buffer.
Marshal() ([]byte, error)
}
// Unmarshaler is implemented by messages that can unmarshal themselves.
// This interface is used by the following functions: Unmarshal, UnmarshalMerge,
// Buffer.Unmarshal, Buffer.DecodeMessage, and Buffer.DecodeGroup.
//
// Deprecated: Do not implement.
type Unmarshaler interface {
// Unmarshal parses the encoded bytes of the protobuf wire input.
// The provided buffer is only valid for the duration of the method call.
// It should not reset the receiver message.
Unmarshal([]byte) error
}
// Merger is implemented by messages that can merge themselves.
// This interface is used by the following functions: Clone and Merge.
//
// Deprecated: Do not implement.
type Merger interface {
// Merge merges the contents of src into the receiver message.
// It clones all data structures in src such that it aliases no mutable
// memory referenced by src.
Merge(src Message)
}
// RequiredNotSetError is an error type returned when
// marshaling or unmarshaling a message with missing required fields.
type RequiredNotSetError struct {
err error
}
func (e *RequiredNotSetError) Error() string {
if e.err != nil {
return e.err.Error()
}
return "proto: required field not set"
}
func (e *RequiredNotSetError) RequiredNotSet() bool {
return true
}
func checkRequiredNotSet(m protoV2.Message) error {
if err := protoV2.CheckInitialized(m); err != nil {
return &RequiredNotSetError{err: err}
}
return nil
}
// Clone returns a deep copy of src.
func Clone(src Message) Message {
return MessageV1(protoV2.Clone(MessageV2(src)))
}
// Merge merges src into dst, which must be messages of the same type.
//
// Populated scalar fields in src are copied to dst, while populated
// singular messages in src are merged into dst by recursively calling Merge.
// The elements of every list field in src are appended to the corresponding
// list fields in dst. The entries of every map field in src are copied into
// the corresponding map field in dst, possibly replacing existing entries.
// The unknown fields of src are appended to the unknown fields of dst.
func Merge(dst, src Message) {
protoV2.Merge(MessageV2(dst), MessageV2(src))
}
// Equal reports whether two messages are equal.
// If two messages marshal to the same bytes under deterministic serialization,
// then Equal is guaranteed to report true.
//
// Two messages are equal if they are the same protobuf message type,
// have the same set of populated known and extension field values,
// and the same set of unknown field values.
//
// Scalar values are compared with the equivalent of the == operator in Go,
// except bytes values which are compared using bytes.Equal and
// floating point values which specially treat NaNs as equal.
// Message values are compared by recursively calling Equal.
// Lists are equal if each element value is also equal.
// Maps are equal if they have the same set of keys, where the pair of values
// for each key is also equal.
func Equal(x, y Message) bool {
return protoV2.Equal(MessageV2(x), MessageV2(y))
}
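// Illustrative sketch of Clone, Merge and Equal together; pb.MyMessage is a
// hypothetical generated type and src, extra are values of it:
//
//	dst := Clone(src).(*pb.MyMessage) // deep copy; Equal(dst, src) is true here
//	Merge(dst, extra)                 // fold the populated fields of extra into dst
//	_ = Equal(dst, src)               // likely false once extra contributed fields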
func isMessageSet(md protoreflect.MessageDescriptor) bool {
ms, ok := md.(interface{ IsMessageSet() bool })
return ok && ms.IsMessageSet()
}


@@ -1,323 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"bytes"
"compress/gzip"
"fmt"
"io/ioutil"
"reflect"
"strings"
"sync"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
"google.golang.org/protobuf/runtime/protoimpl"
)
// filePath is the path to the proto source file.
type filePath = string // e.g., "google/protobuf/descriptor.proto"
// fileDescGZIP is the compressed contents of the encoded FileDescriptorProto.
type fileDescGZIP = []byte
var fileCache sync.Map // map[filePath]fileDescGZIP
// RegisterFile is called from generated code to register the compressed
// FileDescriptorProto with the file path for a proto source file.
//
// Deprecated: Use protoregistry.GlobalFiles.RegisterFile instead.
func RegisterFile(s filePath, d fileDescGZIP) {
// Decompress the descriptor.
zr, err := gzip.NewReader(bytes.NewReader(d))
if err != nil {
panic(fmt.Sprintf("proto: invalid compressed file descriptor: %v", err))
}
b, err := ioutil.ReadAll(zr)
if err != nil {
panic(fmt.Sprintf("proto: invalid compressed file descriptor: %v", err))
}
// Construct a protoreflect.FileDescriptor from the raw descriptor.
// Note that DescBuilder.Build automatically registers the constructed
// file descriptor with the v2 registry.
protoimpl.DescBuilder{RawDescriptor: b}.Build()
// Locally cache the raw descriptor form for the file.
fileCache.Store(s, d)
}
// FileDescriptor returns the compressed FileDescriptorProto given the file path
// for a proto source file. It returns nil if not found.
//
// Deprecated: Use protoregistry.GlobalFiles.FindFileByPath instead.
func FileDescriptor(s filePath) fileDescGZIP {
if v, ok := fileCache.Load(s); ok {
return v.(fileDescGZIP)
}
// Find the descriptor in the v2 registry.
var b []byte
if fd, _ := protoregistry.GlobalFiles.FindFileByPath(s); fd != nil {
if fd, ok := fd.(interface{ ProtoLegacyRawDesc() []byte }); ok {
b = fd.ProtoLegacyRawDesc()
} else {
// TODO: Use protodesc.ToFileDescriptorProto to construct
// a descriptorpb.FileDescriptorProto and marshal it.
// However, doing so causes the proto package to have a dependency
// on descriptorpb, leading to cyclic dependency issues.
}
}
// Locally cache the raw descriptor form for the file.
if len(b) > 0 {
v, _ := fileCache.LoadOrStore(s, protoimpl.X.CompressGZIP(b))
return v.(fileDescGZIP)
}
return nil
}
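// Illustrative sketch of reading back a registered descriptor; the file path is
// hypothetical and the gzip handling mirrors RegisterFile above:
//
//	if gz := FileDescriptor("example/app.proto"); gz != nil {
//		zr, _ := gzip.NewReader(bytes.NewReader(gz))
//		raw, _ := ioutil.ReadAll(zr) // wire-format FileDescriptorProto
//		_ = raw
//	}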
// enumName is the name of an enum. For historical reasons, the enum name is
// neither the full Go name nor the full protobuf name of the enum.
// The name is the dot-separated combination of just the proto package that the
// enum is declared within followed by the Go type name of the generated enum.
type enumName = string // e.g., "my.proto.package.GoMessage_GoEnum"
// enumsByName maps enum values by name to their numeric counterpart.
type enumsByName = map[string]int32
// enumsByNumber maps enum values by number to their name counterpart.
type enumsByNumber = map[int32]string
var enumCache sync.Map // map[enumName]enumsByName
var numFilesCache sync.Map // map[protoreflect.FullName]int
// RegisterEnum is called from the generated code to register the mapping of
// enum value names to enum numbers for the enum identified by s.
//
// Deprecated: Use protoregistry.GlobalTypes.RegisterEnum instead.
func RegisterEnum(s enumName, _ enumsByNumber, m enumsByName) {
if _, ok := enumCache.Load(s); ok {
panic("proto: duplicate enum registered: " + s)
}
enumCache.Store(s, m)
// This does not forward registration to the v2 registry since this API
// lacks sufficient information to construct a complete v2 enum descriptor.
}
// EnumValueMap returns the mapping from enum value names to enum numbers for
// the enum of the given name. It returns nil if not found.
//
// Deprecated: Use protoregistry.GlobalTypes.FindEnumByName instead.
func EnumValueMap(s enumName) enumsByName {
if v, ok := enumCache.Load(s); ok {
return v.(enumsByName)
}
// Check whether the cache is stale. If the number of files in the current
// package differs, then it means that some enums may have been recently
// registered upstream that we do not know about.
var protoPkg protoreflect.FullName
if i := strings.LastIndexByte(s, '.'); i >= 0 {
protoPkg = protoreflect.FullName(s[:i])
}
v, _ := numFilesCache.Load(protoPkg)
numFiles, _ := v.(int)
if protoregistry.GlobalFiles.NumFilesByPackage(protoPkg) == numFiles {
return nil // cache is up-to-date; was not found earlier
}
// Update the enum cache for all enums declared in the given proto package.
numFiles = 0
protoregistry.GlobalFiles.RangeFilesByPackage(protoPkg, func(fd protoreflect.FileDescriptor) bool {
walkEnums(fd, func(ed protoreflect.EnumDescriptor) {
name := protoimpl.X.LegacyEnumName(ed)
if _, ok := enumCache.Load(name); !ok {
m := make(enumsByName)
evs := ed.Values()
for i := evs.Len() - 1; i >= 0; i-- {
ev := evs.Get(i)
m[string(ev.Name())] = int32(ev.Number())
}
enumCache.LoadOrStore(name, m)
}
})
numFiles++
return true
})
numFilesCache.Store(protoPkg, numFiles)
// Check cache again for enum map.
if v, ok := enumCache.Load(s); ok {
return v.(enumsByName)
}
return nil
}
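// Illustrative sketch; the enum name is hypothetical but follows the legacy
// "proto package" + "Go enum type" form described above:
//
//	if m := EnumValueMap("example.app.MyMessage_Mood"); m != nil {
//		n := m["MOOD_HAPPY"] // numeric value for that enum value name
//		_ = n
//	}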
// walkEnums recursively walks all enums declared in d.
func walkEnums(d interface {
Enums() protoreflect.EnumDescriptors
Messages() protoreflect.MessageDescriptors
}, f func(protoreflect.EnumDescriptor)) {
eds := d.Enums()
for i := eds.Len() - 1; i >= 0; i-- {
f(eds.Get(i))
}
mds := d.Messages()
for i := mds.Len() - 1; i >= 0; i-- {
walkEnums(mds.Get(i), f)
}
}
// messageName is the full name of a protobuf message.
type messageName = string
var messageTypeCache sync.Map // map[messageName]reflect.Type
// RegisterType is called from generated code to register the message Go type
// for a message of the given name.
//
// Deprecated: Use protoregistry.GlobalTypes.RegisterMessage instead.
func RegisterType(m Message, s messageName) {
mt := protoimpl.X.LegacyMessageTypeOf(m, protoreflect.FullName(s))
if err := protoregistry.GlobalTypes.RegisterMessage(mt); err != nil {
panic(err)
}
messageTypeCache.Store(s, reflect.TypeOf(m))
}
// RegisterMapType is called from generated code to register the Go map type
// for a protobuf message representing a map entry.
//
// Deprecated: Do not use.
func RegisterMapType(m interface{}, s messageName) {
t := reflect.TypeOf(m)
if t.Kind() != reflect.Map {
panic(fmt.Sprintf("invalid map kind: %v", t))
}
if _, ok := messageTypeCache.Load(s); ok {
panic(fmt.Errorf("proto: duplicate proto message registered: %s", s))
}
messageTypeCache.Store(s, t)
}
// MessageType returns the message type for a named message.
// It returns nil if not found.
//
// Deprecated: Use protoregistry.GlobalTypes.FindMessageByName instead.
func MessageType(s messageName) reflect.Type {
if v, ok := messageTypeCache.Load(s); ok {
return v.(reflect.Type)
}
// Derive the message type from the v2 registry.
var t reflect.Type
if mt, _ := protoregistry.GlobalTypes.FindMessageByName(protoreflect.FullName(s)); mt != nil {
t = messageGoType(mt)
}
// If we could not get a concrete type, it is possible that it is a
// pseudo-message for a map entry.
if t == nil {
d, _ := protoregistry.GlobalFiles.FindDescriptorByName(protoreflect.FullName(s))
if md, _ := d.(protoreflect.MessageDescriptor); md != nil && md.IsMapEntry() {
kt := goTypeForField(md.Fields().ByNumber(1))
vt := goTypeForField(md.Fields().ByNumber(2))
t = reflect.MapOf(kt, vt)
}
}
// Locally cache the message type for the given name.
if t != nil {
v, _ := messageTypeCache.LoadOrStore(s, t)
return v.(reflect.Type)
}
return nil
}
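// Illustrative sketch of constructing a message from its registered Go type;
// the message name is hypothetical:
//
//	if t := MessageType("example.app.MyMessage"); t != nil {
//		m := reflect.New(t.Elem()).Interface().(Message) // fresh *MyMessage
//		_ = m
//	}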
func goTypeForField(fd protoreflect.FieldDescriptor) reflect.Type {
switch k := fd.Kind(); k {
case protoreflect.EnumKind:
if et, _ := protoregistry.GlobalTypes.FindEnumByName(fd.Enum().FullName()); et != nil {
return enumGoType(et)
}
return reflect.TypeOf(protoreflect.EnumNumber(0))
case protoreflect.MessageKind, protoreflect.GroupKind:
if mt, _ := protoregistry.GlobalTypes.FindMessageByName(fd.Message().FullName()); mt != nil {
return messageGoType(mt)
}
return reflect.TypeOf((*protoreflect.Message)(nil)).Elem()
default:
return reflect.TypeOf(fd.Default().Interface())
}
}
func enumGoType(et protoreflect.EnumType) reflect.Type {
return reflect.TypeOf(et.New(0))
}
func messageGoType(mt protoreflect.MessageType) reflect.Type {
return reflect.TypeOf(MessageV1(mt.Zero().Interface()))
}
// MessageName returns the full protobuf name for the given message type.
//
// Deprecated: Use protoreflect.MessageDescriptor.FullName instead.
func MessageName(m Message) messageName {
if m == nil {
return ""
}
if m, ok := m.(interface{ XXX_MessageName() messageName }); ok {
return m.XXX_MessageName()
}
return messageName(protoimpl.X.MessageDescriptorOf(m).FullName())
}
// RegisterExtension is called from the generated code to register
// the extension descriptor.
//
// Deprecated: Use protoregistry.GlobalTypes.RegisterExtension instead.
func RegisterExtension(d *ExtensionDesc) {
if err := protoregistry.GlobalTypes.RegisterExtension(d); err != nil {
panic(err)
}
}
type extensionsByNumber = map[int32]*ExtensionDesc
var extensionCache sync.Map // map[messageName]extensionsByNumber
// RegisteredExtensions returns a map of the registered extensions for the
// provided protobuf message, indexed by the extension field number.
//
// Deprecated: Use protoregistry.GlobalTypes.RangeExtensionsByMessage instead.
func RegisteredExtensions(m Message) extensionsByNumber {
// Check whether the cache is stale. If the number of extensions for
// the given message differs, then it means that some extensions were
// recently registered upstream that we do not know about.
s := MessageName(m)
v, _ := extensionCache.Load(s)
xs, _ := v.(extensionsByNumber)
if protoregistry.GlobalTypes.NumExtensionsByMessage(protoreflect.FullName(s)) == len(xs) {
return xs // cache is up-to-date
}
// Cache is stale, re-compute the extensions map.
xs = make(extensionsByNumber)
protoregistry.GlobalTypes.RangeExtensionsByMessage(protoreflect.FullName(s), func(xt protoreflect.ExtensionType) bool {
if xd, ok := xt.(*ExtensionDesc); ok {
xs[int32(xt.TypeDescriptor().Number())] = xd
} else {
// TODO: This implies that the protoreflect.ExtensionType is a
// custom type not generated by protoc-gen-go. We could try and
// convert the type to an ExtensionDesc.
}
return true
})
extensionCache.Store(s, xs)
return xs
}


@@ -1,801 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"encoding"
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"unicode/utf8"
"google.golang.org/protobuf/encoding/prototext"
protoV2 "google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
)
const wrapTextUnmarshalV2 = false
// ParseError is returned by UnmarshalText.
type ParseError struct {
Message string
// Deprecated: Do not use.
Line, Offset int
}
func (e *ParseError) Error() string {
if wrapTextUnmarshalV2 {
return e.Message
}
if e.Line == 1 {
return fmt.Sprintf("line 1.%d: %v", e.Offset, e.Message)
}
return fmt.Sprintf("line %d: %v", e.Line, e.Message)
}
// UnmarshalText parses a proto text formatted string into m.
func UnmarshalText(s string, m Message) error {
if u, ok := m.(encoding.TextUnmarshaler); ok {
return u.UnmarshalText([]byte(s))
}
m.Reset()
mi := MessageV2(m)
if wrapTextUnmarshalV2 {
err := prototext.UnmarshalOptions{
AllowPartial: true,
}.Unmarshal([]byte(s), mi)
if err != nil {
return &ParseError{Message: err.Error()}
}
return checkRequiredNotSet(mi)
} else {
if err := newTextParser(s).unmarshalMessage(mi.ProtoReflect(), ""); err != nil {
return err
}
return checkRequiredNotSet(mi)
}
}
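// Illustrative sketch of the text format accepted here; pb.MyMessage is a
// hypothetical generated type with fields "name" and "count":
//
//	m := new(pb.MyMessage)
//	err := UnmarshalText(`name: "alice" count: 3`, m)
//	_ = err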
type textParser struct {
s string // remaining input
done bool // whether the parsing is finished (success or error)
backed bool // whether back() was called
offset, line int
cur token
}
type token struct {
value string
err *ParseError
line int // line number
offset int // byte number from start of input, not start of line
unquoted string // the unquoted version of value, if it was a quoted string
}
func newTextParser(s string) *textParser {
p := new(textParser)
p.s = s
p.line = 1
p.cur.line = 1
return p
}
func (p *textParser) unmarshalMessage(m protoreflect.Message, terminator string) (err error) {
md := m.Descriptor()
fds := md.Fields()
// A struct is a sequence of "name: value", terminated by one of
// '>' or '}', or the end of the input. A name may also be
// "[extension]" or "[type/url]".
//
// The whole struct can also be an expanded Any message, like:
// [type/url] < ... struct contents ... >
seen := make(map[protoreflect.FieldNumber]bool)
for {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == terminator {
break
}
if tok.value == "[" {
if err := p.unmarshalExtensionOrAny(m, seen); err != nil {
return err
}
continue
}
// This is a normal, non-extension field.
name := protoreflect.Name(tok.value)
fd := fds.ByName(name)
switch {
case fd == nil:
gd := fds.ByName(protoreflect.Name(strings.ToLower(string(name))))
if gd != nil && gd.Kind() == protoreflect.GroupKind && gd.Message().Name() == name {
fd = gd
}
case fd.Kind() == protoreflect.GroupKind && fd.Message().Name() != name:
fd = nil
case fd.IsWeak() && fd.Message().IsPlaceholder():
fd = nil
}
if fd == nil {
typeName := string(md.FullName())
if m, ok := m.Interface().(Message); ok {
t := reflect.TypeOf(m)
if t.Kind() == reflect.Ptr {
typeName = t.Elem().String()
}
}
return p.errorf("unknown field name %q in %v", name, typeName)
}
if od := fd.ContainingOneof(); od != nil && m.WhichOneof(od) != nil {
return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, od.Name())
}
if fd.Cardinality() != protoreflect.Repeated && seen[fd.Number()] {
return p.errorf("non-repeated field %q was repeated", fd.Name())
}
seen[fd.Number()] = true
// Consume any colon.
if err := p.checkForColon(fd); err != nil {
return err
}
// Parse into the field.
v := m.Get(fd)
if !m.Has(fd) && (fd.IsList() || fd.IsMap() || fd.Message() != nil) {
v = m.Mutable(fd)
}
if v, err = p.unmarshalValue(v, fd); err != nil {
return err
}
m.Set(fd, v)
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
}
return nil
}
func (p *textParser) unmarshalExtensionOrAny(m protoreflect.Message, seen map[protoreflect.FieldNumber]bool) error {
name, err := p.consumeExtensionOrAnyName()
if err != nil {
return err
}
// If it contains a slash, it's an Any type URL.
if slashIdx := strings.LastIndex(name, "/"); slashIdx >= 0 {
tok := p.next()
if tok.err != nil {
return tok.err
}
// consume an optional colon
if tok.value == ":" {
tok = p.next()
if tok.err != nil {
return tok.err
}
}
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
mt, err := protoregistry.GlobalTypes.FindMessageByURL(name)
if err != nil {
return p.errorf("unrecognized message %q in google.protobuf.Any", name[slashIdx+len("/"):])
}
m2 := mt.New()
if err := p.unmarshalMessage(m2, terminator); err != nil {
return err
}
b, err := protoV2.Marshal(m2.Interface())
if err != nil {
return p.errorf("failed to marshal message of type %q: %v", name[slashIdx+len("/"):], err)
}
urlFD := m.Descriptor().Fields().ByName("type_url")
valFD := m.Descriptor().Fields().ByName("value")
if seen[urlFD.Number()] {
return p.errorf("Any message unpacked multiple times, or %q already set", urlFD.Name())
}
if seen[valFD.Number()] {
return p.errorf("Any message unpacked multiple times, or %q already set", valFD.Name())
}
m.Set(urlFD, protoreflect.ValueOfString(name))
m.Set(valFD, protoreflect.ValueOfBytes(b))
seen[urlFD.Number()] = true
seen[valFD.Number()] = true
return nil
}
xname := protoreflect.FullName(name)
xt, _ := protoregistry.GlobalTypes.FindExtensionByName(xname)
if xt == nil && isMessageSet(m.Descriptor()) {
xt, _ = protoregistry.GlobalTypes.FindExtensionByName(xname.Append("message_set_extension"))
}
if xt == nil {
return p.errorf("unrecognized extension %q", name)
}
fd := xt.TypeDescriptor()
if fd.ContainingMessage().FullName() != m.Descriptor().FullName() {
return p.errorf("extension field %q does not extend message %q", name, m.Descriptor().FullName())
}
if err := p.checkForColon(fd); err != nil {
return err
}
v := m.Get(fd)
if !m.Has(fd) && (fd.IsList() || fd.IsMap() || fd.Message() != nil) {
v = m.Mutable(fd)
}
v, err = p.unmarshalValue(v, fd)
if err != nil {
return err
}
m.Set(fd, v)
return p.consumeOptionalSeparator()
}
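// For reference, the two bracketed forms handled above look like this in the
// text format (all names are hypothetical):
//
//	[example.app.my_extension]: "value"        // extension field
//	[type.googleapis.com/example.app.Inner] {  // expanded google.protobuf.Any
//	  name: "alice"
//	}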
func (p *textParser) unmarshalValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
tok := p.next()
if tok.err != nil {
return v, tok.err
}
if tok.value == "" {
return v, p.errorf("unexpected EOF")
}
switch {
case fd.IsList():
lv := v.List()
var err error
if tok.value == "[" {
// Repeated field with list notation, like [1,2,3].
for {
vv := lv.NewElement()
vv, err = p.unmarshalSingularValue(vv, fd)
if err != nil {
return v, err
}
lv.Append(vv)
tok := p.next()
if tok.err != nil {
return v, tok.err
}
if tok.value == "]" {
break
}
if tok.value != "," {
return v, p.errorf("Expected ']' or ',' found %q", tok.value)
}
}
return v, nil
}
// One value of the repeated field.
p.back()
vv := lv.NewElement()
vv, err = p.unmarshalSingularValue(vv, fd)
if err != nil {
return v, err
}
lv.Append(vv)
return v, nil
case fd.IsMap():
// The map entry should be this sequence of tokens:
// < key : KEY value : VALUE >
// However, implementations may omit key or value, and technically
// we should support them in any order.
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return v, p.errorf("expected '{' or '<', found %q", tok.value)
}
keyFD := fd.MapKey()
valFD := fd.MapValue()
mv := v.Map()
kv := keyFD.Default()
vv := mv.NewValue()
for {
tok := p.next()
if tok.err != nil {
return v, tok.err
}
if tok.value == terminator {
break
}
var err error
switch tok.value {
case "key":
if err := p.consumeToken(":"); err != nil {
return v, err
}
if kv, err = p.unmarshalSingularValue(kv, keyFD); err != nil {
return v, err
}
if err := p.consumeOptionalSeparator(); err != nil {
return v, err
}
case "value":
if err := p.checkForColon(valFD); err != nil {
return v, err
}
if vv, err = p.unmarshalSingularValue(vv, valFD); err != nil {
return v, err
}
if err := p.consumeOptionalSeparator(); err != nil {
return v, err
}
default:
p.back()
return v, p.errorf(`expected "key", "value", or %q, found %q`, terminator, tok.value)
}
}
mv.Set(kv.MapKey(), vv)
return v, nil
default:
p.back()
return p.unmarshalSingularValue(v, fd)
}
}
func (p *textParser) unmarshalSingularValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) (protoreflect.Value, error) {
tok := p.next()
if tok.err != nil {
return v, tok.err
}
if tok.value == "" {
return v, p.errorf("unexpected EOF")
}
switch fd.Kind() {
case protoreflect.BoolKind:
switch tok.value {
case "true", "1", "t", "True":
return protoreflect.ValueOfBool(true), nil
case "false", "0", "f", "False":
return protoreflect.ValueOfBool(false), nil
}
case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind:
if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
return protoreflect.ValueOfInt32(int32(x)), nil
}
// The C++ parser accepts large positive hex numbers that use
// two's complement arithmetic to represent negative numbers.
// This feature is here for backwards compatibility with C++.
if strings.HasPrefix(tok.value, "0x") {
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
return protoreflect.ValueOfInt32(int32(-(int64(^x) + 1))), nil
}
}
case protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
if x, err := strconv.ParseInt(tok.value, 0, 64); err == nil {
return protoreflect.ValueOfInt64(int64(x)), nil
}
// The C++ parser accepts large positive hex numbers that use
// two's complement arithmetic to represent negative numbers.
// This feature is here for backwards compatibility with C++.
if strings.HasPrefix(tok.value, "0x") {
if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
return protoreflect.ValueOfInt64(int64(-(int64(^x) + 1))), nil
}
}
case protoreflect.Uint32Kind, protoreflect.Fixed32Kind:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
return protoreflect.ValueOfUint32(uint32(x)), nil
}
case protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
return protoreflect.ValueOfUint64(uint64(x)), nil
}
case protoreflect.FloatKind:
// Ignore 'f' for compatibility with output generated by C++,
// but don't remove 'f' when the value is "-inf" or "inf".
v := tok.value
if strings.HasSuffix(v, "f") && v != "-inf" && v != "inf" {
v = v[:len(v)-len("f")]
}
if x, err := strconv.ParseFloat(v, 32); err == nil {
return protoreflect.ValueOfFloat32(float32(x)), nil
}
case protoreflect.DoubleKind:
// Ignore 'f' for compatibility with output generated by C++,
// but don't remove 'f' when the value is "-inf" or "inf".
v := tok.value
if strings.HasSuffix(v, "f") && v != "-inf" && v != "inf" {
v = v[:len(v)-len("f")]
}
if x, err := strconv.ParseFloat(v, 64); err == nil {
return protoreflect.ValueOfFloat64(float64(x)), nil
}
case protoreflect.StringKind:
if isQuote(tok.value[0]) {
return protoreflect.ValueOfString(tok.unquoted), nil
}
case protoreflect.BytesKind:
if isQuote(tok.value[0]) {
return protoreflect.ValueOfBytes([]byte(tok.unquoted)), nil
}
case protoreflect.EnumKind:
if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
return protoreflect.ValueOfEnum(protoreflect.EnumNumber(x)), nil
}
vd := fd.Enum().Values().ByName(protoreflect.Name(tok.value))
if vd != nil {
return protoreflect.ValueOfEnum(vd.Number()), nil
}
case protoreflect.MessageKind, protoreflect.GroupKind:
var terminator string
switch tok.value {
case "{":
terminator = "}"
case "<":
terminator = ">"
default:
return v, p.errorf("expected '{' or '<', found %q", tok.value)
}
err := p.unmarshalMessage(v.Message(), terminator)
return v, err
default:
panic(fmt.Sprintf("invalid kind %v", fd.Kind()))
}
return v, p.errorf("invalid %v: %v", fd.Kind(), tok.value)
}
// Consume a ':' from the input stream (if the next token is a colon),
// returning an error if a colon is needed but not present.
func (p *textParser) checkForColon(fd protoreflect.FieldDescriptor) *ParseError {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ":" {
if fd.Message() == nil {
return p.errorf("expected ':', found %q", tok.value)
}
p.back()
}
return nil
}
// consumeExtensionOrAnyName consumes an extension name or an Any type URL and
// the following ']'. It returns the name or URL consumed.
func (p *textParser) consumeExtensionOrAnyName() (string, error) {
tok := p.next()
if tok.err != nil {
return "", tok.err
}
// If the extension name or type URL is quoted, it's a single token.
if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] {
name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0]))
if err != nil {
return "", err
}
return name, p.consumeToken("]")
}
// Consume everything up to "]"
var parts []string
for tok.value != "]" {
parts = append(parts, tok.value)
tok = p.next()
if tok.err != nil {
return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
}
if p.done && tok.value != "]" {
return "", p.errorf("unclosed type_url or extension name")
}
}
return strings.Join(parts, ""), nil
}
// consumeOptionalSeparator consumes an optional semicolon or comma.
// It is used in unmarshalMessage to provide backward compatibility.
func (p *textParser) consumeOptionalSeparator() error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ";" && tok.value != "," {
p.back()
}
return nil
}
func (p *textParser) errorf(format string, a ...interface{}) *ParseError {
pe := &ParseError{fmt.Sprintf(format, a...), p.cur.line, p.cur.offset}
p.cur.err = pe
p.done = true
return pe
}
func (p *textParser) skipWhitespace() {
i := 0
for i < len(p.s) && (isWhitespace(p.s[i]) || p.s[i] == '#') {
if p.s[i] == '#' {
// comment; skip to end of line or input
for i < len(p.s) && p.s[i] != '\n' {
i++
}
if i == len(p.s) {
break
}
}
if p.s[i] == '\n' {
p.line++
}
i++
}
p.offset += i
p.s = p.s[i:len(p.s)]
if len(p.s) == 0 {
p.done = true
}
}
func (p *textParser) advance() {
// Skip whitespace
p.skipWhitespace()
if p.done {
return
}
// Start of non-whitespace
p.cur.err = nil
p.cur.offset, p.cur.line = p.offset, p.line
p.cur.unquoted = ""
switch p.s[0] {
case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/':
// Single symbol
p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)]
case '"', '\'':
// Quoted string
i := 1
for i < len(p.s) && p.s[i] != p.s[0] && p.s[i] != '\n' {
if p.s[i] == '\\' && i+1 < len(p.s) {
// skip escaped char
i++
}
i++
}
if i >= len(p.s) || p.s[i] != p.s[0] {
p.errorf("unmatched quote")
return
}
unq, err := unquoteC(p.s[1:i], rune(p.s[0]))
if err != nil {
p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err)
return
}
p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)]
p.cur.unquoted = unq
default:
i := 0
for i < len(p.s) && isIdentOrNumberChar(p.s[i]) {
i++
}
if i == 0 {
p.errorf("unexpected byte %#x", p.s[0])
return
}
p.cur.value, p.s = p.s[0:i], p.s[i:len(p.s)]
}
p.offset += len(p.cur.value)
}
// Back off the parser by one token. Can only be done between calls to next().
// It makes the next advance() a no-op.
func (p *textParser) back() { p.backed = true }
// Advances the parser and returns the new current token.
func (p *textParser) next() *token {
if p.backed || p.done {
p.backed = false
return &p.cur
}
p.advance()
if p.done {
p.cur.value = ""
} else if len(p.cur.value) > 0 && isQuote(p.cur.value[0]) {
// Look for multiple quoted strings separated by whitespace,
// and concatenate them.
cat := p.cur
for {
p.skipWhitespace()
if p.done || !isQuote(p.s[0]) {
break
}
p.advance()
if p.cur.err != nil {
return &p.cur
}
cat.value += " " + p.cur.value
cat.unquoted += p.cur.unquoted
}
p.done = false // parser may have seen EOF, but we want to return cat
p.cur = cat
}
return &p.cur
}
func (p *textParser) consumeToken(s string) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != s {
p.back()
return p.errorf("expected %q, found %q", s, tok.value)
}
return nil
}
var errBadUTF8 = errors.New("proto: bad UTF-8")
func unquoteC(s string, quote rune) (string, error) {
// This is based on C++'s tokenizer.cc.
// Despite its name, this is *not* parsing C syntax.
// For instance, "\0" is an invalid quoted string.
// Avoid allocation in trivial cases.
simple := true
for _, r := range s {
if r == '\\' || r == quote {
simple = false
break
}
}
if simple {
return s, nil
}
buf := make([]byte, 0, 3*len(s)/2)
for len(s) > 0 {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", errBadUTF8
}
s = s[n:]
if r != '\\' {
if r < utf8.RuneSelf {
buf = append(buf, byte(r))
} else {
buf = append(buf, string(r)...)
}
continue
}
ch, tail, err := unescape(s)
if err != nil {
return "", err
}
buf = append(buf, ch...)
s = tail
}
return string(buf), nil
}
func unescape(s string) (ch string, tail string, err error) {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", "", errBadUTF8
}
s = s[n:]
switch r {
case 'a':
return "\a", s, nil
case 'b':
return "\b", s, nil
case 'f':
return "\f", s, nil
case 'n':
return "\n", s, nil
case 'r':
return "\r", s, nil
case 't':
return "\t", s, nil
case 'v':
return "\v", s, nil
case '?':
return "?", s, nil // trigraph workaround
case '\'', '"', '\\':
return string(r), s, nil
case '0', '1', '2', '3', '4', '5', '6', '7':
if len(s) < 2 {
return "", "", fmt.Errorf(`\%c requires 2 following digits`, r)
}
ss := string(r) + s[:2]
s = s[2:]
i, err := strconv.ParseUint(ss, 8, 8)
if err != nil {
return "", "", fmt.Errorf(`\%s contains non-octal digits`, ss)
}
return string([]byte{byte(i)}), s, nil
case 'x', 'X', 'u', 'U':
var n int
switch r {
case 'x', 'X':
n = 2
case 'u':
n = 4
case 'U':
n = 8
}
if len(s) < n {
return "", "", fmt.Errorf(`\%c requires %d following digits`, r, n)
}
ss := s[:n]
s = s[n:]
i, err := strconv.ParseUint(ss, 16, 64)
if err != nil {
return "", "", fmt.Errorf(`\%c%s contains non-hexadecimal digits`, r, ss)
}
if r == 'x' || r == 'X' {
return string([]byte{byte(i)}), s, nil
}
if i > utf8.MaxRune {
return "", "", fmt.Errorf(`\%c%s is not a valid Unicode code point`, r, ss)
}
return string(i), s, nil
}
return "", "", fmt.Errorf(`unknown escape \%c`, r)
}
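// For reference, a few inputs (the text after a backslash) and what unescape
// yields from the cases above:
//
//	unescape(`101rest`) // "A", "rest", nil  (octal \101)
//	unescape(`x41rest`) // "A", "rest", nil  (hex \x41)
//	unescape(`u00e9x`)  // "é", "x", nil     (Unicode code point \u00e9)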
func isIdentOrNumberChar(c byte) bool {
switch {
case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z':
return true
case '0' <= c && c <= '9':
return true
}
switch c {
case '-', '+', '.', '_':
return true
}
return false
}
func isWhitespace(c byte) bool {
switch c {
case ' ', '\t', '\n', '\r':
return true
}
return false
}
func isQuote(c byte) bool {
switch c {
case '"', '\'':
return true
}
return false
}


@@ -1,560 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
"bytes"
"encoding"
"fmt"
"io"
"math"
"sort"
"strings"
"google.golang.org/protobuf/encoding/prototext"
"google.golang.org/protobuf/encoding/protowire"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
)
const wrapTextMarshalV2 = false
// TextMarshaler is a configurable text format marshaler.
type TextMarshaler struct {
Compact bool // use compact text format (one line)
ExpandAny bool // expand google.protobuf.Any messages of known types
}
// Marshal writes the proto text format of m to w.
func (tm *TextMarshaler) Marshal(w io.Writer, m Message) error {
b, err := tm.marshal(m)
if len(b) > 0 {
if _, err := w.Write(b); err != nil {
return err
}
}
return err
}
// Text returns a proto text formatted string of m.
func (tm *TextMarshaler) Text(m Message) string {
b, _ := tm.marshal(m)
return string(b)
}
func (tm *TextMarshaler) marshal(m Message) ([]byte, error) {
mr := MessageReflect(m)
if mr == nil || !mr.IsValid() {
return []byte("<nil>"), nil
}
if wrapTextMarshalV2 {
if m, ok := m.(encoding.TextMarshaler); ok {
return m.MarshalText()
}
opts := prototext.MarshalOptions{
AllowPartial: true,
EmitUnknown: true,
}
if !tm.Compact {
opts.Indent = " "
}
if !tm.ExpandAny {
opts.Resolver = (*protoregistry.Types)(nil)
}
return opts.Marshal(mr.Interface())
} else {
w := &textWriter{
compact: tm.Compact,
expandAny: tm.ExpandAny,
complete: true,
}
if m, ok := m.(encoding.TextMarshaler); ok {
b, err := m.MarshalText()
if err != nil {
return nil, err
}
w.Write(b)
return w.buf, nil
}
err := w.writeMessage(mr)
return w.buf, err
}
}
var (
defaultTextMarshaler = TextMarshaler{}
compactTextMarshaler = TextMarshaler{Compact: true}
)
// MarshalText writes the proto text format of m to w.
func MarshalText(w io.Writer, m Message) error { return defaultTextMarshaler.Marshal(w, m) }
// MarshalTextString returns a proto text formatted string of m.
func MarshalTextString(m Message) string { return defaultTextMarshaler.Text(m) }
// CompactText writes the compact proto text format of m to w.
func CompactText(w io.Writer, m Message) error { return compactTextMarshaler.Marshal(w, m) }
// CompactTextString returns a compact proto text formatted string of m.
func CompactTextString(m Message) string { return compactTextMarshaler.Text(m) }
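// Illustrative sketch contrasting the default and compact helpers;
// pb.MyMessage is a hypothetical generated type with a "name" field:
//
//	m := &pb.MyMessage{Name: String("alice")}
//	_ = MarshalTextString(m) // one field per line:  name: "alice"
//	_ = CompactTextString(m) // single line output:  name:"alice"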
var (
newline = []byte("\n")
endBraceNewline = []byte("}\n")
posInf = []byte("inf")
negInf = []byte("-inf")
nan = []byte("nan")
)
// textWriter is an io.Writer that tracks its indentation level.
type textWriter struct {
compact bool // same as TextMarshaler.Compact
expandAny bool // same as TextMarshaler.ExpandAny
complete bool // whether the current position is a complete line
indent int // indentation level; never negative
buf []byte
}
func (w *textWriter) Write(p []byte) (n int, _ error) {
newlines := bytes.Count(p, newline)
if newlines == 0 {
if !w.compact && w.complete {
w.writeIndent()
}
w.buf = append(w.buf, p...)
w.complete = false
return len(p), nil
}
frags := bytes.SplitN(p, newline, newlines+1)
if w.compact {
for i, frag := range frags {
if i > 0 {
w.buf = append(w.buf, ' ')
n++
}
w.buf = append(w.buf, frag...)
n += len(frag)
}
return n, nil
}
for i, frag := range frags {
if w.complete {
w.writeIndent()
}
w.buf = append(w.buf, frag...)
n += len(frag)
if i+1 < len(frags) {
w.buf = append(w.buf, '\n')
n++
}
}
w.complete = len(frags[len(frags)-1]) == 0
return n, nil
}
func (w *textWriter) WriteByte(c byte) error {
if w.compact && c == '\n' {
c = ' '
}
if !w.compact && w.complete {
w.writeIndent()
}
w.buf = append(w.buf, c)
w.complete = c == '\n'
return nil
}
func (w *textWriter) writeName(fd protoreflect.FieldDescriptor) {
if !w.compact && w.complete {
w.writeIndent()
}
w.complete = false
if fd.Kind() != protoreflect.GroupKind {
w.buf = append(w.buf, fd.Name()...)
w.WriteByte(':')
} else {
// Use message type name for group field name.
w.buf = append(w.buf, fd.Message().Name()...)
}
if !w.compact {
w.WriteByte(' ')
}
}
func requiresQuotes(u string) bool {
// When the type URL contains any characters other than [0-9A-Za-z._/], it must be quoted.
for _, ch := range u {
switch {
case ch == '.' || ch == '/' || ch == '_':
continue
case '0' <= ch && ch <= '9':
continue
case 'A' <= ch && ch <= 'Z':
continue
case 'a' <= ch && ch <= 'z':
continue
default:
return true
}
}
return false
}
// writeProto3Any writes an expanded google.protobuf.Any message.
//
// It returns (false, nil) if the value in m can't be unmarshaled (e.g. because
// required messages are not linked in).
//
// It returns (true, error) when sv was written in expanded format or an error
// was encountered.
func (w *textWriter) writeProto3Any(m protoreflect.Message) (bool, error) {
md := m.Descriptor()
fdURL := md.Fields().ByName("type_url")
fdVal := md.Fields().ByName("value")
url := m.Get(fdURL).String()
mt, err := protoregistry.GlobalTypes.FindMessageByURL(url)
if err != nil {
return false, nil
}
b := m.Get(fdVal).Bytes()
m2 := mt.New()
if err := proto.Unmarshal(b, m2.Interface()); err != nil {
return false, nil
}
w.Write([]byte("["))
if requiresQuotes(url) {
w.writeQuotedString(url)
} else {
w.Write([]byte(url))
}
if w.compact {
w.Write([]byte("]:<"))
} else {
w.Write([]byte("]: <\n"))
w.indent++
}
if err := w.writeMessage(m2); err != nil {
return true, err
}
if w.compact {
w.Write([]byte("> "))
} else {
w.indent--
w.Write([]byte(">\n"))
}
return true, nil
}
func (w *textWriter) writeMessage(m protoreflect.Message) error {
md := m.Descriptor()
if w.expandAny && md.FullName() == "google.protobuf.Any" {
if canExpand, err := w.writeProto3Any(m); canExpand {
return err
}
}
fds := md.Fields()
for i := 0; i < fds.Len(); {
fd := fds.Get(i)
if od := fd.ContainingOneof(); od != nil {
fd = m.WhichOneof(od)
i += od.Fields().Len()
} else {
i++
}
if fd == nil || !m.Has(fd) {
continue
}
switch {
case fd.IsList():
lv := m.Get(fd).List()
for j := 0; j < lv.Len(); j++ {
w.writeName(fd)
v := lv.Get(j)
if err := w.writeSingularValue(v, fd); err != nil {
return err
}
w.WriteByte('\n')
}
case fd.IsMap():
kfd := fd.MapKey()
vfd := fd.MapValue()
mv := m.Get(fd).Map()
type entry struct{ key, val protoreflect.Value }
var entries []entry
mv.Range(func(k protoreflect.MapKey, v protoreflect.Value) bool {
entries = append(entries, entry{k.Value(), v})
return true
})
sort.Slice(entries, func(i, j int) bool {
switch kfd.Kind() {
case protoreflect.BoolKind:
return !entries[i].key.Bool() && entries[j].key.Bool()
case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind, protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
return entries[i].key.Int() < entries[j].key.Int()
case protoreflect.Uint32Kind, protoreflect.Fixed32Kind, protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
return entries[i].key.Uint() < entries[j].key.Uint()
case protoreflect.StringKind:
return entries[i].key.String() < entries[j].key.String()
default:
panic("invalid kind")
}
})
for _, entry := range entries {
w.writeName(fd)
w.WriteByte('<')
if !w.compact {
w.WriteByte('\n')
}
w.indent++
w.writeName(kfd)
if err := w.writeSingularValue(entry.key, kfd); err != nil {
return err
}
w.WriteByte('\n')
w.writeName(vfd)
if err := w.writeSingularValue(entry.val, vfd); err != nil {
return err
}
w.WriteByte('\n')
w.indent--
w.WriteByte('>')
w.WriteByte('\n')
}
default:
w.writeName(fd)
if err := w.writeSingularValue(m.Get(fd), fd); err != nil {
return err
}
w.WriteByte('\n')
}
}
if b := m.GetUnknown(); len(b) > 0 {
w.writeUnknownFields(b)
}
return w.writeExtensions(m)
}
func (w *textWriter) writeSingularValue(v protoreflect.Value, fd protoreflect.FieldDescriptor) error {
switch fd.Kind() {
case protoreflect.FloatKind, protoreflect.DoubleKind:
switch vf := v.Float(); {
case math.IsInf(vf, +1):
w.Write(posInf)
case math.IsInf(vf, -1):
w.Write(negInf)
case math.IsNaN(vf):
w.Write(nan)
default:
fmt.Fprint(w, v.Interface())
}
case protoreflect.StringKind:
// NOTE: This does not validate UTF-8 for historical reasons.
w.writeQuotedString(string(v.String()))
case protoreflect.BytesKind:
w.writeQuotedString(string(v.Bytes()))
case protoreflect.MessageKind, protoreflect.GroupKind:
var bra, ket byte = '<', '>'
if fd.Kind() == protoreflect.GroupKind {
bra, ket = '{', '}'
}
w.WriteByte(bra)
if !w.compact {
w.WriteByte('\n')
}
w.indent++
m := v.Message()
if m2, ok := m.Interface().(encoding.TextMarshaler); ok {
b, err := m2.MarshalText()
if err != nil {
return err
}
w.Write(b)
} else {
w.writeMessage(m)
}
w.indent--
w.WriteByte(ket)
case protoreflect.EnumKind:
if ev := fd.Enum().Values().ByNumber(v.Enum()); ev != nil {
fmt.Fprint(w, ev.Name())
} else {
fmt.Fprint(w, v.Enum())
}
default:
fmt.Fprint(w, v.Interface())
}
return nil
}
// writeQuotedString writes a quoted string in the protocol buffer text format.
func (w *textWriter) writeQuotedString(s string) {
w.WriteByte('"')
for i := 0; i < len(s); i++ {
switch c := s[i]; c {
case '\n':
w.buf = append(w.buf, `\n`...)
case '\r':
w.buf = append(w.buf, `\r`...)
case '\t':
w.buf = append(w.buf, `\t`...)
case '"':
w.buf = append(w.buf, `\"`...)
case '\\':
w.buf = append(w.buf, `\\`...)
default:
if isPrint := c >= 0x20 && c < 0x7f; isPrint {
w.buf = append(w.buf, c)
} else {
w.buf = append(w.buf, fmt.Sprintf(`\%03o`, c)...)
}
}
}
w.WriteByte('"')
}
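// For reference, a few inputs and the quoted forms produced above:
//
//	"a\nb"       -> `"a\nb"`        (control characters use short escapes)
//	`say "hi"`   -> `"say \"hi\""`  (quotes and backslashes are escaped)
//	"\x01"       -> `"\001"`        (other non-printable bytes use octal)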
func (w *textWriter) writeUnknownFields(b []byte) {
if !w.compact {
fmt.Fprintf(w, "/* %d unknown bytes */\n", len(b))
}
for len(b) > 0 {
num, wtyp, n := protowire.ConsumeTag(b)
if n < 0 {
return
}
b = b[n:]
if wtyp == protowire.EndGroupType {
w.indent--
w.Write(endBraceNewline)
continue
}
fmt.Fprint(w, num)
if wtyp != protowire.StartGroupType {
w.WriteByte(':')
}
if !w.compact || wtyp == protowire.StartGroupType {
w.WriteByte(' ')
}
switch wtyp {
case protowire.VarintType:
v, n := protowire.ConsumeVarint(b)
if n < 0 {
return
}
b = b[n:]
fmt.Fprint(w, v)
case protowire.Fixed32Type:
v, n := protowire.ConsumeFixed32(b)
if n < 0 {
return
}
b = b[n:]
fmt.Fprint(w, v)
case protowire.Fixed64Type:
v, n := protowire.ConsumeFixed64(b)
if n < 0 {
return
}
b = b[n:]
fmt.Fprint(w, v)
case protowire.BytesType:
v, n := protowire.ConsumeBytes(b)
if n < 0 {
return
}
b = b[n:]
fmt.Fprintf(w, "%q", v)
case protowire.StartGroupType:
w.WriteByte('{')
w.indent++
default:
fmt.Fprintf(w, "/* unknown wire type %d */", wtyp)
}
w.WriteByte('\n')
}
}
// writeExtensions writes all the extensions in m.
func (w *textWriter) writeExtensions(m protoreflect.Message) error {
md := m.Descriptor()
if md.ExtensionRanges().Len() == 0 {
return nil
}
type ext struct {
desc protoreflect.FieldDescriptor
val protoreflect.Value
}
var exts []ext
m.Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
if fd.IsExtension() {
exts = append(exts, ext{fd, v})
}
return true
})
sort.Slice(exts, func(i, j int) bool {
return exts[i].desc.Number() < exts[j].desc.Number()
})
for _, ext := range exts {
// For message set, use the name of the message as the extension name.
name := string(ext.desc.FullName())
if isMessageSet(ext.desc.ContainingMessage()) {
name = strings.TrimSuffix(name, ".message_set_extension")
}
if !ext.desc.IsList() {
if err := w.writeSingularExtension(name, ext.val, ext.desc); err != nil {
return err
}
} else {
lv := ext.val.List()
for i := 0; i < lv.Len(); i++ {
if err := w.writeSingularExtension(name, lv.Get(i), ext.desc); err != nil {
return err
}
}
}
}
return nil
}
func (w *textWriter) writeSingularExtension(name string, v protoreflect.Value, fd protoreflect.FieldDescriptor) error {
fmt.Fprintf(w, "[%s]:", name)
if !w.compact {
w.WriteByte(' ')
}
if err := w.writeSingularValue(v, fd); err != nil {
return err
}
w.WriteByte('\n')
return nil
}
func (w *textWriter) writeIndent() {
if !w.complete {
return
}
for i := 0; i < w.indent*2; i++ {
w.buf = append(w.buf, ' ')
}
w.complete = false
}

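For orientation, a minimal sketch of how the legacy text-format entry point backed by this writer is typically called, assuming the well-known `durationpb` type as a stand-in message (exact output whitespace may differ):

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	durationpb "google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// MarshalTextString renders a message in the protobuf text format,
	// the same syntax the textWriter emits.
	d := &durationpb.Duration{Seconds: 3, Nanos: 500000000}
	fmt.Print(proto.MarshalTextString(d))
	// Expected output, roughly:
	//   seconds: 3
	//   nanos: 500000000
}
```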

@@ -1,78 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
import (
protoV2 "google.golang.org/protobuf/proto"
"google.golang.org/protobuf/runtime/protoiface"
)
// Size returns the size in bytes of the wire-format encoding of m.
func Size(m Message) int {
if m == nil {
return 0
}
mi := MessageV2(m)
return protoV2.Size(mi)
}
// Marshal returns the wire-format encoding of m.
func Marshal(m Message) ([]byte, error) {
b, err := marshalAppend(nil, m, false)
if b == nil {
b = zeroBytes
}
return b, err
}
var zeroBytes = make([]byte, 0, 0)
func marshalAppend(buf []byte, m Message, deterministic bool) ([]byte, error) {
if m == nil {
return nil, ErrNil
}
mi := MessageV2(m)
nbuf, err := protoV2.MarshalOptions{
Deterministic: deterministic,
AllowPartial: true,
}.MarshalAppend(buf, mi)
if err != nil {
return buf, err
}
if len(buf) == len(nbuf) {
if !mi.ProtoReflect().IsValid() {
return buf, ErrNil
}
}
return nbuf, checkRequiredNotSet(mi)
}
// Unmarshal parses a wire-format message in b and places the decoded results in m.
//
// Unmarshal resets m before starting to unmarshal, so any existing data in m is always
// removed. Use UnmarshalMerge to preserve and append to existing data.
func Unmarshal(b []byte, m Message) error {
m.Reset()
return UnmarshalMerge(b, m)
}
// UnmarshalMerge parses a wire-format message in b and places the decoded results in m.
func UnmarshalMerge(b []byte, m Message) error {
mi := MessageV2(m)
out, err := protoV2.UnmarshalOptions{
AllowPartial: true,
Merge: true,
}.UnmarshalState(protoiface.UnmarshalInput{
Buf: b,
Message: mi.ProtoReflect(),
})
if err != nil {
return err
}
if out.Flags&protoiface.UnmarshalInitialized > 0 {
return nil
}
return checkRequiredNotSet(mi)
}

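A minimal usage sketch for the Size/Marshal/Unmarshal wrappers above, again assuming `durationpb.Duration` as a stand-in message:

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	durationpb "google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	in := &durationpb.Duration{Seconds: 90}

	// Marshal produces the wire-format encoding of the message.
	b, err := proto.Marshal(in)
	if err != nil {
		log.Fatal(err)
	}

	// Unmarshal resets the target before decoding, as documented above;
	// UnmarshalMerge would preserve any existing data instead.
	out := &durationpb.Duration{}
	if err := proto.Unmarshal(b, out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Seconds, proto.Size(out)) // 90, plus the encoded size in bytes
}
```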

@@ -1,34 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package proto
// Bool stores v in a new bool value and returns a pointer to it.
func Bool(v bool) *bool { return &v }
// Int stores v in a new int32 value and returns a pointer to it.
//
// Deprecated: Use Int32 instead.
func Int(v int) *int32 { return Int32(int32(v)) }
// Int32 stores v in a new int32 value and returns a pointer to it.
func Int32(v int32) *int32 { return &v }
// Int64 stores v in a new int64 value and returns a pointer to it.
func Int64(v int64) *int64 { return &v }
// Uint32 stores v in a new uint32 value and returns a pointer to it.
func Uint32(v uint32) *uint32 { return &v }
// Uint64 stores v in a new uint64 value and returns a pointer to it.
func Uint64(v uint64) *uint64 { return &v }
// Float32 stores v in a new float32 value and returns a pointer to it.
func Float32(v float32) *float32 { return &v }
// Float64 stores v in a new float64 value and returns a pointer to it.
func Float64(v float64) *float64 { return &v }
// String stores v in a new string value and returns a pointer to it.
func String(v string) *string { return &v }

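A short sketch of why these pointer helpers exist; `Settings` below is a hypothetical stand-in for a proto2-style generated struct whose optional scalar fields are pointers:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

// Settings is a hypothetical stand-in for a proto2-style generated struct,
// in which optional scalar fields are represented as pointers.
type Settings struct {
	Enabled *bool
	Retries *int32
	Name    *string
}

func main() {
	// The helpers return pointers, so optional fields can be set inline.
	s := Settings{
		Enabled: proto.Bool(true),
		Retries: proto.Int32(3),
		Name:    proto.String("default"),
	}
	fmt.Println(*s.Enabled, *s.Retries, *s.Name) // true 3 default
}
```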

@@ -1,165 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ptypes
import (
"fmt"
"strings"
"github.com/golang/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
anypb "github.com/golang/protobuf/ptypes/any"
)
const urlPrefix = "type.googleapis.com/"
// AnyMessageName returns the message name contained in an anypb.Any message.
// Most type assertions should use the Is function instead.
func AnyMessageName(any *anypb.Any) (string, error) {
name, err := anyMessageName(any)
return string(name), err
}
func anyMessageName(any *anypb.Any) (protoreflect.FullName, error) {
if any == nil {
return "", fmt.Errorf("message is nil")
}
name := protoreflect.FullName(any.TypeUrl)
if i := strings.LastIndex(any.TypeUrl, "/"); i >= 0 {
name = name[i+len("/"):]
}
if !name.IsValid() {
return "", fmt.Errorf("message type url %q is invalid", any.TypeUrl)
}
return name, nil
}
// MarshalAny marshals the given message m into an anypb.Any message.
func MarshalAny(m proto.Message) (*anypb.Any, error) {
switch dm := m.(type) {
case DynamicAny:
m = dm.Message
case *DynamicAny:
if dm == nil {
return nil, proto.ErrNil
}
m = dm.Message
}
b, err := proto.Marshal(m)
if err != nil {
return nil, err
}
return &anypb.Any{TypeUrl: urlPrefix + proto.MessageName(m), Value: b}, nil
}
// Empty returns a new message of the type specified in an anypb.Any message.
// It returns protoregistry.NotFound if the corresponding message type could not
// be resolved in the global registry.
func Empty(any *anypb.Any) (proto.Message, error) {
name, err := anyMessageName(any)
if err != nil {
return nil, err
}
mt, err := protoregistry.GlobalTypes.FindMessageByName(name)
if err != nil {
return nil, err
}
return proto.MessageV1(mt.New().Interface()), nil
}
// UnmarshalAny unmarshals the encoded value contained in the anypb.Any message
// into the provided message m. It returns an error if the target message
// does not match the type in the Any message or if an unmarshal error occurs.
//
// The target message m may be a *DynamicAny message. If the underlying message
// type could not be resolved, then this returns protoregistry.NotFound.
func UnmarshalAny(any *anypb.Any, m proto.Message) error {
if dm, ok := m.(*DynamicAny); ok {
if dm.Message == nil {
var err error
dm.Message, err = Empty(any)
if err != nil {
return err
}
}
m = dm.Message
}
anyName, err := AnyMessageName(any)
if err != nil {
return err
}
msgName := proto.MessageName(m)
if anyName != msgName {
return fmt.Errorf("mismatched message type: got %q want %q", anyName, msgName)
}
return proto.Unmarshal(any.Value, m)
}
// Is reports whether the Any message contains a message of the specified type.
func Is(any *anypb.Any, m proto.Message) bool {
if any == nil || m == nil {
return false
}
name := proto.MessageName(m)
if !strings.HasSuffix(any.TypeUrl, name) {
return false
}
return len(any.TypeUrl) == len(name) || any.TypeUrl[len(any.TypeUrl)-len(name)-1] == '/'
}
// DynamicAny is a value that can be passed to UnmarshalAny to automatically
// allocate a proto.Message for the type specified in an anypb.Any message.
// The allocated message is stored in the embedded proto.Message.
//
// Example:
// var x ptypes.DynamicAny
// if err := ptypes.UnmarshalAny(a, &x); err != nil { ... }
// fmt.Printf("unmarshaled message: %v", x.Message)
type DynamicAny struct{ proto.Message }
func (m DynamicAny) String() string {
if m.Message == nil {
return "<nil>"
}
return m.Message.String()
}
func (m DynamicAny) Reset() {
if m.Message == nil {
return
}
m.Message.Reset()
}
func (m DynamicAny) ProtoMessage() {
return
}
func (m DynamicAny) ProtoReflect() protoreflect.Message {
if m.Message == nil {
return nil
}
return dynamicAny{proto.MessageReflect(m.Message)}
}
type dynamicAny struct{ protoreflect.Message }
func (m dynamicAny) Type() protoreflect.MessageType {
return dynamicAnyType{m.Message.Type()}
}
func (m dynamicAny) New() protoreflect.Message {
return dynamicAnyType{m.Message.Type()}.New()
}
func (m dynamicAny) Interface() protoreflect.ProtoMessage {
return DynamicAny{proto.MessageV1(m.Message.Interface())}
}
type dynamicAnyType struct{ protoreflect.MessageType }
func (t dynamicAnyType) New() protoreflect.Message {
return dynamicAny{t.MessageType.New()}
}
func (t dynamicAnyType) Zero() protoreflect.Message {
return dynamicAny{t.MessageType.Zero()}
}

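A minimal sketch of packing and unpacking an `anypb.Any` with the helpers above, assuming `durationpb.Duration` as the payload type:

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/ptypes"
	durationpb "google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Pack a message into an Any; the type URL records the payload type.
	src := &durationpb.Duration{Seconds: 30}
	a, err := ptypes.MarshalAny(src)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(a.TypeUrl) // type.googleapis.com/google.protobuf.Duration

	// Check the type, then unpack into a typed target.
	dst := &durationpb.Duration{}
	if !ptypes.Is(a, dst) {
		log.Fatal("unexpected payload type")
	}
	if err := ptypes.UnmarshalAny(a, dst); err != nil {
		log.Fatal(err)
	}
	fmt.Println(dst.Seconds) // 30
}
```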

@@ -1,62 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/any/any.proto
package any
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
anypb "google.golang.org/protobuf/types/known/anypb"
reflect "reflect"
)
// Symbols defined in public import of google/protobuf/any.proto.
type Any = anypb.Any
var File_github_com_golang_protobuf_ptypes_any_any_proto protoreflect.FileDescriptor
var file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc = []byte{
0x0a, 0x2f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c,
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79,
0x70, 0x65, 0x73, 0x2f, 0x61, 0x6e, 0x79, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x1a, 0x19, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x75, 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x2b, 0x5a, 0x29,
0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, 0x61, 0x6e,
0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, 0x70, 0x65,
0x73, 0x2f, 0x61, 0x6e, 0x79, 0x3b, 0x61, 0x6e, 0x79, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x33,
}
var file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes = []interface{}{}
var file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_golang_protobuf_ptypes_any_any_proto_init() }
func file_github_com_golang_protobuf_ptypes_any_any_proto_init() {
if File_github_com_golang_protobuf_ptypes_any_any_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc,
NumEnums: 0,
NumMessages: 0,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes,
DependencyIndexes: file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs,
}.Build()
File_github_com_golang_protobuf_ptypes_any_any_proto = out.File
file_github_com_golang_protobuf_ptypes_any_any_proto_rawDesc = nil
file_github_com_golang_protobuf_ptypes_any_any_proto_goTypes = nil
file_github_com_golang_protobuf_ptypes_any_any_proto_depIdxs = nil
}


@@ -1,6 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package ptypes provides functionality for interacting with well-known types.
package ptypes


@@ -1,72 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ptypes
import (
"errors"
"fmt"
"time"
durationpb "github.com/golang/protobuf/ptypes/duration"
)
// Range of google.protobuf.Duration as specified in duration.proto.
// This is about 10,000 years in seconds.
const (
maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60)
minSeconds = -maxSeconds
)
// Duration converts a durationpb.Duration to a time.Duration.
// Duration returns an error if dur is invalid or overflows a time.Duration.
func Duration(dur *durationpb.Duration) (time.Duration, error) {
if err := validateDuration(dur); err != nil {
return 0, err
}
d := time.Duration(dur.Seconds) * time.Second
if int64(d/time.Second) != dur.Seconds {
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", dur)
}
if dur.Nanos != 0 {
d += time.Duration(dur.Nanos) * time.Nanosecond
if (d < 0) != (dur.Nanos < 0) {
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", dur)
}
}
return d, nil
}
// DurationProto converts a time.Duration to a durationpb.Duration.
func DurationProto(d time.Duration) *durationpb.Duration {
nanos := d.Nanoseconds()
secs := nanos / 1e9
nanos -= secs * 1e9
return &durationpb.Duration{
Seconds: int64(secs),
Nanos: int32(nanos),
}
}
// validateDuration determines whether the durationpb.Duration is valid
// according to the definition in google/protobuf/duration.proto.
// A valid durationpb.Duration may still be too large to fit into a time.Duration.
// Note that the range of durationpb.Duration is about 10,000 years,
// while the range of time.Duration is about 290 years.
func validateDuration(dur *durationpb.Duration) error {
if dur == nil {
return errors.New("duration: nil Duration")
}
if dur.Seconds < minSeconds || dur.Seconds > maxSeconds {
return fmt.Errorf("duration: %v: seconds out of range", dur)
}
if dur.Nanos <= -1e9 || dur.Nanos >= 1e9 {
return fmt.Errorf("duration: %v: nanos out of range", dur)
}
// Seconds and Nanos must have the same sign, unless d.Nanos is zero.
if (dur.Seconds < 0 && dur.Nanos > 0) || (dur.Seconds > 0 && dur.Nanos < 0) {
return fmt.Errorf("duration: %v: seconds and nanos have different signs", dur)
}
return nil
}

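A minimal round-trip sketch for `DurationProto` and `Duration`:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	// time.Duration -> durationpb.Duration
	pb := ptypes.DurationProto(90 * time.Second)
	fmt.Println(pb.Seconds, pb.Nanos) // 90 0

	// durationpb.Duration -> time.Duration, with range validation
	d, err := ptypes.Duration(pb)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(d) // 1m30s
}
```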

@@ -1,63 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/duration/duration.proto
package duration
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
durationpb "google.golang.org/protobuf/types/known/durationpb"
reflect "reflect"
)
// Symbols defined in public import of google/protobuf/duration.proto.
type Duration = durationpb.Duration
var File_github_com_golang_protobuf_ptypes_duration_duration_proto protoreflect.FileDescriptor
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc = []byte{
0x0a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c,
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79,
0x70, 0x65, 0x73, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x64, 0x75, 0x72,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f,
0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72,
0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x35, 0x5a, 0x33, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c, 0x61, 0x6e, 0x67,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79, 0x70, 0x65, 0x73,
0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x3b, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes = []interface{}{}
var file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_golang_protobuf_ptypes_duration_duration_proto_init() }
func file_github_com_golang_protobuf_ptypes_duration_duration_proto_init() {
if File_github_com_golang_protobuf_ptypes_duration_duration_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc,
NumEnums: 0,
NumMessages: 0,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes,
DependencyIndexes: file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs,
}.Build()
File_github_com_golang_protobuf_ptypes_duration_duration_proto = out.File
file_github_com_golang_protobuf_ptypes_duration_duration_proto_rawDesc = nil
file_github_com_golang_protobuf_ptypes_duration_duration_proto_goTypes = nil
file_github_com_golang_protobuf_ptypes_duration_duration_proto_depIdxs = nil
}


@@ -1,103 +0,0 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ptypes
import (
"errors"
"fmt"
"time"
timestamppb "github.com/golang/protobuf/ptypes/timestamp"
)
// Range of google.protobuf.Timestamp as specified in timestamp.proto.
const (
// Seconds field of the earliest valid Timestamp.
// This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
minValidSeconds = -62135596800
// Seconds field just after the latest valid Timestamp.
// This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
maxValidSeconds = 253402300800
)
// Timestamp converts a timestamppb.Timestamp to a time.Time.
// It returns an error if the argument is invalid.
//
// Unlike most Go functions, if Timestamp returns an error, the first return
// value is not the zero time.Time. Instead, it is the value obtained from the
// time.Unix function when passed the contents of the Timestamp, in the UTC
// locale. This may or may not be a meaningful time; many invalid Timestamps
// do map to valid time.Times.
//
// A nil Timestamp returns an error. The first return value in that case is
// undefined.
func Timestamp(ts *timestamppb.Timestamp) (time.Time, error) {
// Don't return the zero value on error, because it corresponds to a valid
// timestamp. Instead return whatever time.Unix gives us.
var t time.Time
if ts == nil {
t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
} else {
t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
}
return t, validateTimestamp(ts)
}
// TimestampNow returns a google.protobuf.Timestamp for the current time.
func TimestampNow() *timestamppb.Timestamp {
ts, err := TimestampProto(time.Now())
if err != nil {
panic("ptypes: time.Now() out of Timestamp range")
}
return ts
}
// TimestampProto converts the time.Time to a google.protobuf.Timestamp proto.
// It returns an error if the resulting Timestamp is invalid.
func TimestampProto(t time.Time) (*timestamppb.Timestamp, error) {
ts := &timestamppb.Timestamp{
Seconds: t.Unix(),
Nanos: int32(t.Nanosecond()),
}
if err := validateTimestamp(ts); err != nil {
return nil, err
}
return ts, nil
}
// TimestampString returns the RFC 3339 string for valid Timestamps.
// For invalid Timestamps, it returns an error message in parentheses.
func TimestampString(ts *timestamppb.Timestamp) string {
t, err := Timestamp(ts)
if err != nil {
return fmt.Sprintf("(%v)", err)
}
return t.Format(time.RFC3339Nano)
}
// validateTimestamp determines whether a Timestamp is valid.
// A valid timestamp represents a time in the range [0001-01-01, 10000-01-01)
// and has a Nanos field in the range [0, 1e9).
//
// If the Timestamp is valid, validateTimestamp returns nil.
// Otherwise, it returns an error that describes the problem.
//
// Every valid Timestamp can be represented by a time.Time,
// but the converse is not true.
func validateTimestamp(ts *timestamppb.Timestamp) error {
if ts == nil {
return errors.New("timestamp: nil Timestamp")
}
if ts.Seconds < minValidSeconds {
return fmt.Errorf("timestamp: %v before 0001-01-01", ts)
}
if ts.Seconds >= maxValidSeconds {
return fmt.Errorf("timestamp: %v after 10000-01-01", ts)
}
if ts.Nanos < 0 || ts.Nanos >= 1e9 {
return fmt.Errorf("timestamp: %v: nanos not in range [0, 1e9)", ts)
}
return nil
}

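A minimal round-trip sketch for `TimestampProto`, `TimestampString` and `Timestamp`:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	// time.Time -> timestamppb.Timestamp, with range validation
	ts, err := ptypes.TimestampProto(time.Date(2024, 4, 28, 12, 0, 0, 0, time.UTC))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ptypes.TimestampString(ts)) // 2024-04-28T12:00:00Z

	// timestamppb.Timestamp -> time.Time (always UTC)
	t, err := ptypes.Timestamp(ts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(t.Year()) // 2024
}
```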

@@ -1,64 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
package timestamp
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
)
// Symbols defined in public import of google/protobuf/timestamp.proto.
type Timestamp = timestamppb.Timestamp
var File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto protoreflect.FileDescriptor
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc = []byte{
0x0a, 0x3b, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c,
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79,
0x70, 0x65, 0x73, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2f, 0x74, 0x69,
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67,
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74,
0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x42, 0x37,
0x5a, 0x35, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x6c,
0x61, 0x6e, 0x67, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x70, 0x74, 0x79,
0x70, 0x65, 0x73, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x3b, 0x74, 0x69,
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x50, 0x00, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x33,
}
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes = []interface{}{}
var file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_init() }
func file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_init() {
if File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto != nil {
return
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc,
NumEnums: 0,
NumMessages: 0,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes,
DependencyIndexes: file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs,
}.Build()
File_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto = out.File
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_rawDesc = nil
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_goTypes = nil
file_github_com_golang_protobuf_ptypes_timestamp_timestamp_proto_depIdxs = nil
}


@@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2017 Jaime Pillora
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,119 +0,0 @@
# Backoff
A simple exponential backoff counter in Go (Golang)
[![GoDoc](https://godoc.org/github.com/jpillora/backoff?status.svg)](https://godoc.org/github.com/jpillora/backoff) [![Circle CI](https://circleci.com/gh/jpillora/backoff.svg?style=shield)](https://circleci.com/gh/jpillora/backoff)
### Install
```
$ go get -v github.com/jpillora/backoff
```
### Usage
Backoff is a `time.Duration` counter. It starts at `Min`. After every call to `Duration()` it is multiplied by `Factor`, but it is capped at `Max`. It returns to `Min` on every call to `Reset()`. `Jitter` adds randomness ([see below](#example-using-jitter)). It is designed to be used in conjunction with the `time` package.
---
#### Simple example
``` go
b := &backoff.Backoff{
//These are the defaults
Min: 100 * time.Millisecond,
Max: 10 * time.Second,
Factor: 2,
Jitter: false,
}
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
fmt.Printf("Reset!\n")
b.Reset()
fmt.Printf("%s\n", b.Duration())
```
```
100ms
200ms
400ms
Reset!
100ms
```
---
#### Example using `net` package
``` go
b := &backoff.Backoff{
Max: 5 * time.Minute,
}
for {
conn, err := net.Dial("tcp", "example.com:5309")
if err != nil {
d := b.Duration()
fmt.Printf("%s, reconnecting in %s", err, d)
time.Sleep(d)
continue
}
//connected
b.Reset()
conn.Write([]byte("hello world!"))
// ... Read ... Write ... etc
conn.Close()
//disconnected
}
```
---
#### Example using `Jitter`
Enabling `Jitter` adds some randomization to the backoff durations. [See Amazon's writeup of performance gains using jitter](http://www.awsarchitectureblog.com/2015/03/backoff.html). Seeding is not necessary but doing so gives repeatable results.
```go
import "math/rand"
b := &backoff.Backoff{
Jitter: true,
}
rand.Seed(42)
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
fmt.Printf("Reset!\n")
b.Reset()
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
fmt.Printf("%s\n", b.Duration())
```
```
100ms
106.600049ms
281.228155ms
Reset!
100ms
104.381845ms
214.957989ms
```
#### Documentation
https://godoc.org/github.com/jpillora/backoff
#### Credits
Forked from [some JavaScript](https://github.com/segmentio/backo) written by [@tj](https://github.com/tj)


@@ -1,100 +0,0 @@
// Package backoff provides an exponential-backoff implementation.
package backoff
import (
"math"
"math/rand"
"sync/atomic"
"time"
)
// Backoff is a time.Duration counter, starting at Min. After every call to
// the Duration method the current timing is multiplied by Factor, but it
// never exceeds Max.
//
// Backoff is not generally concurrent-safe, but the ForAttempt method can
// be used concurrently.
type Backoff struct {
attempt uint64
// Factor is the multiplying factor for each increment step
Factor float64
// Jitter eases contention by randomizing backoff steps
Jitter bool
// Min and Max are the minimum and maximum values of the counter
Min, Max time.Duration
}
// Duration returns the duration for the current attempt before incrementing
// the attempt counter. See ForAttempt.
func (b *Backoff) Duration() time.Duration {
d := b.ForAttempt(float64(atomic.AddUint64(&b.attempt, 1) - 1))
return d
}
const maxInt64 = float64(math.MaxInt64 - 512)
// ForAttempt returns the duration for a specific attempt. This is useful if
// you have a large number of independent Backoffs, but don't want to use
// unnecessary memory storing the Backoff parameters per Backoff. The first
// attempt should be 0.
//
// ForAttempt is concurrent-safe.
func (b *Backoff) ForAttempt(attempt float64) time.Duration {
// Zero-values are nonsensical, so we use
// them to apply defaults
min := b.Min
if min <= 0 {
min = 100 * time.Millisecond
}
max := b.Max
if max <= 0 {
max = 10 * time.Second
}
if min >= max {
// short-circuit
return max
}
factor := b.Factor
if factor <= 0 {
factor = 2
}
//calculate this duration
minf := float64(min)
durf := minf * math.Pow(factor, attempt)
if b.Jitter {
durf = rand.Float64()*(durf-minf) + minf
}
//ensure float64 won't overflow int64
if durf > maxInt64 {
return max
}
dur := time.Duration(durf)
//keep within bounds
if dur < min {
return min
}
if dur > max {
return max
}
return dur
}
// Reset restarts the current attempt counter at zero.
func (b *Backoff) Reset() {
atomic.StoreUint64(&b.attempt, 0)
}
// Attempt returns the current attempt counter value.
func (b *Backoff) Attempt() float64 {
return float64(atomic.LoadUint64(&b.attempt))
}
// Copy returns a backoff with the same constraints as the original
func (b *Backoff) Copy() *Backoff {
return &Backoff{
Factor: b.Factor,
Jitter: b.Jitter,
Min: b.Min,
Max: b.Max,
}
}

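A minimal sketch of `ForAttempt`, which can be consulted safely from many goroutines because it never touches the shared attempt counter (the parameter values below are illustrative):

```go
package main

import (
	"fmt"
	"time"

	"github.com/jpillora/backoff"
)

func main() {
	b := &backoff.Backoff{
		Min:    100 * time.Millisecond,
		Max:    10 * time.Second,
		Factor: 2,
	}

	// ForAttempt never mutates the internal attempt counter, so a single
	// Backoff value can be shared across goroutines.
	for attempt := 0.0; attempt < 4; attempt++ {
		fmt.Println(b.ForAttempt(attempt)) // 100ms, 200ms, 400ms, 800ms
	}
}
```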

@@ -1,3 +0,0 @@
module github.com/jpillora/backoff
go 1.13


@@ -1,9 +0,0 @@
(The MIT License)
Copyright (c) 2017 marvin + konsorten GmbH (open-source@konsorten.de)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,42 +0,0 @@
# Windows Terminal Sequences
This library allows enabling Windows terminal color support for Go.
See [Console Virtual Terminal Sequences](https://docs.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences) for details.
## Usage
```go
import (
"syscall"
sequences "github.com/konsorten/go-windows-terminal-sequences"
)
func main() {
sequences.EnableVirtualTerminalProcessing(syscall.Stdout, true)
}
```
## Authors
This library is sponsored by [marvin + konsorten GmbH](http://www.konsorten.de).
We thank all the authors who provided code to this library:
* Felix Kollmann
* Nicolas Perraut
* @dirty49374
## License
(The MIT License)
Copyright (c) 2018 marvin + konsorten GmbH (open-source@konsorten.de)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1 +0,0 @@
module github.com/konsorten/go-windows-terminal-sequences

Some files were not shown because too many files have changed in this diff.