Compare commits

...

83 Commits
v1.1.0 ... main

Author SHA1 Message Date
Adam Martin
4c68654424 update tablewriter to v1.1.2 (#512)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2026-02-14 11:55:54 -05:00
Eric Klatzer
8ecd87d944 fix for file:// dependency chart path resolutions (#510)
Signed-off-by: Eric Klatzer <eric@klatzer.at>
2026-02-14 11:43:30 -05:00
Adam Martin
a355898171 update cosign fork to 3.0.4 plus dep tidy (#509)
* update cosign fork to 3.0.4 plus dep tidy

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* update to cosign fork tag v3.0.4+hauler.2

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2026-02-12 22:07:53 -05:00
dependabot[bot]
3440b1a641 bump github.com/theupdateframework/go-tuf/v2 (#502)
bumps the go_modules group with 1 update in the / directory: [github.com/theupdateframework/go-tuf/v2](https://github.com/theupdateframework/go-tuf).

updates `github.com/theupdateframework/go-tuf/v2` from 2.3.1 to 2.4.1
- [Release notes](https://github.com/theupdateframework/go-tuf/releases)
- [Commits](https://github.com/theupdateframework/go-tuf/compare/v2.3.1...v2.4.1)

---

updated-dependencies:
- dependency-name: github.com/theupdateframework/go-tuf/v2
  dependency-version: 2.4.1
  dependency-type: indirect
  dependency-group: go_modules

...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-27 09:55:45 -05:00
dependabot[bot]
9081ac257b Bump github.com/sigstore/sigstore (#498)
bumps the go_modules group with 1 update in the / directory: [github.com/sigstore/sigstore](https://github.com/sigstore/sigstore).

updates `github.com/sigstore/sigstore` from 1.10.3 to 1.10.4
- [Release notes](https://github.com/sigstore/sigstore/releases)
- [Commits](https://github.com/sigstore/sigstore/compare/v1.10.3...v1.10.4)

---

updated-dependencies:
- dependency-name: github.com/sigstore/sigstore
  dependency-version: 1.10.4
  dependency-type: indirect
  dependency-group: go_modules

...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 16:18:01 -05:00
dependabot[bot]
a01895bfff bump github.com/sigstore/rekor (#497)
bumps the go_modules group with 1 update in the / directory: [github.com/sigstore/rekor](https://github.com/sigstore/rekor).

updates `github.com/sigstore/rekor` from 1.4.3 to 1.5.0
- [Release notes](https://github.com/sigstore/rekor/releases)
- [Changelog](https://github.com/sigstore/rekor/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sigstore/rekor/compare/v1.4.3...v1.5.0)

---

updated-dependencies:
- dependency-name: github.com/sigstore/rekor
  dependency-version: 1.5.0
  dependency-type: indirect
  dependency-group: go_modules

...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-22 14:25:27 -05:00
Zack Brady
e8a5f82b7d new fix for new helm chart features (#496) 2026-01-22 10:30:59 -05:00
dependabot[bot]
dffcb8254c bump github.com/theupdateframework/go-tuf/v2 (#495)
bumps the go_modules group with 1 update in the / directory: [github.com/theupdateframework/go-tuf/v2](https://github.com/theupdateframework/go-tuf).

updates `github.com/theupdateframework/go-tuf/v2` from 2.3.0 to 2.3.1
- [Release notes](https://github.com/theupdateframework/go-tuf/releases)
- [Commits](https://github.com/theupdateframework/go-tuf/compare/v2.3.0...v2.3.1)

---

updated-dependencies:
- dependency-name: github.com/theupdateframework/go-tuf/v2
  dependency-version: 2.3.1
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-21 13:24:06 -05:00
Zack Brady
4a2b7b13a7 fixed typos for containerd imports (#493) 2026-01-15 20:45:24 -05:00
Zack Brady
cf22fa8551 fix and support containerd imports of hauls (#492)
* fixed hauler save for containerd
* added flag for containerd compatibility
2026-01-15 09:09:23 -05:00
dependabot[bot]
28432fc057 bump github.com/sigstore/fulcio (#489)
bumps the go_modules group with 1 update in the / directory: [github.com/sigstore/fulcio](https://github.com/sigstore/fulcio).

updates `github.com/sigstore/fulcio` from 1.8.3 to 1.8.5
- [Release notes](https://github.com/sigstore/fulcio/releases)
- [Changelog](https://github.com/sigstore/fulcio/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sigstore/fulcio/compare/v1.8.3...v1.8.5)

---

updated-dependencies:
- dependency-name: github.com/sigstore/fulcio
  dependency-version: 1.8.5
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 23:57:10 -05:00
Zack Brady
ac7d82b55f added/updated logging for serve and remove (#487)
* added error logging for hauler store serve
* updated logging for hauler store remove to match others
* added more error logging for user responses
2026-01-12 16:36:37 -05:00
Zack Brady
ded947d609 added/fixed helm chart images/dependencies features (#485)
* added/fixed helm chart images/dependencies features
* added helm chart images/dependencies features to sync/manifests
* more fixes for helm chart images/dependencies features
* fixed tests for incorrect referenced images
* fixed sync for helm chart images/dependencies
* added helm chart image annotations and registry/platform features
* updated ordering of experimental
* added more parsing types for helm images/dependencies
* a few more remove artifacts updates

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2026-01-09 13:39:52 -05:00
Zack Brady
ff3cece87f more experimental feature updates (#486)
* updates for experimental features and renamed delete to remove
* added examples back for experimental features
* update stability warning message

Co-authored-by: Camryn Carter <camryn.carter@ranchergovernment.com>
Signed-off-by: Zack Brady <zackbrady123@gmail.com>

* fixed more tests to use ghcr for hauler
* updated test data workflow

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
Co-authored-by: Camryn Carter <camryn.carter@ranchergovernment.com>
2026-01-08 14:57:52 -05:00
Camryn Carter
c54065f316 add experimental notes (#483) 2026-01-08 00:59:23 -05:00
Zack Brady
382dea42a5 updated tempdir flag to store persistent flags (#484) 2026-01-07 08:31:40 -05:00
Camryn Carter
3c073688f3 delete artifacts from store (#473)
* WIP delete artifacts
* handle multiple matches
* confirm deletion
* --force flag for delete-artifact
* clean up remaining unreferenced blobs
* more robust handling of manifest structure
* fixed loop to process all layers
* tests pt 1
* fix tests
* test order
* updated tests for deleting chart and file
* tool cleanup tests

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2026-01-06 11:54:06 -05:00
Camryn Carter
96bab7b81f path rewrites (#475)
* image reference rewrite
* remove need for rewrite-provider
* handle charts
* handle leading slash and missing tags
* tests
* tool cleanup
* removed intermediate store cleanup
* fix?
* test cleanup
* clean up tools folder again
* debug test
* clear tempdir after each filename provided by load

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* update tar command in load integration test

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* clean up debug prints
* clean up debug prints

* fix typo

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Signed-off-by: Zack Brady <zackbrady123@gmail.com>
Co-authored-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2026-01-06 11:48:49 -05:00
Zack Brady
5ea9b29b8f updated/fixed workflow dependency versions (#478)
* updated/fixed workflow dependency versions
* added hosted tools cache clean up
2026-01-02 14:35:10 -05:00
Adam Martin
15867e84ad bump to latest cosign fork release (#481)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-12-12 14:25:40 -05:00
dependabot[bot]
c5da018450 Bump golang.org/x/crypto in the go_modules group across 1 directory (#476)
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto).


Updates `golang.org/x/crypto` from 0.43.0 to 0.45.0
- [Commits](https://github.com/golang/crypto/compare/v0.43.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-12 10:39:52 -05:00
dependabot[bot]
5edc8802ee bump github.com/containerd/containerd (#474)
bumps the go_modules group with 1 update in the / directory: [github.com/containerd/containerd](https://github.com/containerd/containerd).

updates `github.com/containerd/containerd` from 1.7.28 to 1.7.29
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.28...v1.7.29)

---

updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-version: 1.7.29
  dependency-type: direct:production
  dependency-group: go_modules

...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-06 18:03:19 -05:00
Zack Brady
a3d62b204f another fix to tests for new tests (#472) 2025-10-27 18:40:05 -04:00
Zack Brady
d85a1b0775 fixed typo in testdata (#471)
signed-off-by: zack brady <zackbrady123@gmail.com>
2025-10-27 18:14:58 -04:00
Zack Brady
ea10bc0256 fixed/cleaned new tests (#470) 2025-10-27 18:00:01 -04:00
Zack Brady
1aea670588 trying a new way for hauler testing (#467) 2025-10-27 14:31:32 -04:00
Adam Martin
f1a632a207 update for cosign v3 verify (#469)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-10-24 17:07:49 -04:00
Zack Brady
802e062f47 added digests view to info (#465)
* added digests view to info
* updated digests to be more accurate
2025-10-23 15:08:45 -04:00
dependabot[bot]
d227e1f18f bump github.com/nwaples/rardecode/v2 from 2.1.1 to 2.2.0 in the go_modules group across 1 directory (#457)
* Bump github.com/nwaples/rardecode/v2

Bumps the go_modules group with 1 update in the / directory: [github.com/nwaples/rardecode/v2](https://github.com/nwaples/rardecode).


Updates `github.com/nwaples/rardecode/v2` from 2.1.1 to 2.2.0
- [Commits](https://github.com/nwaples/rardecode/compare/v2.1.1...v2.2.0)

---
updated-dependencies:
- dependency-name: github.com/nwaples/rardecode/v2
  dependency-version: 2.2.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>

* fixed versions/dependencies for rardecode and archives

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-10-22 18:55:51 -04:00
Adam Martin
33a9bb3f78 update oras-go to v1.2.7 for security patches (#464)
signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-10-22 13:42:25 -04:00
Adam Martin
344c008607 update cosign to v3.0.2+hauler.1 (#463)
signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-10-22 12:48:56 -04:00
Zack Brady
09a149dab6 fixed homebrew directory deprecation (#462) 2025-10-21 19:44:43 -04:00
Garret Noling
f7f1e2db8f add registry logout command (#460)
* add registry logout command
* moved logout/credential removal tests to the end
2025-10-21 19:31:24 -04:00
dependabot[bot]
0fafca87f9 bump the go_modules group across 1 directory with 2 updates (#455)
bumps the go_modules group with 2 updates in the / directory: [github.com/docker/docker](https://github.com/docker/docker) and [github.com/go-viper/mapstructure/v2](https://github.com/go-viper/mapstructure).

updates `github.com/docker/docker` from 28.2.2+incompatible to 28.3.3+incompatible
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.2.2...v28.3.3)

updates `github.com/go-viper/mapstructure/v2` from 2.3.0 to 2.4.0
- [Release notes](https://github.com/go-viper/mapstructure/releases)
- [Changelog](https://github.com/go-viper/mapstructure/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-viper/mapstructure/compare/v2.3.0...v2.4.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.3+incompatible
  dependency-type: indirect
  dependency-group: go_modules
- dependency-name: github.com/go-viper/mapstructure/v2
  dependency-version: 2.4.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 16:04:36 -04:00
Zack Brady
38e676e934 upgraded versions/dependencies/deprecations (#454)
* upgrade go versions/dependencies
* updated deprecated goreleaser code for homebrew
* updated deprecated goreleaser code for docker
2025-10-02 14:23:22 -04:00
Zack Brady
369c85bab9 allow loading of docker tarballs (#452)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-10-01 11:56:36 -04:00
dependabot[bot]
acbd1f1b6a bump the go_modules group across 1 directory with 2 updates (#449)
bumps the go_modules group with 2 updates in the / directory: [helm.sh/helm/v3](https://github.com/helm/helm) and [github.com/docker/docker](https://github.com/docker/docker).

updates `helm.sh/helm/v3` from 3.18.4 to 3.18.5
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.18.4...v3.18.5)

updates `github.com/docker/docker` from 27.5.0+incompatible to 28.0.0+incompatible
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v27.5.0...v28.0.0)

---

updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.18.5
  dependency-type: direct:production
  dependency-group: go_modules
- dependency-name: github.com/docker/docker
  dependency-version: 28.0.0+incompatible
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 21:24:58 -04:00
Zack Brady
3e44c53b75 upgraded go and dependencies versions (#444)
* upgraded go version
* upgraded dependencies
2025-07-11 11:49:01 -04:00
dependabot[bot]
062bb3ff2c Bump github.com/go-viper/mapstructure/v2 (#442)
bumps the go_modules group with 1 update in the / directory: [github.com/go-viper/mapstructure/v2](https://github.com/go-viper/mapstructure).

Updates `github.com/go-viper/mapstructure/v2` from 2.2.1 to 2.3.0
- [Release notes](https://github.com/go-viper/mapstructure/releases)
- [Changelog](https://github.com/go-viper/mapstructure/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-viper/mapstructure/compare/v2.2.1...v2.3.0)

---
updated-dependencies:
- dependency-name: github.com/go-viper/mapstructure/v2
  dependency-version: 2.3.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-02 14:37:59 -04:00
dependabot[bot]
c8b4e80371 bump github.com/cloudflare/circl (#441)
bumps the go_modules group with 1 update in the / directory: [github.com/cloudflare/circl](https://github.com/cloudflare/circl).


Updates `github.com/cloudflare/circl` from 1.3.7 to 1.6.1
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.3.7...v1.6.1)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-version: 1.6.1
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-11 00:45:29 -04:00
Zack Brady
d86957bf20 deprecate auth from hauler store copy (#440) 2025-06-05 14:07:53 -04:00
dependabot[bot]
4a6fc8cec2 Bump github.com/open-policy-agent/opa (#438)
Bumps the go_modules group with 1 update in the / directory: [github.com/open-policy-agent/opa](https://github.com/open-policy-agent/opa).


Updates `github.com/open-policy-agent/opa` from 1.1.0 to 1.4.0
- [Release notes](https://github.com/open-policy-agent/opa/releases)
- [Changelog](https://github.com/open-policy-agent/opa/blob/main/CHANGELOG.md)
- [Commits](https://github.com/open-policy-agent/opa/compare/v1.1.0...v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/open-policy-agent/opa
  dependency-version: 1.4.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-06 12:34:17 -04:00
Zack Brady
e089c31879 minor tests updates (#437) 2025-05-01 11:07:00 -04:00
dependabot[bot]
b7b599e6ed bump golang.org/x/net in the go_modules group across 1 directory (#433)
bumps the go_modules group with 1 update in the / directory: [golang.org/x/net](https://github.com/golang/net).

Updates `golang.org/x/net` from 0.37.0 to 0.38.0
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-04-30 22:21:37 -04:00
Adam Martin
ea53002f3a formatting and flag text updates
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-04-22 12:43:04 -04:00
Nathaniel Churchill
4d0f779ae6 add keyless signature verification (#434) 2025-04-21 15:27:17 -04:00
dependabot[bot]
4d0b407452 bump helm.sh/helm/v3 in the go_modules group across 1 directory (#430)
bumps the go_modules group with 1 update in the / directory: [helm.sh/helm/v3](https://github.com/helm/helm).


Updates `helm.sh/helm/v3` from 3.17.0 to 3.17.3
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.17.0...v3.17.3)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.17.3
  dependency-type: direct:production
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 00:16:05 -05:00
Adam Martin
3b96a95a94 add --only flag to hauler store copy (for images) (#429)
* bump cosign fork and tidy

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* add --only to hauler store copy

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-04-08 09:27:12 -04:00
Adam Toy
f9a188259f fix tlog verification error/warning output (#428)
* fix tlog verification error/warning output
* extend error msg to avoid ambiguity
2025-04-04 16:51:26 -05:00
Adam Martin
5021f3ab6b cleanup new tlog flag typos and add shorthand (#426)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-03-28 13:15:48 -04:00
Adam Martin
db065a1088 default public transparency log verification to false to be airgap friendly but allow override (#425)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-03-27 09:13:23 -04:00
dependabot[bot]
01bf58de03 bump github.com/golang-jwt/jwt/v4 (#423)
bumps the go_modules group with 1 update in the / directory: [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt).


Updates `github.com/golang-jwt/jwt/v4` from 4.5.1 to 4.5.2
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 15:09:34 -04:00
dependabot[bot]
38b979d0c5 bump the go_modules group across 1 directory with 2 updates (#422)
bumps the go_modules group with 2 updates in the / directory: [github.com/containerd/containerd](https://github.com/containerd/containerd) and [golang.org/x/net](https://github.com/golang/net).


Updates `github.com/containerd/containerd` from 1.7.24 to 1.7.27
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.24...v1.7.27)

Updates `golang.org/x/net` from 0.33.0 to 0.36.0
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
  dependency-group: go_modules
- dependency-name: golang.org/x/net
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 11:52:25 -04:00
dependabot[bot]
7de20a1f15 bump github.com/go-jose/go-jose/v3 (#417)
bumps the go_modules group with 1 update in the / directory: [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose).

Updates `github.com/go-jose/go-jose/v3` from 3.0.3 to 3.0.4
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.3...v3.0.4)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-19 23:14:27 -04:00
dependabot[bot]
088fde5aa9 bump github.com/go-jose/go-jose/v4 (#415)
bumps the go_modules group with 1 update in the / directory: [github.com/go-jose/go-jose/v4](https://github.com/go-jose/go-jose).

updates `github.com/go-jose/go-jose/v4` from 4.0.4 to 4.0.5
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v4.0.4...v4.0.5)

---

updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v4
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-28 03:29:02 -05:00
Adam Martin
eb275b9690 clear default manifest name if product flag used with sync (#412)
* clear default manifest name if product flag used with sync
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* only clear the default if the user did NOT explicitly set --filename
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-02-07 13:46:23 -05:00
Zack Brady
7d28df1949 updates for v1.2.0 (#408)
* updates for v1.2.0
* fixed readme typo
2025-02-05 13:26:34 -05:00
Zack Brady
08f566fb28 fixed remote code (#407) 2025-02-05 11:25:11 -05:00
Zack Brady
c465d2c143 added remote file fetch to load (#406)
* updated load flag and related logs [breaking change]
* fixed load flag typo
* added remote file fetch to load
* fixed merge/rebase typo

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:32:35 -05:00
Zack Brady
39325585eb added remote and multiple file fetch to sync (#405)
* updated sync flag and related logs [breaking change]
* fixed typo in sync flag name
* and the last typo fix...
* added remote and multiple file fetch to sync

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:13:38 -05:00
Zack Brady
535a82c1b5 updated save flag and related logs (#404)
Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:06:13 -05:00
Zack Brady
53cf953750 updated load flag and related logs [breaking change] (#403)
* updated load flag and related logs [breaking change]
* fixed load flag typo

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:01:18 -05:00
Zack Brady
ff144b1180 updated sync flag and related logs [breaking change] (#402)
* updated sync flag and related logs [breaking change]
* fixed typo in sync flag name
* and the last typo fix...
2025-02-05 08:54:54 -05:00
Zack Brady
938914ba5c upgraded api update to v1/updated dependencies (#400)
* initial api update to v1
* updated dependencies
* updated manifests
* last bit of updates
* fixed manifest typo
* added deprecation warning
2025-02-04 21:36:20 -05:00
Zack Brady
603249dea9 fixed consts for oci declarations (#398) 2025-02-03 08:14:44 -05:00
Adam Martin
37032f5379 fix for correctly grabbing platform post cosign 2.4 updates (#393) 2025-01-27 18:44:11 -05:00
Adam Martin
ec9ac48476 use cosign v2.4.1+carbide.2 to address containerd annotation in index.json (#390) 2025-01-19 10:32:12 -05:00
dependabot[bot]
5f5cd64c2f Bump the go_modules group across 1 directory with 2 updates (#385) 2025-01-10 19:59:36 -05:00
Adam Martin
882713b725 replace mholt/archiver with mholt/archives (#384) 2025-01-10 19:15:00 -05:00
Adam Martin
a20d7bf950 forked cosign bump to 2.4.1 and use as a library vs embedded binary (#383)
* only ignore project-root/store not all store paths
* remove embedded cosign binary in favor of fork library
2025-01-10 17:14:44 -05:00
Zack Brady
e97adcdfed cleaned up registry and improved logging (#378)
* cleaned up registry and improved logging
* last bit of logging improvements/import clean up

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-01-09 15:21:03 -05:00
dependabot[bot]
cc17b030a9 Bump golang.org/x/crypto in the go_modules group across 1 directory (#377)
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto).

Updates `golang.org/x/crypto` from 0.28.0 to 0.31.0
- [Commits](https://github.com/golang/crypto/compare/v0.28.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-14 01:02:58 -05:00
Zack Brady
090f4dc905 fixed cli desc for store env var (#374) 2024-12-09 08:13:33 -05:00
Zack Brady
74aa40c69b updated versions for go/k8s/helm (#373) 2024-12-04 16:10:36 -05:00
Zack Brady
f5e3b38a6d updated version flag to internal/flags (#369) 2024-12-04 14:29:51 -05:00
Zack Brady
01faf396bb renamed incorrectly named consts (#371) 2024-12-04 14:29:26 -05:00
Zack Brady
235218cfff added store env var (#370)
* added strictmode flag and consts
* updated code with strictmode
* added flag for retryoperation
* updated registry short flag
* added store env var
* added/fixed more env var code

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-12-04 14:26:53 -05:00
Zack Brady
4270a27819 adding ignore errors and retries for continue on error/fail on error (#368)
* added strictmode flag and consts
* updated code with strictmode
* added flag for retryoperation
* updated registry short flag
* updated strictmode to ignore errors
* fixed command description
* cleaned up error/debug logs
2024-12-02 17:18:58 -05:00
Zack Brady
1b77295438 updated/fixed hauler directory (#354)
* added env variables for haulerDir/tempDir
* updated hauler directory for the install script
* cleanup/fixes for install script
* updated variables based on feedback
* revert "updated variables based on feedback"
  * reverts commit 54f7a4d695
* minor restructure to root flags
* updated logic to include haulerdir flag
* cleaned up/formatted new logic
* more cleanup and formatting
2024-11-14 21:37:25 -05:00
Zack Brady
38c7d1b17a standardize consts (#353)
* removed k3s code
* standardize and formatted consts
* fixed consts for sync.go
* trying another fix for sync

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-11-06 18:31:06 -05:00
Zack Brady
2fa6c36208 removed cachedir code (#355)
co-authored-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
2024-11-06 14:56:33 -05:00
Zack Brady
dd50ed9dba removed k3s code (#352) 2024-11-06 10:46:38 -07:00
Zack Brady
fb100a27ac updated dependencies for go, helm, and k8s (#351)
* updated dependencies for go, helm, and k8s
* fixed dependency version for k8s
2024-11-01 17:06:13 -04:00
85 changed files with 4629 additions and 1819 deletions

View File

@@ -1,40 +1,51 @@
# Simple workflow for deploying static content to GitHub Pages
name: 📋
name: Pages Workflow
on:
workflow_dispatch:
push:
branches:
- main
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
# Single deploy job since we're just deploying
deploy:
deploy-pages:
name: Deploy GitHub Pages
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Clean Up Actions Tools Cache
run: rm -rf /opt/hostedtoolcache
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6.0.1
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Upload artifact
- name: Upload Pages Artifacts
uses: actions/upload-pages-artifact@v3
with:
path: './static'
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4

View File

@@ -11,9 +11,13 @@ jobs:
name: GoReleaser Job
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Clean Up Actions Tools Cache
run: rm -rf /opt/hostedtoolcache
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6.0.1
with:
fetch-depth: 0
@@ -23,37 +27,37 @@ jobs:
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Set Up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6.1.0
with:
go-version-file: go.mod
check-latest: true
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v3.7.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v3.11.1
- name: Authenticate to GitHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Authenticate to DockerHub Container Registry
uses: docker/login-action@v3
uses: docker/login-action@v3.6.0
with:
registry: docker.io
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v6
uses: goreleaser/goreleaser-action@v6.4.0
with:
distribution: goreleaser
version: "~> v2"
args: "release --clean --parallelism 1 --timeout 60m"
args: "release --clean --timeout 60m"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
HOMEBREW_TAP_GITHUB_TOKEN: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}

.github/workflows/testdata.yaml vendored Normal file
View File

@@ -0,0 +1,41 @@
name: Refresh Hauler Testdata
on:
workflow_dispatch:
jobs:
refresh-testdata:
name: Refresh Hauler Testdata
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Clean Up Actions Tools Cache
run: rm -rf /opt/hostedtoolcache
- name: Checkout Repository
uses: actions/checkout@v6.0.1
- name: Fetch Hauler Binary
run: curl -sfL https://get.hauler.dev | bash
- name: Login to GitHub Container Registry and Docker Hub Container Registry
run: |
hauler login ghcr.io --username ${{ github.repository_owner }} --password ${{ secrets.GITHUB_TOKEN }}
hauler login docker.io --username ${{ secrets.DOCKERHUB_USERNAME }} --password ${{ secrets.DOCKERHUB_TOKEN }}
- name: Process Images for Tests
run: |
hauler store add image nginx:1.25-alpine
hauler store add image nginx:1.26-alpine
hauler store add image busybox
hauler store add image busybox:stable
hauler store add image gcr.io/distroless/base
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
- name: Push Store Contents to Hauler-Dev GitHub Container Registry
run: |
hauler store copy registry://ghcr.io/${{ github.repository_owner }}
- name: Verify Hauler Store Contents
run: hauler store info

View File

@@ -14,9 +14,13 @@ jobs:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Clean Up Actions Tools Cache
run: rm -rf /opt/hostedtoolcache
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6.0.1
with:
fetch-depth: 0
@@ -26,13 +30,13 @@ jobs:
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Set Up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6.1.0
with:
go-version-file: go.mod
check-latest: true
- name: Install Go Releaser
uses: goreleaser/goreleaser-action@v6
uses: goreleaser/goreleaser-action@v6.4.0
with:
install-only: true
@@ -47,13 +51,13 @@ jobs:
make build-all
- name: Upload Hauler Binaries
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v4.6.2
with:
name: hauler-binaries
path: dist/*
- name: Upload Coverage Report
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v4.6.2
with:
name: coverage-report
path: coverage.out
@@ -63,9 +67,13 @@ jobs:
runs-on: ubuntu-latest
needs: [unit-tests]
timeout-minutes: 30
steps:
- name: Clean Up Actions Tools Cache
run: rm -rf /opt/hostedtoolcache
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v6.0.1
with:
fetch-depth: 0
@@ -81,7 +89,7 @@ jobs:
sudo apt-get install -y tree
- name: Download Artifacts
uses: actions/download-artifact@v4
uses: actions/download-artifact@v6.0.0
with:
name: hauler-binaries
path: dist
@@ -114,12 +122,8 @@ jobs:
- name: Verify - hauler login
run: |
hauler login --help
hauler login docker.io --username bob --password haulin
echo "hauler" | hauler login docker.io -u bob --password-stdin
- name: Remove Hauler Store Credentials
run: |
rm -rf ~/.docker/config.json
hauler login docker.io --username ${{ secrets.DOCKERHUB_USERNAME }} --password ${{ secrets.DOCKERHUB_TOKEN }}
echo ${{ secrets.GITHUB_TOKEN }} | hauler login ghcr.io --username ${{ github.repository_owner }} --password-stdin
- name: Verify - hauler store
run: |
@@ -150,6 +154,21 @@ jobs:
# verify via the hauler store contents
hauler store info
- name: Verify - hauler store add chart --rewrite
run: |
# add chart with rewrite flag
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.4 --rewrite custom-path/rancher:2.8.4
# verify new ref in store
hauler store info | grep 'custom-path/rancher:2.8.4'
# confirm leading slash trimmed from rewrite
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.4 --rewrite /custom-path/rancher:2.8.4
# verify no leading slash
! hauler store info | grep '/custom-path/rancher:2.8.4'
# confirm old tag used if not specified
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.4 --rewrite /custom-path/rancher
# confirm tag
hauler store info | grep '2.8.4'
- name: Verify - hauler store add file
run: |
hauler store add file --help
@@ -166,14 +185,29 @@ jobs:
run: |
hauler store add image --help
# verify via image reference
hauler store add image busybox
hauler store add image ghcr.io/hauler-dev/library/busybox
# verify via image reference with version and platform
hauler store add image busybox:stable --platform linux/amd64
hauler store add image ghcr.io/hauler-dev/library/busybox:stable --platform linux/amd64
# verify via image reference with full reference
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
hauler store add image ghcr.io/hauler-dev/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
# verify via the hauler store contents
hauler store info
- name: Verify - hauler store add image --rewrite
run: |
# add image with rewrite flag
hauler store add image ghcr.io/hauler-dev/library/busybox --rewrite custom-registry.io/custom-path/busybox:latest
# verify new ref in store
hauler store info | grep 'custom-registry.io/custom-path/busybox:latest'
# confirm leading slash trimmed from rewrite
hauler store add image ghcr.io/hauler-dev/library/busybox --rewrite /custom-path/busybox:latest
# verify no leading slash
! hauler store info | grep '/custom-path/busybox:latest'
# confirm old tag used if not specified
hauler store add image ghcr.io/hauler-dev/library/busybox:stable --rewrite /custom-path/busybox
# confirm tag
hauler store info | grep ':stable'
- name: Verify - hauler store copy
run: |
hauler store copy --help
@@ -222,11 +256,15 @@ jobs:
run: |
hauler store load --help
# verify via load
hauler store load haul.tar.zst
hauler store load
# verify via load with multiple files
hauler store load --filename haul.tar.zst --filename store.tar.zst
# confirm store contents
tar -xOf store.tar.zst index.json
# verify via load with filename and temp directory
hauler store load store.tar.zst --tempdir /opt
hauler store load --filename store.tar.zst --tempdir /opt
# verify via load with filename and platform (amd64)
hauler store load store-amd64.tar.zst
hauler store load --filename store-amd64.tar.zst
- name: Verify Hauler Store Contents
run: |
@@ -255,10 +293,10 @@ jobs:
- name: Verify - hauler store sync
run: |
hauler store sync --help
# download local helm repository
curl -sfOL https://github.com/rancherfederal/rancher-cluster-templates/releases/download/rancher-cluster-templates-0.5.2/rancher-cluster-templates-0.5.2.tgz
# verify via sync
hauler store sync --files testdata/hauler-manifest-pipeline.yaml
hauler store sync --filename testdata/hauler-manifest-pipeline.yaml
# verify via sync with multiple files
hauler store sync --filename testdata/hauler-manifest-pipeline.yaml --filename testdata/hauler-manifest.yaml
# need more tests here
- name: Verify - hauler store serve
@@ -316,6 +354,105 @@ jobs:
# verify fileserver directory structure
tree -hC fileserver
- name: Verify - hauler store remove (image)
run: |
hauler store remove --help
# add test images
hauler store add image ghcr.io/hauler-dev/library/nginx:1.25-alpine
hauler store add image ghcr.io/hauler-dev/library/nginx:1.26-alpine
# confirm artifacts
hauler store info | grep 'nginx:1.25'
hauler store info | grep 'nginx:1.26'
# count blobs before delete
BLOBS_BEFORE=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs before deletion: $BLOBS_BEFORE"
# delete one artifact
hauler store remove nginx:1.25 --force
# verify artifact removed
! hauler store info | grep -q "nginx:1.25"
# non-deleted artifact exists
hauler store info | grep -q "nginx:1.26"
# count blobs after delete
BLOBS_AFTER=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs after deletion: $BLOBS_AFTER"
# verify only unreferenced blobs removed
if [ "$BLOBS_AFTER" -ge "$BLOBS_BEFORE" ]; then
echo "ERROR: No blobs were cleaned up"
exit 1
fi
if [ "$BLOBS_AFTER" -eq 0 ]; then
echo "ERROR: All blobs deleted (shared layers removed)"
exit 1
fi
# verify remaining image not missing layers
hauler store extract ghcr.io/hauler-dev/library/nginx:1.26-alpine
- name: Verify - hauler store remove (chart)
run: |
hauler store remove --help
# add test images
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.4
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.5
# confirm artifacts
hauler store info | grep '2.8.4'
hauler store info | grep '2.8.5'
# count blobs before delete
BLOBS_BEFORE=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs before deletion: $BLOBS_BEFORE"
# delete one artifact
hauler store remove 2.8.4 --force
# verify artifact removed
! hauler store info | grep -q "2.8.4"
# non-deleted artifact exists
hauler store info | grep -q "2.8.5"
# count blobs after delete
BLOBS_AFTER=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs after deletion: $BLOBS_AFTER"
# verify only unreferenced blobs removed
if [ "$BLOBS_AFTER" -ge "$BLOBS_BEFORE" ]; then
echo "ERROR: No blobs were cleaned up"
exit 1
fi
if [ "$BLOBS_AFTER" -eq 0 ]; then
echo "ERROR: All blobs deleted (shared layers removed)"
exit 1
fi
# verify remaining image not missing layers
hauler store extract hauler/rancher:2.8.5
- name: Verify - hauler store remove (file)
run: |
hauler store remove --help
# add test images
hauler store add file https://get.hauler.dev
hauler store add file https://get.rke2.io/install.sh
# confirm artifacts
hauler store info | grep 'get.hauler.dev'
hauler store info | grep 'install.sh'
# count blobs before delete
BLOBS_BEFORE=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs before deletion: $BLOBS_BEFORE"
# delete one artifact
hauler store remove get.hauler.dev --force
# verify artifact removed
! hauler store info | grep -q "get.hauler.dev"
# non-deleted artifact exists
hauler store info | grep -q "install.sh"
# count blobs after delete
BLOBS_AFTER=$(find store/blobs/sha256 -type f | wc -l | xargs)
echo "blobs after deletion: $BLOBS_AFTER"
# verify only unreferenced blobs removed
if [ "$BLOBS_AFTER" -ge "$BLOBS_BEFORE" ]; then
echo "ERROR: No blobs were cleaned up"
exit 1
fi
if [ "$BLOBS_AFTER" -eq 0 ]; then
echo "ERROR: All blobs deleted (shared layers removed)"
exit 1
fi
# verify remaining image not missing layers
hauler store extract hauler/install.sh:latest
- name: Create Hauler Report
run: |
hauler version >> hauler-report.txt
@@ -327,7 +464,17 @@ jobs:
hauler store info
- name: Upload Hauler Report
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v4.6.2
with:
name: hauler-report
path: hauler-report.txt
- name: Verify - hauler logout
run: |
hauler logout --help
hauler logout docker.io
hauler logout ghcr.io
- name: Remove Hauler Store Credentials
run: |
rm -rf ~/.docker/config.json

.gitignore vendored
View File

@@ -10,7 +10,7 @@
dist/
tmp/
bin/
store/
/store/
registry/
fileserver/
cmd/hauler/binaries

View File

@@ -3,15 +3,11 @@ version: 2
project_name: hauler
before:
hooks:
- rm -rf cmd/hauler/binaries
- mkdir -p cmd/hauler/binaries
- touch cmd/hauler/binaries/file
- go mod tidy
- go mod download
- go fmt ./...
- go vet ./...
- go test ./... -cover -race -covermode=atomic -coverprofile=coverage.out
- rm -rf cmd/hauler/binaries
release:
prerelease: auto
@@ -32,11 +28,6 @@ builds:
- arm64
ldflags:
- -s -w -X {{ .Env.vpkg }}.gitVersion={{ .Version }} -X {{ .Env.vpkg }}.gitCommit={{ .ShortCommit }} -X {{ .Env.vpkg }}.gitTreeState={{if .IsGitDirty}}dirty{{else}}clean{{end}} -X {{ .Env.vpkg }}.buildDate={{ .Date }}
hooks:
pre:
- wget -P cmd/hauler/binaries/ https://github.com/hauler-dev/cosign/releases/download/{{ .Env.cosign_version }}/cosign-{{ .Os }}-{{ .Arch }}{{ if eq .Os "windows" }}.exe{{ end }}
post:
- rm -rf cmd/hauler/binaries
env:
- CGO_ENABLED=0
- GOEXPERIMENT=boringcrypto
@@ -48,83 +39,53 @@ changelog:
disable: false
use: git
brews:
homebrew_casks:
- name: hauler
repository:
owner: hauler-dev
name: homebrew-tap
token: "{{ .Env.HOMEBREW_TAP_GITHUB_TOKEN }}"
directory: Formula
description: "Hauler CLI"
description: "Hauler: Airgap Swiss Army Knife"
dockers:
- id: hauler-amd64
goos: linux
goarch: amd64
use: buildx
dockers_v2:
- id: hauler
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/amd64"
flags:
- "--target=release"
image_templates:
- "docker.io/hauler/hauler-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-amd64:{{ .Version }}"
- id: hauler-arm64
goos: linux
goarch: arm64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/arm64"
- "--target=release"
image_templates:
- "docker.io/hauler/hauler-arm64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-arm64:{{ .Version }}"
- id: hauler-debug-amd64
goos: linux
goarch: amd64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/amd64"
- "--target=debug"
image_templates:
- "docker.io/hauler/hauler-debug-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-amd64:{{ .Version }}"
- id: hauler-debug-arm64
goos: linux
goarch: arm64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/arm64"
- "--target=debug"
image_templates:
- "docker.io/hauler/hauler-debug-arm64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-arm64:{{ .Version }}"
images:
- docker.io/hauler/hauler
- ghcr.io/hauler-dev/hauler
tags:
- "{{ .Version }}"
platforms:
- linux/amd64
- linux/arm64
labels:
"classification": "UNCLASSIFIED"
"org.opencontainers.image.created": "{{.Date}}"
"org.opencontainers.image.description": "Hauler: Airgap Swiss Army Knife"
"org.opencontainers.image.name": "{{.ProjectName}}-debug"
"org.opencontainers.image.revision": "{{.FullCommit}}"
"org.opencontainers.image.source": "{{.GitURL}}"
"org.opencontainers.image.version": "{{.Version}}"
docker_manifests:
- id: hauler-docker
use: docker
name_template: "docker.io/hauler/hauler:{{ .Version }}"
image_templates:
- "docker.io/hauler/hauler-amd64:{{ .Version }}"
- "docker.io/hauler/hauler-arm64:{{ .Version }}"
- id: hauler-ghcr
use: docker
name_template: "ghcr.io/hauler-dev/hauler:{{ .Version }}"
image_templates:
- "ghcr.io/hauler-dev/hauler-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-arm64:{{ .Version }}"
- id: hauler-debug-docker
use: docker
name_template: "docker.io/hauler/hauler-debug:{{ .Version }}"
image_templates:
- "docker.io/hauler/hauler-debug-amd64:{{ .Version }}"
- "docker.io/hauler/hauler-debug-arm64:{{ .Version }}"
- id: hauler-debug-ghcr
use: docker
name_template: "ghcr.io/hauler-dev/hauler-debug:{{ .Version }}"
image_templates:
- "ghcr.io/hauler-dev/hauler-debug-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-arm64:{{ .Version }}"
- id: hauler-debug
dockerfile: Dockerfile
flags:
- "--target=debug"
images:
- docker.io/hauler/hauler-debug
- ghcr.io/hauler-dev/hauler-debug
tags:
- "{{ .Version }}"
platforms:
- linux/amd64
- linux/arm64
labels:
"classification": "UNCLASSIFIED"
"org.opencontainers.image.created": "{{.Date}}"
"org.opencontainers.image.description": "Hauler: Airgap Swiss Army Knife"
"org.opencontainers.image.name": "{{.ProjectName}}-debug"
"org.opencontainers.image.revision": "{{.FullCommit}}"
"org.opencontainers.image.source": "{{.GitURL}}"
"org.opencontainers.image.version": "{{.Version}}"

View File

@@ -1,8 +1,9 @@
# builder stage
FROM registry.suse.com/bci/bci-base:15.5 AS builder
FROM registry.suse.com/bci/bci-base:15.7 AS builder
ARG TARGETPLATFORM
# fetched from goreleaser build proccess
COPY hauler /hauler
# fetched from goreleaser build process
COPY $TARGETPLATFORM/hauler /hauler
RUN echo "hauler:x:1001:1001::/home/hauler:" > /etc/passwd \
&& echo "hauler:x:1001:hauler" > /etc/group \
@@ -39,4 +40,4 @@ COPY --from=builder --chown=hauler:hauler /hauler /usr/local/bin/hauler
RUN apk --no-cache add curl
USER hauler
WORKDIR /home/hauler
WORKDIR /home/hauler

View File

@@ -10,28 +10,24 @@ GO_COVERPROFILE=coverage.out
# set build variables
BIN_DIRECTORY=bin
DIST_DIRECTORY=dist
BINARIES_DIRECTORY=cmd/hauler/binaries
# local build of hauler for current platform
# references/configuration from .goreleaser.yaml
build:
goreleaser build --clean --snapshot --parallelism 1 --timeout 60m --single-target
goreleaser build --clean --snapshot --timeout 60m --single-target
# local build of hauler for all platforms
# references/configuration from .goreleaser.yaml
build-all:
goreleaser build --clean --snapshot --parallelism 1 --timeout 60m
goreleaser build --clean --snapshot --timeout 60m
# local release of hauler for all platforms
# references/configuration from .goreleaser.yaml
release:
goreleaser release --clean --snapshot --parallelism 1 --timeout 60m
goreleaser release --clean --snapshot --timeout 60m
# install dependencies
install:
rm -rf $(BINARIES_DIRECTORY)
mkdir -p $(BINARIES_DIRECTORY)
touch cmd/hauler/binaries/file
go mod tidy
go mod download
CGO_ENABLED=0 go install ./cmd/...
@@ -50,4 +46,4 @@ test:
# cleanup artifacts
clean:
rm -rf $(BIN_DIRECTORY) $(BINARIES_DIRECTORY) $(DIST_DIRECTORY) $(GO_COVERPROFILE)
rm -rf $(BIN_DIRECTORY) $(DIST_DIRECTORY) $(GO_COVERPROFILE)

View File

@@ -10,6 +10,34 @@
For more information, please review the **[Hauler Documentation](https://hauler.dev)!**
## Recent Changes
### In Hauler v1.4.0...
- Added a notice to `hauler store sync --products/--product-registry` to warn users the default registry will be updated in a future release.
- Users will see logging notices when using the `--products/--product-registry` such as...
- `!!! WARNING !!! [--products] will be updating its default registry in a future release...`
- `!!! WARNING !!! [--product-registry] will be updating its default registry in a future release...`
### In Hauler v1.2.0...
- Upgraded the `apiVersion` to `v1` from `v1alpha1`
- Users are able to use `v1` and `v1alpha1`, but `v1alpha1` is now deprecated and will be removed in a future release. We will update the community when we fully deprecate and remove the functionality of `v1alpha1`
- Users will see logging notices when using the old `apiVersion` such as...
- `!!! DEPRECATION WARNING !!! apiVersion [v1alpha1] will be removed in a future release...`
---
- Updated the behavior of `hauler store load` to default to loading a `haul` with the name of `haul.tar.zst` and requires the flag of `--filename/-f` to load a `haul` with a different name
- Users can load multiple `hauls` by specifying multiple flags of `--filename/-f`
- updated command usage: `hauler store load --filename hauling-hauls.tar.zst`
- previous command usage (do not use): `hauler store load hauling-hauls.tar.zst`
---
- Updated the behavior of `hauler store sync` to default to syncing a `manifest` with the name of `hauler-manifest.yaml` and requires the flag of `--filename/-f` to sync a `manifest` with a different name
- Users can sync multiple `manifests` by specifying multiple flags of `--filename/-f`
- updated command usage: `hauler store sync --filename hauling-hauls-manifest.yaml`
- previous command usage (do not use): `hauler store sync --files hauling-hauls-manifest.yaml`
---
Please review the documentation for any additional [Known Limits, Issues, and Notices](https://docs.hauler.dev/docs/known-limits)!
## Installation
### Linux/Darwin

View File

@@ -1,22 +1,25 @@
package cli
import (
"github.com/spf13/cobra"
"context"
cranecmd "github.com/google/go-containerregistry/cmd/crane/cmd"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
)
var ro = &flags.CliRootOpts{}
func New() *cobra.Command {
func New(ctx context.Context, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "hauler",
Short: "Airgap Swiss Army Knife",
Use: "hauler",
Short: "Airgap Swiss Army Knife",
Example: " View the Docs: https://docs.hauler.dev\n Environment Variables: " + consts.HaulerDir + " | " + consts.HaulerTempDir + " | " + consts.HaulerStoreDir + " | " + consts.HaulerIgnoreErrors + "\n Warnings: Hauler commands and flags marked with (EXPERIMENTAL) are not yet stable and may change in the future.",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
l := log.FromContext(cmd.Context())
l := log.FromContext(ctx)
l.SetLevel(ro.LogLevel)
l.Debugf("running cli command [%s]", cmd.CommandPath())
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
@@ -24,14 +27,13 @@ func New() *cobra.Command {
},
}
pf := cmd.PersistentFlags()
pf.StringVarP(&ro.LogLevel, "log-level", "l", "info", "")
flags.AddRootFlags(cmd, ro)
// Add subcommands
addLogin(cmd)
addStore(cmd)
addVersion(cmd)
addCompletion(cmd)
cmd.AddCommand(cranecmd.NewCmdAuthLogin("hauler"))
cmd.AddCommand(cranecmd.NewCmdAuthLogout("hauler"))
addStore(cmd, ro)
addVersion(cmd, ro)
addCompletion(cmd, ro)
return cmd
}

View File

@@ -5,25 +5,27 @@ import (
"os"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/internal/flags"
)
func addCompletion(parent *cobra.Command) {
func addCompletion(parent *cobra.Command, ro *flags.CliRootOpts) {
cmd := &cobra.Command{
Use: "completion",
Short: "Generate auto-completion scripts for various shells",
}
cmd.AddCommand(
addCompletionZsh(),
addCompletionBash(),
addCompletionFish(),
addCompletionPowershell(),
addCompletionZsh(ro),
addCompletionBash(ro),
addCompletionFish(ro),
addCompletionPowershell(ro),
)
parent.AddCommand(cmd)
}
func addCompletionZsh() *cobra.Command {
func addCompletionZsh(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "zsh",
Short: "Generates auto-completion scripts for zsh",
@@ -52,7 +54,7 @@ func addCompletionZsh() *cobra.Command {
return cmd
}
func addCompletionBash() *cobra.Command {
func addCompletionBash(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "bash",
Short: "Generates auto-completion scripts for bash",
@@ -71,7 +73,7 @@ func addCompletionBash() *cobra.Command {
return cmd
}
func addCompletionFish() *cobra.Command {
func addCompletionFish(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "fish",
Short: "Generates auto-completion scripts for fish",
@@ -87,7 +89,7 @@ func addCompletionFish() *cobra.Command {
return cmd
}
func addCompletionPowershell() *cobra.Command {
func addCompletionPowershell(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "powershell",
Short: "Generates auto-completion scripts for powershell",

View File

@@ -1,62 +0,0 @@
package cli
import (
"context"
"fmt"
"io"
"os"
"strings"
"github.com/spf13/cobra"
"oras.land/oras-go/pkg/content"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/cosign"
)
func addLogin(parent *cobra.Command) {
o := &flags.LoginOpts{}
cmd := &cobra.Command{
Use: "login",
Short: "Login to a registry",
Long: "Login to an OCI Compliant Registry (stored at ~/.docker/config.json)",
Example: "# login to registry.example.com\nhauler login registry.example.com -u bob -p haulin",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, arg []string) error {
ctx := cmd.Context()
if o.PasswordStdin {
contents, err := io.ReadAll(os.Stdin)
if err != nil {
return err
}
o.Password = strings.TrimSuffix(string(contents), "\n")
o.Password = strings.TrimSuffix(o.Password, "\r")
}
if o.Username == "" && o.Password == "" {
return fmt.Errorf("username and password required")
}
return login(ctx, o, arg[0])
},
}
o.AddFlags(cmd)
parent.AddCommand(cmd)
}
func login(ctx context.Context, o *flags.LoginOpts, registry string) error {
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
}
err := cosign.RegistryLogin(ctx, nil, registry, ropts)
if err != nil {
return err
}
return nil
}

View File

@@ -8,11 +8,12 @@ import (
"hauler.dev/go/hauler/cmd/hauler/cli/store"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/log"
)
var rootStoreOpts = &flags.StoreRootOpts{}
func addStore(parent *cobra.Command, ro *flags.CliRootOpts) {
rso := &flags.StoreRootOpts{}
func addStore(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "store",
Aliases: []string{"s"},
@@ -21,24 +22,25 @@ func addStore(parent *cobra.Command) {
return cmd.Help()
},
}
rootStoreOpts.AddFlags(cmd)
rso.AddFlags(cmd)
cmd.AddCommand(
addStoreSync(),
addStoreExtract(),
addStoreLoad(),
addStoreSave(),
addStoreServe(),
addStoreInfo(),
addStoreCopy(),
addStoreAdd(),
addStoreSync(rso, ro),
addStoreExtract(rso, ro),
addStoreLoad(rso, ro),
addStoreSave(rso, ro),
addStoreServe(rso, ro),
addStoreInfo(rso, ro),
addStoreCopy(rso, ro),
addStoreAdd(rso, ro),
addStoreRemove(rso, ro),
)
parent.AddCommand(cmd)
}
func addStoreExtract() *cobra.Command {
o := &flags.ExtractOpts{StoreRootOpts: rootStoreOpts}
func addStoreExtract(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ExtractOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "extract",
@@ -61,12 +63,30 @@ func addStoreExtract() *cobra.Command {
return cmd
}
func addStoreSync() *cobra.Command {
o := &flags.SyncOpts{StoreRootOpts: rootStoreOpts}
func addStoreSync(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.SyncOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "sync",
Short: "Sync content to the content store",
Args: cobra.ExactArgs(0),
PreRunE: func(cmd *cobra.Command, args []string) error {
// warn if products or product-registry flag is used by the user
if cmd.Flags().Changed("products") {
log.FromContext(cmd.Context()).Warnf("!!! WARNING !!! [--products] will be updating its default registry in a future release.")
}
if cmd.Flags().Changed("product-registry") {
log.FromContext(cmd.Context()).Warnf("!!! WARNING !!! [--product-registry] will be updating its default registry in a future release.")
}
// check if the products flag was passed
if len(o.Products) > 0 {
// only clear the default if the user did not explicitly set it
if !cmd.Flags().Changed("filename") {
o.FileName = []string{}
}
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -75,7 +95,7 @@ func addStoreSync() *cobra.Command {
return err
}
return store.SyncCmd(ctx, o, s)
return store.SyncCmd(ctx, o, s, rso, ro)
},
}
o.AddFlags(cmd)
@@ -83,13 +103,13 @@ func addStoreSync() *cobra.Command {
return cmd
}
func addStoreLoad() *cobra.Command {
o := &flags.LoadOpts{StoreRootOpts: rootStoreOpts}
func addStoreLoad(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.LoadOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "load",
Short: "Load a content store from a store archive",
Args: cobra.MinimumNArgs(1),
Args: cobra.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -99,7 +119,7 @@ func addStoreLoad() *cobra.Command {
}
_ = s
return store.LoadCmd(ctx, o, args...)
return store.LoadCmd(ctx, o, rso, ro)
},
}
o.AddFlags(cmd)
@@ -107,7 +127,7 @@ func addStoreLoad() *cobra.Command {
return cmd
}
func addStoreServe() *cobra.Command {
func addStoreServe(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "serve",
Short: "Serve the content store via an OCI Compliant Registry or Fileserver",
@@ -116,16 +136,16 @@ func addStoreServe() *cobra.Command {
},
}
cmd.AddCommand(
addStoreServeRegistry(),
addStoreServeFiles(),
addStoreServeRegistry(rso, ro),
addStoreServeFiles(rso, ro),
)
return cmd
}
// RegistryCmd serves the registry
func addStoreServeRegistry() *cobra.Command {
o := &flags.ServeRegistryOpts{StoreRootOpts: rootStoreOpts}
func addStoreServeRegistry(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ServeRegistryOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "registry",
Short: "Serve the OCI Compliant Registry",
@@ -137,7 +157,7 @@ func addStoreServeRegistry() *cobra.Command {
return err
}
return store.ServeRegistryCmd(ctx, o, s)
return store.ServeRegistryCmd(ctx, o, s, rso, ro)
},
}
@@ -146,9 +166,9 @@ func addStoreServeRegistry() *cobra.Command {
return cmd
}
// FileServerCmd serves the file server
func addStoreServeFiles() *cobra.Command {
o := &flags.ServeFilesOpts{StoreRootOpts: rootStoreOpts}
func addStoreServeFiles(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ServeFilesOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "fileserver",
Short: "Serve the Fileserver",
@@ -160,7 +180,7 @@ func addStoreServeFiles() *cobra.Command {
return err
}
return store.ServeFilesCmd(ctx, o, s)
return store.ServeFilesCmd(ctx, o, s, ro)
},
}
@@ -169,8 +189,8 @@ func addStoreServeFiles() *cobra.Command {
return cmd
}
func addStoreSave() *cobra.Command {
o := &flags.SaveOpts{StoreRootOpts: rootStoreOpts}
func addStoreSave(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.SaveOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "save",
@@ -185,7 +205,7 @@ func addStoreSave() *cobra.Command {
}
_ = s
return store.SaveCmd(ctx, o, o.FileName)
return store.SaveCmd(ctx, o, rso, ro)
},
}
o.AddFlags(cmd)
@@ -193,8 +213,8 @@ func addStoreSave() *cobra.Command {
return cmd
}
func addStoreInfo() *cobra.Command {
o := &flags.InfoOpts{StoreRootOpts: rootStoreOpts}
func addStoreInfo(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.InfoOpts{StoreRootOpts: rso}
var allowedValues = []string{"image", "chart", "file", "sigs", "atts", "sbom", "all"}
@@ -224,8 +244,8 @@ func addStoreInfo() *cobra.Command {
return cmd
}
func addStoreCopy() *cobra.Command {
o := &flags.CopyOpts{StoreRootOpts: rootStoreOpts}
func addStoreCopy(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.CopyOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "copy",
@@ -239,7 +259,7 @@ func addStoreCopy() *cobra.Command {
return err
}
return store.CopyCmd(ctx, o, s, args[0])
return store.CopyCmd(ctx, o, s, args[0], ro)
},
}
o.AddFlags(cmd)
@@ -247,7 +267,7 @@ func addStoreCopy() *cobra.Command {
return cmd
}
func addStoreAdd() *cobra.Command {
func addStoreAdd(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "add",
Short: "Add content to the store",
@@ -257,16 +277,16 @@ func addStoreAdd() *cobra.Command {
}
cmd.AddCommand(
addStoreAddFile(),
addStoreAddImage(),
addStoreAddChart(),
addStoreAddFile(rso, ro),
addStoreAddImage(rso, ro),
addStoreAddChart(rso, ro),
)
return cmd
}
func addStoreAddFile() *cobra.Command {
o := &flags.AddFileOpts{StoreRootOpts: rootStoreOpts}
func addStoreAddFile(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddFileOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "file",
@@ -296,8 +316,8 @@ hauler store add file https://get.hauler.dev --name hauler-install.sh`,
return cmd
}
func addStoreAddImage() *cobra.Command {
o := &flags.AddImageOpts{StoreRootOpts: rootStoreOpts}
func addStoreAddImage(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddImageOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "image",
@@ -309,13 +329,17 @@ hauler store add image busybox
hauler store add image library/busybox:stable
# fetch image with full image reference and specific platform
hauler store add image ghcr.io/hauler-dev/hauler-debug:v1.0.7 --platform linux/amd64
hauler store add image ghcr.io/hauler-dev/hauler-debug:v1.2.0 --platform linux/amd64
# fetch image with full image reference via digest
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
# fetch image with full image reference, specific platform, and signature verification
hauler store add image rgcrprod.azurecr.us/hauler/rke2-manifest.yaml:v1.28.12-rke2r1 --platform linux/amd64 --key carbide-key.pub`,
curl -sfOL https://raw.githubusercontent.com/rancherfederal/carbide-releases/main/carbide-key.pub
hauler store add image rgcrprod.azurecr.us/rancher/rke2-runtime:v1.31.5-rke2r1 --platform linux/amd64 --key carbide-key.pub
# fetch image and rewrite path
hauler store add image busybox --rewrite custom-path/busybox:latest`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -325,7 +349,7 @@ hauler store add image rgcrprod.azurecr.us/hauler/rke2-manifest.yaml:v1.28.12-rk
return err
}
return store.AddImageCmd(ctx, o, s, args[0])
return store.AddImageCmd(ctx, o, s, args[0], rso, ro)
},
}
o.AddFlags(cmd)
@@ -333,11 +357,8 @@ hauler store add image rgcrprod.azurecr.us/hauler/rke2-manifest.yaml:v1.28.12-rk
return cmd
}
func addStoreAddChart() *cobra.Command {
o := &flags.AddChartOpts{
StoreRootOpts: rootStoreOpts,
ChartOpts: &action.ChartPathOptions{},
}
func addStoreAddChart(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddChartOpts{StoreRootOpts: rso, ChartOpts: &action.ChartPathOptions{}}
cmd := &cobra.Command{
Use: "chart",
@@ -352,13 +373,16 @@ hauler store add chart path/to/chart.tar.gz --repo .
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev
# fetch remote oci helm chart with version
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --version 1.0.6
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --version 1.2.0
# fetch remote helm chart
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable
# fetch remote helm chart with specific version
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/latest --version 2.9.1`,
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/latest --version 2.10.1
# fetch remote helm chart and rewrite path
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --rewrite custom-path/hauler-chart:latest`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -368,7 +392,49 @@ hauler store add chart rancher --repo https://releases.rancher.com/server-charts
return err
}
return store.AddChartCmd(ctx, o, s, args[0])
return store.AddChartCmd(ctx, o, s, args[0], rso, ro)
},
}
o.AddFlags(cmd)
return cmd
}
func addStoreRemove(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.RemoveOpts{}
cmd := &cobra.Command{
Use: "remove <artifact-ref>",
Short: "(EXPERIMENTAL) Remove an artifact from the content store",
Example: `# remove an image using full store reference
hauler store info
hauler store remove index.docker.io/library/busybox:stable
# remove a chart using full store reference
hauler store info
hauler store remove hauler/rancher:2.8.4
# remove a file using full store reference
hauler store info
hauler store remove hauler/rke2-install.sh
# remove any artifact with the latest tag
hauler store remove :latest
# remove any artifact with 'busybox' in the reference
hauler store remove busybox
# force remove without verification
hauler store remove busybox:latest --force`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := rso.Store(ctx)
if err != nil {
return err
}
return store.RemoveCmd(ctx, o, s, args[0])
},
}
o.AddFlags(cmd)

View File

@@ -2,23 +2,34 @@ package store
import (
"context"
"fmt"
"os"
"path/filepath"
"regexp"
"slices"
"strings"
"github.com/google/go-containerregistry/pkg/name"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"helm.sh/helm/v3/pkg/action"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
helmchart "helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chartutil"
"helm.sh/helm/v3/pkg/engine"
"k8s.io/apimachinery/pkg/util/yaml"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
"hauler.dev/go/hauler/pkg/artifacts/file"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content/chart"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
func AddFileCmd(ctx context.Context, o *flags.AddFileOpts, s *store.Layout, reference string) error {
cfg := v1alpha1.File{
cfg := v1.File{
Path: reference,
}
if len(o.Name) > 0 {
@@ -27,7 +38,7 @@ func AddFileCmd(ctx context.Context, o *flags.AddFileOpts, s *store.Layout, refe
return storeFile(ctx, s, cfg)
}
func storeFile(ctx context.Context, s *store.Layout, fi v1alpha1.File) error {
func storeFile(ctx context.Context, s *store.Layout, fi v1.File) error {
l := log.FromContext(ctx)
copts := getter.ClientOptions{
@@ -35,81 +46,293 @@ func storeFile(ctx context.Context, s *store.Layout, fi v1alpha1.File) error {
}
f := file.NewFile(fi.Path, file.WithClient(getter.NewClient(copts)))
ref, err := reference.NewTagged(f.Name(fi.Path), reference.DefaultTag)
ref, err := reference.NewTagged(f.Name(fi.Path), consts.DefaultTag)
if err != nil {
return err
}
l.Infof("adding 'file' [%s] to the store as [%s]", fi.Path, ref.Name())
l.Infof("adding file [%s] to the store as [%s]", fi.Path, ref.Name())
_, err = s.AddOCI(ctx, f, ref.Name())
if err != nil {
return err
}
l.Infof("successfully added 'file' [%s]", ref.Name())
l.Infof("successfully added file [%s]", ref.Name())
return nil
}
func AddImageCmd(ctx context.Context, o *flags.AddImageOpts, s *store.Layout, reference string) error {
func AddImageCmd(ctx context.Context, o *flags.AddImageOpts, s *store.Layout, reference string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
cfg := v1alpha1.Image{
Name: reference,
cfg := v1.Image{
Name: reference,
Rewrite: o.Rewrite,
}
// Check if the user provided a key.
if o.Key != "" {
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, o.Key, cfg.Name)
err := cosign.VerifySignature(ctx, s, o.Key, o.Tlog, cfg.Name, rso, ro)
if err != nil {
return err
}
l.Infof("signature verified for image [%s]", cfg.Name)
} else if o.CertIdentityRegexp != "" || o.CertIdentity != "" {
// verify signature using keyless details
l.Infof("verifying keyless signature for [%s]", cfg.Name)
err := cosign.VerifyKeylessSignature(ctx, s, o.CertIdentity, o.CertIdentityRegexp, o.CertOidcIssuer, o.CertOidcIssuerRegexp, o.CertGithubWorkflowRepository, o.Tlog, cfg.Name, rso, ro)
if err != nil {
return err
}
l.Infof("keyless signature verified for image [%s]", cfg.Name)
}
return storeImage(ctx, s, cfg, o.Platform)
return storeImage(ctx, s, cfg, o.Platform, rso, ro, o.Rewrite)
}
func storeImage(ctx context.Context, s *store.Layout, i v1alpha1.Image, platform string) error {
func storeImage(ctx context.Context, s *store.Layout, i v1.Image, platform string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts, rewrite string) error {
l := log.FromContext(ctx)
l.Infof("adding 'image' [%s] to the store", i.Name)
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
l.Infof("adding image [%s] to the store", i.Name)
r, err := name.ParseReference(i.Name)
if err != nil {
l.Warnf("unable to parse 'image' [%s], skipping...", r.Name())
return nil
if ro.IgnoreErrors {
l.Warnf("unable to parse image [%s]: %v... skipping...", i.Name, err)
return nil
} else {
l.Errorf("unable to parse image [%s]: %v", i.Name, err)
return err
}
}
err = cosign.SaveImage(ctx, s, r.Name(), platform)
// copy and sig verification
err = cosign.SaveImage(ctx, s, r.Name(), platform, rso, ro)
if err != nil {
l.Warnf("unable to add 'image' [%s] to store. skipping...", r.Name())
return nil
if ro.IgnoreErrors {
l.Warnf("unable to add image [%s] to store: %v... skipping...", r.Name(), err)
return nil
} else {
l.Errorf("unable to add image [%s] to store: %v", r.Name(), err)
return err
}
}
l.Infof("successfully added 'image' [%s]", r.Name())
if rewrite != "" {
rewrite = strings.TrimPrefix(rewrite, "/")
if !strings.Contains(rewrite, ":") {
rewrite = strings.Join([]string{rewrite, r.(name.Tag).TagStr()}, ":")
}
// rename image name in store
newRef, err := name.ParseReference(rewrite)
if err != nil {
return fmt.Errorf("unable to parse rewrite name [%s]: %w", rewrite, err)
}
if err := rewriteReference(ctx, s, r, newRef); err != nil {
return err
}
}
l.Infof("successfully added image [%s]", r.Name())
return nil
}
func AddChartCmd(ctx context.Context, o *flags.AddChartOpts, s *store.Layout, chartName string) error {
// TODO: Reduce duplicates between api chart and upstream helm opts
cfg := v1alpha1.Chart{
func rewriteReference(ctx context.Context, s *store.Layout, oldRef name.Reference, newRef name.Reference) error {
l := log.FromContext(ctx)
l.Infof("rewriting [%s] to [%s]", oldRef.Name(), newRef.Name())
s.OCI.LoadIndex()
//TODO: improve string manipulation
oldRefContext := oldRef.Context()
newRefContext := newRef.Context()
oldRepo := oldRefContext.RepositoryStr()
newRepo := newRefContext.RepositoryStr()
oldTag := oldRef.(name.Tag).TagStr()
newTag := newRef.(name.Tag).TagStr()
oldRegistry := strings.TrimPrefix(oldRefContext.RegistryStr(), "index.")
newRegistry := strings.TrimPrefix(newRefContext.RegistryStr(), "index.")
oldTotal := oldRepo + ":" + oldTag
newTotal := newRepo + ":" + newTag
oldTotalReg := oldRegistry + "/" + oldTotal
newTotalReg := newRegistry + "/" + newTotal
//find and update reference
found := false
if err := s.OCI.Walk(func(k string, d ocispec.Descriptor) error {
if d.Annotations[ocispec.AnnotationRefName] == oldTotal && d.Annotations[consts.ContainerdImageNameKey] == oldTotalReg {
d.Annotations[ocispec.AnnotationRefName] = newTotal
d.Annotations[consts.ContainerdImageNameKey] = newTotalReg
found = true
}
return nil
}); err != nil {
return err
}
if !found {
return fmt.Errorf("could not find image [%s] in store", oldRef.Name())
}
return s.OCI.SaveIndex()
}
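Assuming go-containerregistry's usual Docker Hub normalization (where busybox expands to index.docker.io/library/busybox) and that consts.ContainerdImageNameKey is containerd's io.containerd.image.name annotation, the CLI example hauler store add image busybox --rewrite custom-path/busybox:latest would update the index annotations roughly as follows:
before:  org.opencontainers.image.ref.name = library/busybox:latest
         io.containerd.image.name          = docker.io/library/busybox:latest
after:   org.opencontainers.image.ref.name = custom-path/busybox:latest
         io.containerd.image.name          = docker.io/custom-path/busybox:latest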
func AddChartCmd(ctx context.Context, o *flags.AddChartOpts, s *store.Layout, chartName string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
cfg := v1.Chart{
Name: chartName,
RepoURL: o.ChartOpts.RepoURL,
Version: o.ChartOpts.Version,
}
return storeChart(ctx, s, cfg, o.ChartOpts)
rewrite := ""
if o.Rewrite != "" {
rewrite = o.Rewrite
}
return storeChart(ctx, s, cfg, o, rso, ro, rewrite)
}
func storeChart(ctx context.Context, s *store.Layout, cfg v1alpha1.Chart, opts *action.ChartPathOptions) error {
// unexported type for the context key to avoid collisions
type isSubchartKey struct{}
// imageRegex matches manifest lines of the form "image: <ref>", allowing optional leading whitespace and optional quotes, and captures the reference itself
var imageRegex = regexp.MustCompile(`(?m)^[ \t]*image:[ \t]*['"]?([^\s'"#]+)`)
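As a rough illustration, the pattern above is meant to match rendered manifest lines such as the following hypothetical ones, capturing only the reference in the first group (leading whitespace, quotes, and trailing comments are tolerated):
image: nginx:1.27
  image: "quay.io/org/app:v1.2.3"
  image: 'registry.example.com/org/tool@sha256:abc123'  # capture stops at the closing quote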
// helmAnnotatedImage represents a single image reference declared in helm chart annotations
type helmAnnotatedImage struct {
Image string `yaml:"image"`
Name string `yaml:"name,omitempty"`
}
// imagesFromChartAnnotations parses image references from helm chart annotations
func imagesFromChartAnnotations(c *helmchart.Chart) ([]string, error) {
if c == nil || c.Metadata == nil || c.Metadata.Annotations == nil {
return nil, nil
}
// support multiple annotations
keys := []string{
"helm.sh/images",
"images",
}
var out []string
for _, k := range keys {
raw, ok := c.Metadata.Annotations[k]
if !ok || strings.TrimSpace(raw) == "" {
continue
}
var items []helmAnnotatedImage
if err := yaml.Unmarshal([]byte(raw), &items); err != nil {
return nil, fmt.Errorf("failed to parse helm chart annotation %q: %w", k, err)
}
for _, it := range items {
img := strings.TrimSpace(it.Image)
if img == "" {
continue
}
img = strings.TrimPrefix(img, "/")
out = append(out, img)
}
}
slices.Sort(out)
out = slices.Compact(out)
return out, nil
}
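For context, both annotation keys are expected to hold a YAML list of objects with an image field and an optional name, e.g. a hypothetical Chart.yaml fragment like:
annotations:
  helm.sh/images: |
    - name: app
      image: registry.example.com/org/app:1.2.3
    - image: registry.example.com/org/sidecar:0.9.0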
// imagesFromImagesLock parses image references from images lock files in the chart directory
func imagesFromImagesLock(chartDir string) ([]string, error) {
var out []string
for _, name := range []string{
"images.lock",
"images-lock.yaml",
"images.lock.yaml",
".images.lock.yaml",
} {
p := filepath.Join(chartDir, name)
b, err := os.ReadFile(p)
if err != nil {
continue
}
matches := imageRegex.FindAllSubmatch(b, -1)
for _, m := range matches {
if len(m) > 1 {
out = append(out, string(m[1]))
}
}
}
if len(out) == 0 {
return nil, nil
}
for i := range out {
out[i] = strings.TrimPrefix(out[i], "/")
}
slices.Sort(out)
out = slices.Compact(out)
return out, nil
}
func applyDefaultRegistry(img string, defaultRegistry string) (string, error) {
img = strings.TrimSpace(strings.TrimPrefix(img, "/"))
if img == "" || defaultRegistry == "" {
return img, nil
}
ref, err := reference.Parse(img)
if err != nil {
return "", err
}
if ref.Context().RegistryStr() != "" {
return img, nil
}
newRef, err := reference.Relocate(img, defaultRegistry)
if err != nil {
return "", err
}
return newRef.Name(), nil
}
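Assuming reference.Parse reports an empty registry for short names and reference.Relocate simply prefixes the supplied registry, the helper would behave roughly as follows (hypothetical registry and image names):
applyDefaultRegistry("nginx:1.27", "registry.example.com")             -> "registry.example.com/nginx:1.27"
applyDefaultRegistry("quay.io/org/app:1.2.3", "registry.example.com")  -> "quay.io/org/app:1.2.3" (registry already present, unchanged)
applyDefaultRegistry("nginx:1.27", "")                                 -> "nginx:1.27" (no default registry configured)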
func storeChart(ctx context.Context, s *store.Layout, cfg v1.Chart, opts *flags.AddChartOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts, rewrite string) error {
l := log.FromContext(ctx)
l.Infof("adding 'chart' [%s] to the store", cfg.Name)
// TODO: This shouldn't be necessary
opts.RepoURL = cfg.RepoURL
opts.Version = cfg.Version
// subchart logging prefix
isSubchart := ctx.Value(isSubchartKey{}) == true
prefix := ""
if isSubchart {
prefix = " ↳ "
}
chrt, err := chart.NewChart(cfg.Name, opts)
// normalize chart name for logging
displayName := cfg.Name
if strings.Contains(cfg.Name, string(os.PathSeparator)) {
displayName = filepath.Base(cfg.Name)
}
l.Infof("%sadding chart [%s] to the store", prefix, displayName)
opts.ChartOpts.RepoURL = cfg.RepoURL
opts.ChartOpts.Version = cfg.Version
chrt, err := chart.NewChart(cfg.Name, opts.ChartOpts)
if err != nil {
return err
}
@@ -123,11 +346,257 @@ func storeChart(ctx context.Context, s *store.Layout, cfg v1alpha1.Chart, opts *
if err != nil {
return err
}
_, err = s.AddOCI(ctx, chrt, ref.Name())
if err != nil {
if _, err := s.AddOCI(ctx, chrt, ref.Name()); err != nil {
return err
}
if err := s.OCI.SaveIndex(); err != nil {
return err
}
l.Infof("successfully added 'chart' [%s]", ref.Name())
l.Infof("%ssuccessfully added chart [%s:%s]", prefix, c.Name(), c.Metadata.Version)
tempOverride := rso.TempOverride
if tempOverride == "" {
tempOverride = os.Getenv(consts.HaulerTempDir)
}
tempDir, err := os.MkdirTemp(tempOverride, consts.DefaultHaulerTempDirName)
if err != nil {
return fmt.Errorf("failed to create temp dir: %w", err)
}
defer os.RemoveAll(tempDir)
chartPath := chrt.Path()
if strings.HasSuffix(chartPath, ".tgz") {
l.Debugf("%sextracting chart archive [%s]", prefix, filepath.Base(chartPath))
if err := chartutil.ExpandFile(tempDir, chartPath); err != nil {
return fmt.Errorf("failed to extract chart: %w", err)
}
// expanded chart should be in a directory matching the chart name
expectedChartDir := filepath.Join(tempDir, c.Name())
if _, err := os.Stat(expectedChartDir); err != nil {
return fmt.Errorf("chart archive did not expand into expected directory '%s': %w", c.Name(), err)
}
chartPath = expectedChartDir
}
// add-images
if opts.AddImages {
userValues := chartutil.Values{}
if opts.HelmValues != "" {
userValues, err = chartutil.ReadValuesFile(opts.HelmValues)
if err != nil {
return fmt.Errorf("failed to read helm values file [%s]: %w", opts.HelmValues, err)
}
}
// set helm default capabilities
caps := chartutil.DefaultCapabilities.Copy()
// only parse and override if a kube version was provided
if opts.KubeVersion != "" {
kubeVersion, err := chartutil.ParseKubeVersion(opts.KubeVersion)
if err != nil {
l.Warnf("%sinvalid kube-version [%s], using default kubernetes version", prefix, opts.KubeVersion)
} else {
caps.KubeVersion = *kubeVersion
}
}
values, err := chartutil.ToRenderValues(c, userValues, chartutil.ReleaseOptions{Namespace: "hauler"}, caps)
if err != nil {
return err
}
// helper to normalize and dedupe a slice of references
normalizeUniq := func(in []string) []string {
if len(in) == 0 {
return nil
}
for i := range in {
in[i] = strings.TrimPrefix(in[i], "/")
}
slices.Sort(in)
return slices.Compact(in)
}
// Collect images by method so we can debug counts
var (
templateImages []string
annotationImages []string
lockImages []string
)
// parse helm chart templates and values for images
rendered, err := engine.Render(c, values)
if err != nil {
// rendering may fail due to missing values, so still try helm chart annotations and lock files
l.Warnf("%sfailed to render chart [%s]: %v", prefix, c.Name(), err)
rendered = map[string]string{}
}
for _, manifest := range rendered {
matches := imageRegex.FindAllStringSubmatch(manifest, -1)
for _, match := range matches {
if len(match) > 1 {
templateImages = append(templateImages, match[1])
}
}
}
// parse helm chart annotations for images
annotationImages, err = imagesFromChartAnnotations(c)
if err != nil {
l.Warnf("%sfailed to parse helm chart annotation for [%s:%s]: %v", prefix, c.Name(), c.Metadata.Version, err)
annotationImages = nil
}
// parse images lock files for images
lockImages, err = imagesFromImagesLock(chartPath)
if err != nil {
l.Warnf("%sfailed to parse images lock: %v", prefix, err)
lockImages = nil
}
// normalize and dedupe the slices
templateImages = normalizeUniq(templateImages)
annotationImages = normalizeUniq(annotationImages)
lockImages = normalizeUniq(lockImages)
// merge all sources then final dedupe
images := append(append(templateImages, annotationImages...), lockImages...)
images = normalizeUniq(images)
l.Debugf("%simage references identified for helm template: [%d] image(s)", prefix, len(templateImages))
l.Debugf("%simage references identified for helm chart annotations: [%d] image(s)", prefix, len(annotationImages))
l.Debugf("%simage references identified for helm image lock file: [%d] image(s)", prefix, len(lockImages))
l.Debugf("%ssuccessfully parsed and deduped image references: [%d] image(s)", prefix, len(images))
l.Debugf("%ssuccessfully parsed image references %v", prefix, images)
if len(images) > 0 {
l.Infof("%s ↳ identified [%d] image(s) in [%s:%s]", prefix, len(images), c.Name(), c.Metadata.Version)
}
for _, image := range images {
image, err := applyDefaultRegistry(image, opts.Registry)
if err != nil {
if ro.IgnoreErrors {
l.Warnf("%s ↳ unable to apply registry to image [%s]: %v... skipping...", prefix, image, err)
continue
}
return fmt.Errorf("unable to apply registry to image [%s]: %w", image, err)
}
imgCfg := v1.Image{Name: image}
if err := storeImage(ctx, s, imgCfg, opts.Platform, rso, ro, ""); err != nil {
if ro.IgnoreErrors {
l.Warnf("%s ↳ failed to store image [%s]: %v... skipping...", prefix, image, err)
continue
}
return fmt.Errorf("failed to store image [%s]: %w", image, err)
}
s.OCI.LoadIndex()
if err := s.OCI.SaveIndex(); err != nil {
return err
}
}
}
// add-dependencies
if opts.AddDependencies && len(c.Metadata.Dependencies) > 0 {
for _, dep := range c.Metadata.Dependencies {
l.Infof("%sadding dependent chart [%s:%s]", prefix, dep.Name, dep.Version)
depOpts := *opts
depOpts.AddDependencies = true
depOpts.AddImages = true
subCtx := context.WithValue(ctx, isSubchartKey{}, true)
var depCfg v1.Chart
var err error
if strings.HasPrefix(dep.Repository, "file://") {
subchartPath := filepath.Join(chartPath, "charts", dep.Name)
depCfg = v1.Chart{Name: subchartPath, RepoURL: "", Version: ""}
depOpts.ChartOpts.RepoURL = ""
depOpts.ChartOpts.Version = ""
err = storeChart(subCtx, s, depCfg, &depOpts, rso, ro, "")
} else {
depCfg = v1.Chart{Name: dep.Name, RepoURL: dep.Repository, Version: dep.Version}
depOpts.ChartOpts.RepoURL = dep.Repository
depOpts.ChartOpts.Version = dep.Version
err = storeChart(subCtx, s, depCfg, &depOpts, rso, ro, "")
}
if err != nil {
if ro.IgnoreErrors {
l.Warnf("%s ↳ failed to add dependent chart [%s]: %v... skipping...", prefix, dep.Name, err)
} else {
l.Errorf("%s ↳ failed to add dependent chart [%s]: %v", prefix, dep.Name, err)
return err
}
}
}
}
// chart rewrite functionality
if rewrite != "" {
rewrite = strings.TrimPrefix(rewrite, "/")
newRef, err := name.ParseReference(rewrite)
if err != nil {
// error... don't continue with a bad reference
return fmt.Errorf("unable to parse rewrite name [%s]: %w", rewrite, err)
}
// if rewrite omits a tag... keep the existing tag
oldTag := ref.(name.Tag).TagStr()
if !strings.Contains(rewrite, ":") {
rewrite = strings.Join([]string{rewrite, oldTag}, ":")
newRef, err = name.ParseReference(rewrite)
if err != nil {
return fmt.Errorf("unable to parse rewrite name [%s]: %w", rewrite, err)
}
}
// rename chart name in store
s.OCI.LoadIndex()
oldRefContext := ref.Context()
newRefContext := newRef.Context()
oldRepo := oldRefContext.RepositoryStr()
newRepo := newRefContext.RepositoryStr()
newTag := newRef.(name.Tag).TagStr()
oldTotal := oldRepo + ":" + oldTag
newTotal := newRepo + ":" + newTag
found := false
if err := s.OCI.Walk(func(k string, d ocispec.Descriptor) error {
if d.Annotations[ocispec.AnnotationRefName] == oldTotal {
d.Annotations[ocispec.AnnotationRefName] = newTotal
found = true
}
return nil
}); err != nil {
return err
}
if !found {
return fmt.Errorf("could not find chart [%s] in store", ref.Name())
}
if err := s.OCI.SaveIndex(); err != nil {
return err
}
}
return nil
}

View File

@@ -13,13 +13,17 @@ import (
"hauler.dev/go/hauler/pkg/store"
)
func CopyCmd(ctx context.Context, o *flags.CopyOpts, s *store.Layout, targetRef string) error {
func CopyCmd(ctx context.Context, o *flags.CopyOpts, s *store.Layout, targetRef string, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
if o.Username != "" || o.Password != "" {
return fmt.Errorf("--username/--password have been deprecated, please use 'hauler login'")
}
components := strings.SplitN(targetRef, "://", 2)
switch components[0] {
case "dir":
l.Debugf("identified directory target reference")
l.Debugf("identified directory target reference of [%s]", components[1])
fs := content.NewFile(components[1])
defer fs.Close()
@@ -29,22 +33,13 @@ func CopyCmd(ctx context.Context, o *flags.CopyOpts, s *store.Layout, targetRef
}
case "registry":
l.Debugf("identified registry target reference")
l.Debugf("identified registry target reference of [%s]", components[1])
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
Insecure: o.Insecure,
PlainHTTP: o.PlainHTTP,
}
if ropts.Username != "" {
err := cosign.RegistryLogin(ctx, s, components[1], ropts)
if err != nil {
return err
}
}
err := cosign.LoadImages(ctx, s, components[1], ropts)
err := cosign.LoadImages(ctx, s, components[1], o.Only, ropts, ro)
if err != nil {
return err
}

View File

@@ -6,8 +6,10 @@ import (
"fmt"
"os"
"sort"
"strings"
"github.com/olekukonko/tablewriter"
"github.com/olekukonko/tablewriter/tw"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"hauler.dev/go/hauler/internal/flags"
@@ -47,12 +49,20 @@ func InfoCmd(ctx context.Context, o *flags.InfoOpts, s *store.Layout) error {
return err
}
i := newItem(s, desc, internalManifest, fmt.Sprintf("%s/%s", internalDesc.Platform.OS, internalDesc.Platform.Architecture), o)
i := newItemWithDigest(
s,
internalDesc.Digest.String(),
desc,
internalManifest,
fmt.Sprintf("%s/%s", internalDesc.Platform.OS, internalDesc.Platform.Architecture),
o,
)
var emptyItem item
if i != emptyItem {
items = append(items, i)
}
}
// handle "non" multi-arch images
} else if desc.MediaType == consts.DockerManifestSchema2 || desc.MediaType == consts.OCIManifestSchema1 {
var m ocispec.Manifest
@@ -66,14 +76,15 @@ func InfoCmd(ctx context.Context, o *flags.InfoOpts, s *store.Layout) error {
}
defer rc.Close()
// Unmarshal the OCI image content
// unmarshal the oci image content
var internalManifest ocispec.Image
if err := json.NewDecoder(rc).Decode(&internalManifest); err != nil {
return err
}
if internalManifest.Architecture != "" {
i := newItem(s, desc, m, fmt.Sprintf("%s/%s", internalManifest.OS, internalManifest.Architecture), o)
i := newItem(s, desc, m,
fmt.Sprintf("%s/%s", internalManifest.OS, internalManifest.Architecture), o)
var emptyItem item
if i != emptyItem {
items = append(items, i)
@@ -85,7 +96,8 @@ func InfoCmd(ctx context.Context, o *flags.InfoOpts, s *store.Layout) error {
items = append(items, i)
}
}
// handle the rest
// handle everything else (charts, files, sigs, etc.)
} else {
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
@@ -118,13 +130,15 @@ func InfoCmd(ctx context.Context, o *flags.InfoOpts, s *store.Layout) error {
msg = buildJson(items...)
fmt.Println(msg)
default:
buildTable(items...)
if err := buildTable(o.ShowDigests, items...); err != nil {
return err
}
}
return nil
}
func buildListRepos(items ...item) {
// Create map to track unique repository names
// create map to track unique repository names
repos := make(map[string]bool)
for _, i := range items {
@@ -141,37 +155,86 @@ func buildListRepos(items ...item) {
repos[repoName] = true
}
// Collect and print unique repository names
// collect and print unique repository names
for repoName := range repos {
fmt.Println(repoName)
}
}
func buildTable(items ...item) {
// Create a table for the results
table := tablewriter.NewWriter(os.Stdout)
table.SetHeader([]string{"Reference", "Type", "Platform", "# Layers", "Size"})
table.SetHeaderAlignment(tablewriter.ALIGN_LEFT)
table.SetRowLine(false)
table.SetAutoMergeCellsByColumnIndex([]int{0})
func buildTable(showDigests bool, items ...item) error {
table := tablewriter.NewTable(os.Stdout)
table.Configure(func(cfg *tablewriter.Config) {
cfg.Header.Alignment.Global = tw.AlignLeft
cfg.Row.Merging.Mode = tw.MergeVertical
cfg.Row.Merging.ByColumnIndex = tw.NewBoolMapper(0)
})
if showDigests {
table.Header("Reference", "Type", "Platform", "Digest", "# Layers", "Size")
} else {
table.Header("Reference", "Type", "Platform", "# Layers", "Size")
}
totalSize := int64(0)
for _, i := range items {
if i.Type != "" {
row := []string{
i.Reference,
if i.Type == "" {
continue
}
ref := truncateReference(i.Reference)
var row []string
if showDigests {
digest := i.Digest
if digest == "" {
digest = "-"
}
row = []string{
ref,
i.Type,
i.Platform,
digest,
fmt.Sprintf("%d", i.Layers),
byteCountSI(i.Size),
}
} else {
row = []string{
ref,
i.Type,
i.Platform,
fmt.Sprintf("%d", i.Layers),
byteCountSI(i.Size),
}
totalSize += i.Size
table.Append(row)
}
totalSize += i.Size
if err := table.Append(row); err != nil {
return err
}
}
table.SetFooter([]string{"", "", "", "Total", byteCountSI(totalSize)})
table.Render()
// align total column based on digest visibility
if showDigests {
table.Footer("", "", "", "", "Total", byteCountSI(totalSize))
} else {
table.Footer("", "", "", "Total", byteCountSI(totalSize))
}
return table.Render()
}
// truncateReference shortens the digest of a reference
func truncateReference(ref string) string {
const prefix = "@sha256:"
idx := strings.Index(ref, prefix)
if idx == -1 {
return ref
}
if len(ref) > idx+len(prefix)+12 {
return ref[:idx+len(prefix)+12] + "…"
}
return ref
}
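As a worked example, reusing the digest from the CLI examples earlier in this changeset, a digest reference keeps only the first 12 hex characters of its digest, while tag-only references pass through unchanged:
truncateReference("gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5")
  -> "gcr.io/distroless/base@sha256:7fa7445dfbeb…"
truncateReference("library/busybox:stable")
  -> "library/busybox:stable"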
func buildJson(item ...item) string {
@@ -186,6 +249,7 @@ type item struct {
Reference string
Type string
Platform string
Digest string
Layers int
Size int64
}
@@ -210,6 +274,13 @@ func (a byReferenceAndArch) Less(i, j int) bool {
return a[i].Reference < a[j].Reference
}
// newItemWithDigest overrides the item digest with a platform-specific digest
func newItemWithDigest(s *store.Layout, digestStr string, desc ocispec.Descriptor, m ocispec.Manifest, plat string, o *flags.InfoOpts) item {
item := newItem(s, desc, m, plat, o)
item.Digest = digestStr
return item
}
func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat string, o *flags.InfoOpts) item {
var size int64 = 0
for _, l := range m.Layers {
@@ -255,6 +326,7 @@ func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat
Reference: ref.Name(),
Type: ctype,
Platform: plat,
Digest: desc.Digest.String(),
Layers: len(m.Layers),
Size: size,
}

View File

@@ -2,45 +2,130 @@ package store
import (
"context"
"encoding/json"
"io"
"net/url"
"os"
"github.com/mholt/archiver/v3"
"path/filepath"
"strings"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/archives"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// LoadCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func LoadCmd(ctx context.Context, o *flags.LoadOpts, archiveRefs ...string) error {
// LoadCmd extracts the contents of an archived oci layout into an existing oci layout
func LoadCmd(ctx context.Context, o *flags.LoadOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
for _, archiveRef := range archiveRefs {
l.Infof("loading content from [%s] to [%s]", archiveRef, o.StoreDir)
err := unarchiveLayoutTo(ctx, archiveRef, o.StoreDir, o.TempOverride)
tempOverride := rso.TempOverride
if tempOverride == "" {
tempOverride = os.Getenv(consts.HaulerTempDir)
}
tempDir, err := os.MkdirTemp(tempOverride, consts.DefaultHaulerTempDirName)
if err != nil {
return err
}
defer os.RemoveAll(tempDir)
l.Debugf("using temporary directory at [%s]", tempDir)
for _, fileName := range o.FileName {
l.Infof("loading haul [%s] to [%s]", fileName, o.StoreDir)
err := unarchiveLayoutTo(ctx, fileName, o.StoreDir, tempDir)
if err != nil {
return err
}
clearDir(tempDir)
}
return nil
}
// unarchiveLayoutTo accepts an archived oci layout and extracts the contents to an existing oci layout, preserving the index
func unarchiveLayoutTo(ctx context.Context, archivePath string, dest string, tempOverride string) error {
tmpdir, err := os.MkdirTemp(tempOverride, "hauler")
// accepts an archived OCI layout, extracts the contents to an existing OCI layout, and preserves the index
func unarchiveLayoutTo(ctx context.Context, haulPath string, dest string, tempDir string) error {
l := log.FromContext(ctx)
if strings.HasPrefix(haulPath, "http://") || strings.HasPrefix(haulPath, "https://") {
l.Debugf("detected remote archive... starting download... [%s]", haulPath)
h := getter.NewHttp()
parsedURL, err := url.Parse(haulPath)
if err != nil {
return err
}
rc, err := h.Open(ctx, parsedURL)
if err != nil {
return err
}
defer rc.Close()
fileName := h.Name(parsedURL)
if fileName == "" {
fileName = filepath.Base(parsedURL.Path)
}
haulPath = filepath.Join(tempDir, fileName)
out, err := os.Create(haulPath)
if err != nil {
return err
}
defer out.Close()
if _, err = io.Copy(out, rc); err != nil {
return err
}
}
if err := archives.Unarchive(ctx, haulPath, tempDir); err != nil {
return err
}
// ensure the incoming index.json has the correct annotations.
data, err := os.ReadFile(tempDir + "/index.json")
if err != nil {
return err
}
var idx ocispec.Index
if err := json.Unmarshal(data, &idx); err != nil {
return err
}
for i := range idx.Manifests {
if idx.Manifests[i].Annotations == nil {
idx.Manifests[i].Annotations = make(map[string]string)
}
if _, exists := idx.Manifests[i].Annotations[consts.KindAnnotationName]; !exists {
idx.Manifests[i].Annotations[consts.KindAnnotationName] = consts.KindAnnotationImage
}
if ref, ok := idx.Manifests[i].Annotations[consts.ContainerdImageNameKey]; ok {
if slash := strings.Index(ref, "/"); slash != -1 {
ref = ref[slash+1:]
}
if idx.Manifests[i].Annotations[consts.ImageRefKey] != ref {
idx.Manifests[i].Annotations[consts.ImageRefKey] = ref
}
}
}
out, err := json.MarshalIndent(idx, "", " ")
if err != nil {
return err
}
defer os.RemoveAll(tmpdir)
if err := archiver.Unarchive(archivePath, tmpdir); err != nil {
if err := os.WriteFile(tempDir+"/index.json", out, 0644); err != nil {
return err
}
s, err := store.NewLayout(tmpdir)
s, err := store.NewLayout(tempDir)
if err != nil {
return err
}
@@ -53,3 +138,19 @@ func unarchiveLayoutTo(ctx context.Context, archivePath string, dest string, tem
_, err = s.CopyAll(ctx, ts, nil)
return err
}
func clearDir(path string) error {
entries, err := os.ReadDir(path)
if err != nil {
return err
}
for _, entry := range entries {
err = os.RemoveAll(filepath.Join(path, entry.Name()))
if err != nil {
return err
}
}
return nil
}

View File

@@ -0,0 +1,122 @@
package store
import (
"bufio"
"context"
"errors"
"fmt"
"io"
"os"
"strings"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
)
func formatReference(ref string) string {
tagIdx := strings.LastIndex(ref, ":")
if tagIdx == -1 {
return ref
}
dashIdx := strings.Index(ref[tagIdx+1:], "-")
if dashIdx == -1 {
return ref
}
dashIdx = tagIdx + 1 + dashIdx
base := ref[:dashIdx]
suffix := ref[dashIdx+1:]
if base == "" || suffix == "" {
return ref
}
return fmt.Sprintf("%s [%s]", base, suffix)
}
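For example (hypothetical store references), a reference whose tag carries a dash-separated suffix is split for display, while a plain tag passes through unchanged:
formatReference("hauler/rke2-runtime:v1.31.5-rke2r1")  -> "hauler/rke2-runtime:v1.31.5 [rke2r1]"
formatReference("library/busybox:stable")              -> "library/busybox:stable"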
func RemoveCmd(ctx context.Context, o *flags.RemoveOpts, s *store.Layout, ref string) error {
l := log.FromContext(ctx)
// collect matching artifacts
type match struct {
reference string
desc ocispec.Descriptor
}
var matches []match
if err := s.Walk(func(reference string, desc ocispec.Descriptor) error {
if !strings.Contains(reference, ref) {
return nil
}
matches = append(matches, match{
reference: reference,
desc: desc,
})
return nil // continue walking
}); err != nil {
return err
}
if len(matches) == 0 {
return fmt.Errorf("reference [%s] not found in store (use `hauler store info` to list store contents)", ref)
}
l.Infof("found %d matching references:", len(matches))
for _, m := range matches {
l.Infof(" - [%s]", formatReference(m.reference))
}
if !o.Force {
fmt.Printf(" ↳ are you sure you want to remove [%d] artifact(s) from the store? (yes/no) ", len(matches))
reader := bufio.NewReader(os.Stdin)
line, err := reader.ReadString('\n')
if err != nil && !errors.Is(err, io.EOF) {
return fmt.Errorf("failed to read response: [%w]... please answer 'yes' or 'no'", err)
}
response := strings.ToLower(strings.TrimSpace(line))
switch response {
case "yes", "y":
l.Infof("starting to remove artifacts from store...")
case "no", "n":
l.Infof("successfully cancelled removal of artifacts from store")
return nil
case "":
return fmt.Errorf("failed to read response... please answer 'yes' or 'no'")
default:
return fmt.Errorf("invalid response [%s]... please answer 'yes' or 'no'", response)
}
}
// remove artifact(s)
for _, m := range matches {
if err := s.RemoveArtifact(ctx, m.reference, m.desc); err != nil {
return fmt.Errorf("failed to remove artifact [%s]: %w", formatReference(m.reference), err)
}
l.Infof("successfully removed [%s] of type [%s] with digest [%s]", formatReference(m.reference), m.desc.MediaType, m.desc.Digest.String())
}
// clean up unreferenced blobs
l.Infof("cleaning up all unreferenced blobs...")
removedCount, removedSize, err := s.CleanUp(ctx)
if err != nil {
l.Warnf("garbage collection failed: [%v]", err)
} else if removedCount > 0 {
l.Infof("successfully removed [%d] unreferenced blobs [freed %d bytes]", removedCount, removedSize)
}
return nil
}

View File

@@ -15,24 +15,27 @@ import (
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/mholt/archiver/v3"
imagev1 "github.com/opencontainers/image-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/archives"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
)
// SaveCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func SaveCmd(ctx context.Context, o *flags.SaveOpts, outputFile string) error {
// SaveCmd saves a content store to a store archive
func SaveCmd(ctx context.Context, o *flags.SaveOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
// TODO: Support more formats?
a := archiver.NewTarZstd()
a.OverwriteExisting = true
// maps to handle compression and archival types
compressionMap := archives.CompressionMap
archivalMap := archives.ArchivalMap
absOutputfile, err := filepath.Abs(outputFile)
// select the compression and archival type based on the parsed filename extension
compression := compressionMap["zst"]
archival := archivalMap["tar"]
absOutputfile, err := filepath.Abs(o.FileName)
if err != nil {
return err
}
@@ -46,16 +49,30 @@ func SaveCmd(ctx context.Context, o *flags.SaveOpts, outputFile string) error {
return err
}
// create the manifest.json file
if err := writeExportsManifest(ctx, ".", o.Platform); err != nil {
return err
}
err = a.Archive([]string{"."}, absOutputfile)
// strip out the oci-layout file from the haul
// required for containerd to be able to interpret the haul correctly for all media types and artifact types
if o.ContainerdCompatibility {
if err := os.Remove(filepath.Join(".", ocispec.ImageLayoutFile)); err != nil {
if !os.IsNotExist(err) {
return err
}
} else {
l.Warnf("compatibility warning... containerd... removing 'oci-layout' file to support containerd importing of images")
}
}
// create the archive
err = archives.Archive(ctx, ".", absOutputfile, compression, archival)
if err != nil {
return err
}
l.Infof("saved store [%s] -> [%s]", o.StoreDir, absOutputfile)
l.Infof("saving store [%s] to archive [%s]", o.StoreDir, o.FileName)
return nil
}
@@ -94,9 +111,9 @@ func writeExportsManifest(ctx context.Context, dir string, platformStr string) e
}
for _, desc := range imx.Manifests {
l.Debugf("descriptor [%s] >>> %s", desc.Digest.String(), desc.MediaType)
l.Debugf("descriptor [%s] = [%s]", desc.Digest.String(), desc.MediaType)
if artifactType := types.MediaType(desc.ArtifactType); artifactType != "" && !artifactType.IsImage() && !artifactType.IsIndex() {
l.Debugf("descriptor [%s] <<< SKIPPING ARTIFACT (%q)", desc.Digest.String(), desc.ArtifactType)
l.Debugf("descriptor [%s] <<< SKIPPING ARTIFACT [%q]", desc.Digest.String(), desc.ArtifactType)
continue
}
if desc.Annotations != nil {
@@ -110,11 +127,12 @@ func writeExportsManifest(ctx context.Context, dir string, platformStr string) e
return err
}
case consts.KindAnnotationIndex:
l.Debugf("index [%s]: digest=%s, type=%s, size=%d", refName, desc.Digest.String(), desc.MediaType, desc.Size)
l.Debugf("index [%s]: digest=[%s]... type=[%s]... size=[%d]", refName, desc.Digest.String(), desc.MediaType, desc.Size)
// when no platform is provided, warn the user of potential mismatch on import
// when no platform is inputted... warn the user of potential mismatch on import for docker
// required for docker to be able to interpret and load the image correctly
if platform.String() == "" {
l.Warnf("index [%s]: provide an export platform to prevent potential platform mismatch on import", refName)
l.Warnf("compatibility warning... docker... specify platform to prevent potential mismatch on import of index [%s]", refName)
}
iix, err := idx.ImageIndex(desc.Digest)
@@ -127,17 +145,17 @@ func writeExportsManifest(ctx context.Context, dir string, platformStr string) e
}
for _, ixd := range ixm.Manifests {
if ixd.MediaType.IsImage() {
// check if platform is provided, if so, skip anything that doesn't match
if platform.String() != "" {
if ixd.Platform.Architecture != platform.Architecture || ixd.Platform.OS != platform.OS {
l.Warnf("index [%s]: digest=%s, platform=%s/%s: does not match the supplied platform, skipping", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
l.Debugf("index [%s]: digest=[%s], platform=[%s/%s]: does not match the supplied platform... skipping...", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
continue
}
}
// skip 'unknown' platforms... docker hates
// skip any platforms of 'unknown/unknown'... docker hates
// required for docker to be able to interpret and load the image correctly
if ixd.Platform.Architecture == "unknown" && ixd.Platform.OS == "unknown" {
l.Warnf("index [%s]: digest=%s, platform=%s/%s: skipping 'unknown/unknown' platform", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
l.Debugf("index [%s]: digest=[%s], platform=[%s/%s]: matches unknown platform... skipping...", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
continue
}
@@ -147,7 +165,7 @@ func writeExportsManifest(ctx context.Context, dir string, platformStr string) e
}
}
default:
l.Debugf("descriptor [%s] <<< SKIPPING KIND (%q)", desc.Digest.String(), kind)
l.Debugf("descriptor [%s] <<< SKIPPING KIND [%q]", desc.Digest.String(), kind)
}
}
}
@@ -161,7 +179,7 @@ func writeExportsManifest(ctx context.Context, dir string, platformStr string) e
return err
}
return oci.WriteFile("manifest.json", buf.Bytes(), 0666)
return oci.WriteFile(consts.ImageManifestFile, buf.Bytes(), 0666)
}
func (x *exports) describe() tarball.Manifest {
@@ -191,7 +209,7 @@ func (x *exports) record(ctx context.Context, index libv1.ImageIndex, desc libv1
// record one export record per digest
x.digests = append(x.digests, digest)
xd = tarball.Descriptor{
Config: path.Join(imagev1.ImageBlobsDir, config.Algorithm, config.Hex),
Config: path.Join(ocispec.ImageBlobsDir, config.Algorithm, config.Hex),
RepoTags: []string{},
Layers: []string{},
}
@@ -205,7 +223,7 @@ func (x *exports) record(ctx context.Context, index libv1.ImageIndex, desc libv1
if err != nil {
return err
}
xd.Layers = append(xd.Layers[:], path.Join(imagev1.ImageBlobsDir, xl.Algorithm, xl.Hex))
xd.Layers = append(xd.Layers[:], path.Join(ocispec.ImageBlobsDir, xl.Algorithm, xl.Hex))
}
}

View File

@@ -2,7 +2,13 @@ package store
import (
"context"
"errors"
"fmt"
"io/fs"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/distribution/distribution/v3/configuration"
dcontext "github.com/distribution/distribution/v3/context"
@@ -10,6 +16,7 @@ import (
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
_ "github.com/distribution/distribution/v3/registry/storage/driver/inmemory"
"github.com/distribution/distribution/v3/version"
"gopkg.in/yaml.v3"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/internal/server"
@@ -17,23 +24,86 @@ import (
"hauler.dev/go/hauler/pkg/store"
)
func ServeRegistryCmd(ctx context.Context, o *flags.ServeRegistryOpts, s *store.Layout) error {
func validateStoreExists(s *store.Layout) error {
indexPath := filepath.Join(s.Root, "index.json")
_, err := os.Stat(indexPath)
if err == nil {
return nil
}
if errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf(
"no store found at [%s]\n ↳ does the hauler store exist? (verify with `hauler store info`)",
s.Root,
)
}
return fmt.Errorf(
"unable to access store at [%s]: %w",
s.Root,
err,
)
}
func loadConfig(filename string) (*configuration.Configuration, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
return configuration.Parse(f)
}
func DefaultRegistryConfig(o *flags.ServeRegistryOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.RootDir},
"maintenance": configuration.Parameters{
"readonly": map[any]any{"enabled": o.ReadOnly},
},
},
}
if o.TLSCert != "" && o.TLSKey != "" {
cfg.HTTP.TLS.Certificate = o.TLSCert
cfg.HTTP.TLS.Key = o.TLSKey
}
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
}
cfg.Log.Level = configuration.Loglevel(ro.LogLevel)
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
return cfg
}
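With illustrative values (port 5000, a scratch root directory, read-only disabled, log level info) and no TLS material supplied, the defaults above correspond roughly to a distribution-style configuration like this (field names per the distribution/v3 config format; values are assumptions, not hauler defaults):
version: 0.1
log:
  level: info
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /tmp/hauler-registry
  maintenance:
    readonly:
      enabled: false
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
validation:
  manifests:
    urls:
      allow: [".+"]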
func ServeRegistryCmd(ctx context.Context, o *flags.ServeRegistryOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
if err := validateStoreExists(s); err != nil {
return err
}
tr := server.NewTempRegistry(ctx, o.RootDir)
if err := tr.Start(); err != nil {
return err
}
opts := &flags.CopyOpts{}
if err := CopyCmd(ctx, opts, s, "registry://"+tr.Registry()); err != nil {
if err := CopyCmd(ctx, opts, s, "registry://"+tr.Registry(), ro); err != nil {
return err
}
tr.Close()
cfg := o.DefaultRegistryConfig()
cfg := DefaultRegistryConfig(o, rso, ro)
if o.ConfigFile != "" {
ucfg, err := loadConfig(o.ConfigFile)
if err != nil {
@@ -43,6 +113,16 @@ func ServeRegistryCmd(ctx context.Context, o *flags.ServeRegistryOpts, s *store.
}
l.Infof("starting registry on port [%d]", o.Port)
yamlConfig, err := yaml.Marshal(cfg)
if err != nil {
l.Errorf("failed to validate/output registry configuration: %v", err)
} else {
l.Infof("using registry configuration... \n%s", strings.TrimSpace(string(yamlConfig)))
}
l.Debugf("detailed registry configuration: %+v", cfg)
r, err := server.NewRegistry(ctx, cfg)
if err != nil {
return err
@@ -55,12 +135,16 @@ func ServeRegistryCmd(ctx context.Context, o *flags.ServeRegistryOpts, s *store.
return nil
}
func ServeFilesCmd(ctx context.Context, o *flags.ServeFilesOpts, s *store.Layout) error {
func ServeFilesCmd(ctx context.Context, o *flags.ServeFilesOpts, s *store.Layout, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
if err := validateStoreExists(s); err != nil {
return err
}
opts := &flags.CopyOpts{}
if err := CopyCmd(ctx, opts, s, "dir://"+o.RootDir); err != nil {
if err := CopyCmd(ctx, opts, s, "dir://"+o.RootDir, ro); err != nil {
return err
}
@@ -83,12 +167,3 @@ func ServeFilesCmd(ctx context.Context, o *flags.ServeFilesOpts, s *store.Layout
return nil
}
func loadConfig(filename string) (*configuration.Configuration, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
return configuration.Parse(f)
}

View File

@@ -5,7 +5,9 @@ import (
"context"
"fmt"
"io"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/mitchellh/go-homedir"
@@ -13,25 +15,41 @@ import (
"k8s.io/apimachinery/pkg/util/yaml"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
convert "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/convert"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
tchart "hauler.dev/go/hauler/pkg/collection/chart"
"hauler.dev/go/hauler/pkg/collection/imagetxt"
"hauler.dev/go/hauler/pkg/collection/k3s"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
func SyncCmd(ctx context.Context, o *flags.SyncOpts, s *store.Layout) error {
func SyncCmd(ctx context.Context, o *flags.SyncOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
// if passed products, check for a remote manifest to retrieve and use.
for _, product := range o.Products {
l.Infof("processing content file for product: '%s'", product)
parts := strings.Split(product, "=")
tempOverride := rso.TempOverride
if tempOverride == "" {
tempOverride = os.Getenv(consts.HaulerTempDir)
}
tempDir, err := os.MkdirTemp(tempOverride, consts.DefaultHaulerTempDirName)
if err != nil {
return err
}
defer os.RemoveAll(tempDir)
l.Debugf("using temporary directory at [%s]", tempDir)
// if passed products, check for a remote manifest to retrieve and use
for _, productName := range o.Products {
l.Infof("processing product manifest for [%s] to store [%s]", productName, o.StoreDir)
parts := strings.Split(productName, "=")
tag := strings.ReplaceAll(parts[1], "+", "-")
ProductRegistry := o.ProductRegistry // cli flag
@@ -41,11 +59,11 @@ func SyncCmd(ctx context.Context, o *flags.SyncOpts, s *store.Layout) error {
}
manifestLoc := fmt.Sprintf("%s/hauler/%s-manifest.yaml:%s", ProductRegistry, parts[0], tag)
l.Infof("retrieving product manifest from: '%s'", manifestLoc)
img := v1alpha1.Image{
l.Infof("fetching product manifest from [%s]", manifestLoc)
img := v1.Image{
Name: manifestLoc,
}
err := storeImage(ctx, s, img, o.Platform)
err := storeImage(ctx, s, img, o.Platform, rso, ro, "")
if err != nil {
return err
}
@@ -53,35 +71,73 @@ func SyncCmd(ctx context.Context, o *flags.SyncOpts, s *store.Layout) error {
if err != nil {
return err
}
filename := fmt.Sprintf("%s-manifest.yaml", parts[0])
fileName := fmt.Sprintf("%s-manifest.yaml", parts[0])
fi, err := os.Open(filename)
fi, err := os.Open(fileName)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s)
err = processContent(ctx, fi, o, s, rso, ro)
if err != nil {
return err
}
l.Infof("processing completed successfully")
}
// if passed a local manifest, process it
for _, filename := range o.ContentFiles {
l.Debugf("processing content file: '%s'", filename)
fi, err := os.Open(filename)
// If passed a local manifest, process it
for _, fileName := range o.FileName {
l.Infof("processing manifest [%s] to store [%s]", fileName, o.StoreDir)
haulPath := fileName
if strings.HasPrefix(haulPath, "http://") || strings.HasPrefix(haulPath, "https://") {
l.Debugf("detected remote manifest... starting download... [%s]", haulPath)
h := getter.NewHttp()
parsedURL, err := url.Parse(haulPath)
if err != nil {
return err
}
rc, err := h.Open(ctx, parsedURL)
if err != nil {
return err
}
defer rc.Close()
fileName := h.Name(parsedURL)
if fileName == "" {
fileName = filepath.Base(parsedURL.Path)
}
haulPath = filepath.Join(tempDir, fileName)
out, err := os.Create(haulPath)
if err != nil {
return err
}
defer out.Close()
if _, err = io.Copy(out, rc); err != nil {
return err
}
}
fi, err := os.Open(haulPath)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s)
defer fi.Close()
err = processContent(ctx, fi, o, s, rso, ro)
if err != nil {
return err
}
l.Infof("processing completed successfully")
}
return nil
}
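For reference, the product manifest location built above follows the pattern <registry>/hauler/<product>-manifest.yaml:<tag>, with any '+' in the version replaced by '-' so the result is a valid OCI tag. A minimal standalone sketch of that derivation (the product entry and registry below are made-up examples, not values from this repository):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// hypothetical inputs: a product entry of the form <name>=<version> and a product registry
	product := "rancher=2.9.3+up1"
	registry := "registry.example.com"

	parts := strings.Split(product, "=")
	// OCI tags cannot contain '+', so the version is normalized the same way SyncCmd does
	tag := strings.ReplaceAll(parts[1], "+", "-")

	manifestLoc := fmt.Sprintf("%s/hauler/%s-manifest.yaml:%s", registry, parts[0], tag)
	fmt.Println(manifestLoc) // registry.example.com/hauler/rancher-manifest.yaml:2.9.3-up1
}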
func processContent(ctx context.Context, fi *os.File, o *flags.SyncOpts, s *store.Layout) error {
func processContent(ctx context.Context, fi *os.File, o *flags.SyncOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
reader := yaml.NewYAMLReader(bufio.NewReader(fi))
@@ -95,179 +151,510 @@ func processContent(ctx context.Context, fi *os.File, o *flags.SyncOpts, s *stor
if err != nil {
return err
}
docs = append(docs, raw)
}
for _, doc := range docs {
obj, err := content.Load(doc)
if err != nil {
l.Debugf("skipping sync of unknown content")
l.Warnf("skipping syncing due to %v", err)
continue
}
l.Infof("syncing [%s] to store", obj.GroupVersionKind().String())
gvk := obj.GroupVersionKind()
l.Infof("syncing content [%s] with [kind=%s] to store [%s]", gvk.GroupVersion(), gvk.Kind, o.StoreDir)
// TODO: Should type switch instead...
switch obj.GroupVersionKind().Kind {
case v1alpha1.FilesContentKind:
var cfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
switch gvk.Kind {
for _, f := range cfg.Spec.Files {
err := storeFile(ctx, s, f)
if err != nil {
case consts.FilesContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release...", gvk.Version)
var alphaCfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
}
case v1alpha1.ImagesContentKind:
var cfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
a := cfg.GetAnnotations()
for _, i := range cfg.Spec.Images {
// Check if the user provided a registry. If a registry is provided in the annotation, use it for the images that don't have a registry in their ref name.
if a[consts.ImageAnnotationRegistry] != "" || o.Registry != "" {
newRef, _ := reference.Parse(i.Name)
newReg := o.Registry // cli flag
// if no cli flag but there was an annotation, use the annotation.
if o.Registry == "" && a[consts.ImageAnnotationRegistry] != "" {
newReg = a[consts.ImageAnnotationRegistry]
var v1Cfg v1.Files
if err := convert.ConvertFiles(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, f := range v1Cfg.Spec.Files {
if err := storeFile(ctx, s, f); err != nil {
return err
}
if newRef.Context().RegistryStr() == "" {
newRef, err = reference.Relocate(i.Name, newReg)
if err != nil {
return err
}
}
i.Name = newRef.Name()
}
// Check if the user provided a key. The flag from the CLI takes precedence over the annotation. The individual image key takes precedence over both.
if a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != "" {
key := o.Key // cli flag
// if no cli flag but there was an annotation, use the annotation.
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
if err != nil {
return err
}
case "v1":
var cfg v1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, f := range cfg.Spec.Files {
if err := storeFile(ctx, s, f); err != nil {
return err
}
// the individual image key trumps all
if i.Key != "" {
key, err = homedir.Expand(i.Key)
if err != nil {
return err
}
}
l.Debugf("key for image [%s]", key)
}
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, key, i.Name)
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ImagesContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release...", gvk.Version)
var alphaCfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.Images
if err := convert.ConvertImages(&alphaCfg, &v1Cfg); err != nil {
return err
}
a := v1Cfg.GetAnnotations()
for _, i := range v1Cfg.Spec.Images {
if a[consts.ImageAnnotationRegistry] != "" || o.Registry != "" {
newRef, _ := reference.Parse(i.Name)
newReg := o.Registry
if o.Registry == "" && a[consts.ImageAnnotationRegistry] != "" {
newReg = a[consts.ImageAnnotationRegistry]
}
if newRef.Context().RegistryStr() == "" {
newRef, err = reference.Relocate(i.Name, newReg)
if err != nil {
return err
}
}
i.Name = newRef.Name()
}
hasAnnotationIdentityOptions := a[consts.ImageAnnotationCertIdentityRegexp] != "" || a[consts.ImageAnnotationCertIdentity] != ""
hasCliIdentityOptions := o.CertIdentityRegexp != "" || o.CertIdentity != ""
hasImageIdentityOptions := i.CertIdentityRegexp != "" || i.CertIdentity != ""
needsKeylessVerificaton := hasAnnotationIdentityOptions || hasCliIdentityOptions || hasImageIdentityOptions
needsPubKeyVerification := a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != ""
if needsPubKeyVerification {
key := o.Key
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
if err != nil {
return err
}
}
if i.Key != "" {
key, err = homedir.Expand(i.Key)
if err != nil {
return err
}
}
l.Debugf("key for image [%s]", key)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifySignature(ctx, s, key, tlog, i.Name, rso, ro); err != nil {
l.Errorf("signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("signature verified for image [%s]", i.Name)
} else if needsKeylessVerificaton { //Keyless signature verification
certIdentityRegexp := o.CertIdentityRegexp
if o.CertIdentityRegexp == "" && a[consts.ImageAnnotationCertIdentityRegexp] != "" {
certIdentityRegexp = a[consts.ImageAnnotationCertIdentityRegexp]
}
if i.CertIdentityRegexp != "" {
certIdentityRegexp = i.CertIdentityRegexp
}
l.Debugf("certIdentityRegexp for image [%s]", certIdentityRegexp)
certIdentity := o.CertIdentity
if o.CertIdentity == "" && a[consts.ImageAnnotationCertIdentity] != "" {
certIdentity = a[consts.ImageAnnotationCertIdentity]
}
if i.CertIdentity != "" {
certIdentity = i.CertIdentity
}
l.Debugf("certIdentity for image [%s]", certIdentity)
certOidcIssuer := o.CertOidcIssuer
if o.CertOidcIssuer == "" && a[consts.ImageAnnotationCertOidcIssuer] != "" {
certOidcIssuer = a[consts.ImageAnnotationCertOidcIssuer]
}
if i.CertOidcIssuer != "" {
certOidcIssuer = i.CertOidcIssuer
}
l.Debugf("certOidcIssuer for image [%s]", certOidcIssuer)
certOidcIssuerRegexp := o.CertOidcIssuerRegexp
if o.CertOidcIssuerRegexp == "" && a[consts.ImageAnnotationCertOidcIssuerRegexp] != "" {
certOidcIssuerRegexp = a[consts.ImageAnnotationCertOidcIssuerRegexp]
}
if i.CertOidcIssuerRegexp != "" {
certOidcIssuerRegexp = i.CertOidcIssuerRegexp
}
l.Debugf("certOidcIssuerRegexp for image [%s]", certOidcIssuerRegexp)
certGithubWorkflowRepository := o.CertGithubWorkflowRepository
if o.CertGithubWorkflowRepository == "" && a[consts.ImageAnnotationCertGithubWorkflowRepository] != "" {
certGithubWorkflowRepository = a[consts.ImageAnnotationCertGithubWorkflowRepository]
}
if i.CertGithubWorkflowRepository != "" {
certGithubWorkflowRepository = i.CertGithubWorkflowRepository
}
l.Debugf("certGithubWorkflowRepository for image [%s]", certGithubWorkflowRepository)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifyKeylessSignature(ctx, s, certIdentity, certIdentityRegexp, certOidcIssuer, certOidcIssuerRegexp, certGithubWorkflowRepository, tlog, i.Name, rso, ro); err != nil {
l.Errorf("keyless signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("keyless signature verified for image [%s]", i.Name)
}
platform := o.Platform
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
if i.Platform != "" {
platform = i.Platform
}
rewrite := ""
if i.Rewrite != "" {
rewrite = i.Rewrite
}
if err := storeImage(ctx, s, i, platform, rso, ro, rewrite); err != nil {
return err
}
}
s.CopyAll(ctx, s.OCI, nil)
case "v1":
var cfg v1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
a := cfg.GetAnnotations()
for _, i := range cfg.Spec.Images {
if a[consts.ImageAnnotationRegistry] != "" || o.Registry != "" {
newRef, _ := reference.Parse(i.Name)
newReg := o.Registry
if o.Registry == "" && a[consts.ImageAnnotationRegistry] != "" {
newReg = a[consts.ImageAnnotationRegistry]
}
if newRef.Context().RegistryStr() == "" {
newRef, err = reference.Relocate(i.Name, newReg)
if err != nil {
return err
}
}
i.Name = newRef.Name()
}
hasAnnotationIdentityOptions := a[consts.ImageAnnotationCertIdentityRegexp] != "" || a[consts.ImageAnnotationCertIdentity] != ""
hasCliIdentityOptions := o.CertIdentityRegexp != "" || o.CertIdentity != ""
hasImageIdentityOptions := i.CertIdentityRegexp != "" || i.CertIdentity != ""
needsKeylessVerificaton := hasAnnotationIdentityOptions || hasCliIdentityOptions || hasImageIdentityOptions
needsPubKeyVerification := a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != ""
if needsPubKeyVerification {
key := o.Key
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
if err != nil {
return err
}
}
if i.Key != "" {
key, err = homedir.Expand(i.Key)
if err != nil {
return err
}
}
l.Debugf("key for image [%s]", key)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifySignature(ctx, s, key, tlog, i.Name, rso, ro); err != nil {
l.Errorf("signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("signature verified for image [%s]", i.Name)
} else if needsKeylessVerificaton { //Keyless signature verification
certIdentityRegexp := o.CertIdentityRegexp
if o.CertIdentityRegexp == "" && a[consts.ImageAnnotationCertIdentityRegexp] != "" {
certIdentityRegexp = a[consts.ImageAnnotationCertIdentityRegexp]
}
if i.CertIdentityRegexp != "" {
certIdentityRegexp = i.CertIdentityRegexp
}
l.Debugf("certIdentityRegexp for image [%s]", certIdentityRegexp)
certIdentity := o.CertIdentity
if o.CertIdentity == "" && a[consts.ImageAnnotationCertIdentity] != "" {
certIdentity = a[consts.ImageAnnotationCertIdentity]
}
if i.CertIdentity != "" {
certIdentity = i.CertIdentity
}
l.Debugf("certIdentity for image [%s]", certIdentity)
certOidcIssuer := o.CertOidcIssuer
if o.CertOidcIssuer == "" && a[consts.ImageAnnotationCertOidcIssuer] != "" {
certOidcIssuer = a[consts.ImageAnnotationCertOidcIssuer]
}
if i.CertOidcIssuer != "" {
certOidcIssuer = i.CertOidcIssuer
}
l.Debugf("certOidcIssuer for image [%s]", certOidcIssuer)
certOidcIssuerRegexp := o.CertOidcIssuerRegexp
if o.CertOidcIssuerRegexp == "" && a[consts.ImageAnnotationCertOidcIssuerRegexp] != "" {
certOidcIssuerRegexp = a[consts.ImageAnnotationCertOidcIssuerRegexp]
}
if i.CertOidcIssuerRegexp != "" {
certOidcIssuerRegexp = i.CertOidcIssuerRegexp
}
l.Debugf("certOidcIssuerRegexp for image [%s]", certOidcIssuerRegexp)
certGithubWorkflowRepository := o.CertGithubWorkflowRepository
if o.CertGithubWorkflowRepository == "" && a[consts.ImageAnnotationCertGithubWorkflowRepository] != "" {
certGithubWorkflowRepository = a[consts.ImageAnnotationCertGithubWorkflowRepository]
}
if i.CertGithubWorkflowRepository != "" {
certGithubWorkflowRepository = i.CertGithubWorkflowRepository
}
l.Debugf("certGithubWorkflowRepository for image [%s]", certGithubWorkflowRepository)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifyKeylessSignature(ctx, s, certIdentity, certIdentityRegexp, certOidcIssuer, certOidcIssuerRegexp, certGithubWorkflowRepository, tlog, i.Name, rso, ro); err != nil {
l.Errorf("keyless signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("keyless signature verified for image [%s]", i.Name)
}
platform := o.Platform
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
if i.Platform != "" {
platform = i.Platform
}
rewrite := ""
if i.Rewrite != "" {
rewrite = i.Rewrite
}
if err := storeImage(ctx, s, i, platform, rso, ro, rewrite); err != nil {
return err
}
}
s.CopyAll(ctx, s.OCI, nil)
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ChartsContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release...", gvk.Version)
var alphaCfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.Charts
if err := convert.ConvertCharts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, ch := range v1Cfg.Spec.Charts {
if err := storeChart(ctx, s, ch,
&flags.AddChartOpts{
ChartOpts: &action.ChartPathOptions{
RepoURL: ch.RepoURL,
Version: ch.Version,
},
},
rso, ro,
"",
); err != nil {
return err
}
}
case "v1":
var cfg v1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
registry := o.Registry
if registry == "" {
annotation := cfg.GetAnnotations()
if annotation != nil {
registry = annotation[consts.ImageAnnotationRegistry]
}
}
for i, ch := range cfg.Spec.Charts {
if err := storeChart(ctx, s, ch,
&flags.AddChartOpts{
ChartOpts: &action.ChartPathOptions{
RepoURL: ch.RepoURL,
Version: ch.Version,
},
AddImages: ch.AddImages,
AddDependencies: ch.AddDependencies,
Registry: registry,
Platform: o.Platform,
},
rso, ro,
cfg.Spec.Charts[i].Rewrite,
); err != nil {
return err
}
}
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ChartsCollectionKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release...", gvk.Version)
var alphaCfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.ThickCharts
if err := convert.ConvertThickCharts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, chObj := range v1Cfg.Spec.Charts {
tc, err := tchart.NewThickChart(chObj, &action.ChartPathOptions{
RepoURL: chObj.RepoURL,
Version: chObj.Version,
})
if err != nil {
l.Errorf("signature verification failed for image [%s]. ** hauler will skip adding this image to the store **:\n%v", i.Name, err)
continue
return err
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
return err
}
l.Infof("signature verified for image [%s]", i.Name)
}
// Check if the user provided a platform. The flag from the CLI takes precedence over the annotation. The individual image platform takes precedence over both.
platform := o.Platform // cli flag
// if no cli flag but there was an annotation, use the annotation.
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
// the individual image platform trumps all
if i.Platform != "" {
platform = i.Platform
}
err = storeImage(ctx, s, i, platform)
if err != nil {
case "v1":
var cfg v1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
}
// sync with local index
s.CopyAll(ctx, s.OCI, nil)
for _, chObj := range cfg.Spec.Charts {
tc, err := tchart.NewThickChart(chObj, &action.ChartPathOptions{
RepoURL: chObj.RepoURL,
Version: chObj.Version,
})
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
return err
}
}
case v1alpha1.ChartsContentKind:
var cfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
for _, ch := range cfg.Spec.Charts {
// TODO: Provide a way to configure syncs
err := storeChart(ctx, s, ch, &action.ChartPathOptions{})
if err != nil {
case consts.ImageTxtsContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release...", gvk.Version)
var alphaCfg v1alpha1.ImageTxts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
}
case v1alpha1.K3sCollectionKind:
var cfg v1alpha1.K3s
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
k, err := k3s.NewK3s(cfg.Spec.Version)
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, k); err != nil {
return err
}
case v1alpha1.ChartsCollectionKind:
var cfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfg := range cfg.Spec.Charts {
tc, err := tchart.NewThickChart(cfg, &action.ChartPathOptions{
RepoURL: cfg.RepoURL,
Version: cfg.Version,
})
if err != nil {
var v1Cfg v1.ImageTxts
if err := convert.ConvertImageTxts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, cfgIt := range v1Cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", v1Cfg.Name, err)
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", v1Cfg.Name, err)
}
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
case "v1":
var cfg v1.ImageTxts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
}
case v1alpha1.ImageTxtsContentKind:
var cfg v1alpha1.ImageTxts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfgIt := range cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", cfg.Name, err)
for _, cfgIt := range cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", cfg.Name, err)
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", cfg.Name, err)
}
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", cfg.Name, err)
}
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
default:
return fmt.Errorf("unrecognized content/collection type: %s", obj.GroupVersionKind().String())
return fmt.Errorf("unsupported kind [%s]... valid kinds are [Files, Images, Charts, ThickCharts, ImageTxts]", gvk.Kind)
}
}
return nil
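processContent dispatches on each YAML document's kind and apiVersion: kinds Files, Images, Charts, ThickCharts, and ImageTxts are accepted, in either v1alpha1 (deprecated) or v1. A minimal sketch of the first step performed per document, reading apiVersion/kind before unmarshalling the full spec; the apiVersion group in the sample manifest is an assumption for illustration, the exact group strings live in the apis and consts packages rather than in this diff:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// sample manifest; the group "content.hauler.cattle.io" is assumed for illustration only
const doc = `
apiVersion: content.hauler.cattle.io/v1
kind: Images
spec:
  images:
    - name: busybox:1.36
`

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	var tm typeMeta
	if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
		panic(err)
	}
	// the switch above keys on the kind ("Images") and the version portion ("v1")
	fmt.Println(tm.Kind, tm.APIVersion)
}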


@@ -5,11 +5,12 @@ import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/internal/version"
)
func addVersion(parent *cobra.Command) {
var json bool
func addVersion(parent *cobra.Command, ro *flags.CliRootOpts) {
o := &flags.VersionOpts{}
cmd := &cobra.Command{
Use: "version",
@@ -22,7 +23,7 @@ func addVersion(parent *cobra.Command) {
v.FontName = "starwars"
cmd.SetOut(cmd.OutOrStdout())
if json {
if o.JSON {
out, err := v.JSONString()
if err != nil {
return fmt.Errorf("unable to generate JSON from version info: %w", err)
@@ -34,7 +35,7 @@ func addVersion(parent *cobra.Command) {
return nil
},
}
cmd.Flags().BoolVar(&json, "json", false, "toggle output in JSON")
o.AddFlags(cmd)
parent.AddCommand(cmd)
}
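flags.VersionOpts itself is not part of this diff; judging from the o.JSON read above and the old --json flag it replaces, it is presumably little more than the following sketch (not the repository's actual definition):

package flags

import "github.com/spf13/cobra"

// VersionOpts carries the options for the version subcommand.
type VersionOpts struct {
	JSON bool
}

func (o *VersionOpts) AddFlags(cmd *cobra.Command) {
	// mirrors the old local flag: --json toggles JSON output of the version info
	cmd.Flags().BoolVar(&o.JSON, "json", false, "toggle output in JSON")
}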


@@ -2,17 +2,13 @@ package main
import (
"context"
"embed"
"os"
"hauler.dev/go/hauler/cmd/hauler/cli"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/log"
)
//go:embed binaries/*
var binaries embed.FS
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -20,13 +16,7 @@ func main() {
logger := log.NewLogger(os.Stdout)
ctx = logger.WithContext(ctx)
// ensure cosign binary is available
if err := cosign.EnsureBinaryExists(ctx, binaries); err != nil {
logger.Errorf("%v", err)
os.Exit(1)
}
if err := cli.New().ExecuteContext(ctx); err != nil {
if err := cli.New(ctx, &flags.CliRootOpts{}).ExecuteContext(ctx); err != nil {
logger.Errorf("%v", err)
cancel()
os.Exit(1)

go.mod (417 changed lines)

@@ -1,175 +1,362 @@
module hauler.dev/go/hauler
go 1.23
go 1.25.5
replace github.com/sigstore/cosign/v3 => github.com/hauler-dev/cosign/v3 v3.0.5-0.20260212234448-00b85d677dfc
replace github.com/distribution/distribution/v3 => github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2
replace github.com/docker/cli => github.com/docker/cli v28.5.1+incompatible // needed to keep oras v1.2.7 working, which depends on docker/cli v28.5.1+incompatible
require (
github.com/common-nighthawk/go-figure v0.0.0-20210622060536-734e95fb86be
github.com/containerd/containerd v1.7.11
github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2
github.com/docker/go-metrics v0.0.1
github.com/google/go-containerregistry v0.16.1
github.com/gorilla/handlers v1.5.1
github.com/gorilla/mux v1.8.0
github.com/mholt/archiver/v3 v3.5.1
github.com/containerd/containerd v1.7.29
github.com/distribution/distribution/v3 v3.0.0
github.com/google/go-containerregistry v0.20.7
github.com/gorilla/handlers v1.5.2
github.com/gorilla/mux v1.8.1
github.com/mholt/archives v0.1.5
github.com/mitchellh/go-homedir v1.1.0
github.com/olekukonko/tablewriter v0.0.5
github.com/olekukonko/tablewriter v1.1.2
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc6
github.com/opencontainers/image-spec v1.1.1
github.com/pkg/errors v0.9.1
github.com/rs/zerolog v1.31.0
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.10.0
github.com/spf13/cobra v1.8.0
golang.org/x/sync v0.6.0
helm.sh/helm/v3 v3.14.2
k8s.io/apimachinery v0.29.0
k8s.io/client-go v0.29.0
oras.land/oras-go v1.2.5
github.com/rs/zerolog v1.34.0
github.com/sigstore/cosign/v3 v3.0.2
github.com/sirupsen/logrus v1.9.4
github.com/spf13/afero v1.15.0
github.com/spf13/cobra v1.10.2
golang.org/x/sync v0.19.0
gopkg.in/yaml.v3 v3.0.1
helm.sh/helm/v3 v3.19.0
k8s.io/apimachinery v0.35.0
k8s.io/client-go v0.35.0
oras.land/oras-go v1.2.7
)
require (
cloud.google.com/go/auth v0.18.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
cloud.google.com/go/compute/metadata v0.9.0 // indirect
cuelabs.dev/go/oci/ociregistry v0.0.0-20250722084951-074d06050084 // indirect
cuelang.org/go v0.15.3 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/AliyunContainerService/ack-ram-tool/pkg/credentials/provider v0.14.0 // indirect
github.com/Azure/azure-sdk-for-go v68.0.0+incompatible // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.29 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.23 // indirect
github.com/Azure/go-autorest/autorest/azure/auth v0.5.12 // indirect
github.com/Azure/go-autorest/autorest/azure/cli v0.4.6 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.2.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Masterminds/semver/v3 v3.4.0 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/hcsshim v0.11.4 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/STARRY-S/zip v0.2.3 // indirect
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d // indirect
github.com/andybalholm/brotli v1.0.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect
github.com/ThalesIgnite/crypto11 v1.2.5 // indirect
github.com/agnivade/levenshtein v1.2.1 // indirect
github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 // indirect
github.com/alibabacloud-go/cr-20160607 v1.0.1 // indirect
github.com/alibabacloud-go/cr-20181201 v1.0.10 // indirect
github.com/alibabacloud-go/darabonba-openapi v0.2.1 // indirect
github.com/alibabacloud-go/debug v1.0.0 // indirect
github.com/alibabacloud-go/endpoint-util v1.1.1 // indirect
github.com/alibabacloud-go/openapi-util v0.1.0 // indirect
github.com/alibabacloud-go/tea v1.2.1 // indirect
github.com/alibabacloud-go/tea-utils v1.4.5 // indirect
github.com/alibabacloud-go/tea-xml v1.1.3 // indirect
github.com/aliyun/credentials-go v1.3.2 // indirect
github.com/andybalholm/brotli v1.2.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aws/aws-sdk-go-v2 v1.41.0 // indirect
github.com/aws/aws-sdk-go-v2/config v1.32.5 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.19.5 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
github.com/aws/aws-sdk-go-v2/service/ecr v1.51.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ecrpublic v1.38.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 // indirect
github.com/aws/aws-sdk-go-v2/service/signin v1.0.4 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.7 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.12 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 // indirect
github.com/aws/smithy-go v1.24.0 // indirect
github.com/awslabs/amazon-ecr-credential-helper/ecr-login v0.11.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect
github.com/bodgit/sevenzip v1.6.1 // indirect
github.com/bodgit/windows v1.0.1 // indirect
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 // indirect
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd // indirect
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b // indirect
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/buildkite/agent/v3 v3.115.2 // indirect
github.com/buildkite/go-pipeline v0.16.0 // indirect
github.com/buildkite/interpolate v0.1.5 // indirect
github.com/buildkite/roko v1.4.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/chrismellard/docker-credential-acr-env v0.0.0-20230304212654-82a0ddb27589 // indirect
github.com/chzyer/readline v1.5.1 // indirect
github.com/clbanning/mxj/v2 v2.7.0 // indirect
github.com/clipperhouse/displaywidth v0.6.0 // indirect
github.com/clipperhouse/stringish v0.1.1 // indirect
github.com/clipperhouse/uax29/v2 v2.3.0 // indirect
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/containerd/platforms v1.0.0-rc.2 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.18.1 // indirect
github.com/coreos/go-oidc/v3 v3.17.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20241213102144-19d51d7fe467 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/digitorus/pkcs7 v0.0.0-20230818184609-3a137a874352 // indirect
github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7 // indirect
github.com/dimchansky/utfbom v1.1.1 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v29.0.3+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker v25.0.6+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/docker v28.5.2+incompatible // indirect
github.com/docker/docker-credential-helpers v0.9.4 // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/emicklei/go-restful/v3 v3.13.0 // indirect
github.com/emicklei/proto v1.14.2 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/go-chi/chi/v5 v5.2.4 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-openapi/analysis v0.24.1 // indirect
github.com/go-openapi/errors v0.22.6 // indirect
github.com/go-openapi/jsonpointer v0.22.4 // indirect
github.com/go-openapi/jsonreference v0.21.4 // indirect
github.com/go-openapi/loads v0.23.2 // indirect
github.com/go-openapi/runtime v0.29.2 // indirect
github.com/go-openapi/spec v0.22.3 // indirect
github.com/go-openapi/strfmt v0.25.0 // indirect
github.com/go-openapi/swag v0.25.4 // indirect
github.com/go-openapi/swag/cmdutils v0.25.4 // indirect
github.com/go-openapi/swag/conv v0.25.4 // indirect
github.com/go-openapi/swag/fileutils v0.25.4 // indirect
github.com/go-openapi/swag/jsonname v0.25.4 // indirect
github.com/go-openapi/swag/jsonutils v0.25.4 // indirect
github.com/go-openapi/swag/loading v0.25.4 // indirect
github.com/go-openapi/swag/mangling v0.25.4 // indirect
github.com/go-openapi/swag/netutils v0.25.4 // indirect
github.com/go-openapi/swag/stringutils v0.25.4 // indirect
github.com/go-openapi/swag/typeutils v0.25.4 // indirect
github.com/go-openapi/swag/yamlutils v0.25.4 // indirect
github.com/go-openapi/validate v0.25.1 // indirect
github.com/go-piv/piv-go/v2 v2.4.0 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/gomodule/redigo v1.8.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/certificate-transparency-go v1.3.2 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/go-github/v73 v73.0.0 // indirect
github.com/google/go-querystring v1.2.0 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.9 // indirect
github.com/googleapis/gax-go/v2 v2.16.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/huandu/xstrings v1.4.0 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/hashicorp/go-retryablehttp v0.7.8 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/in-toto/attestation v1.1.2 // indirect
github.com/in-toto/in-toto-golang v0.9.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.3.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.5 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lestrrat-go/blackmagic v1.0.4 // indirect
github.com/lestrrat-go/dsig v1.0.0 // indirect
github.com/lestrrat-go/dsig-secp256k1 v1.0.0 // indirect
github.com/lestrrat-go/httpcc v1.0.1 // indirect
github.com/lestrrat-go/httprc/v3 v3.0.1 // indirect
github.com/lestrrat-go/jwx/v3 v3.0.12 // indirect
github.com/lestrrat-go/option v1.0.1 // indirect
github.com/lestrrat-go/option/v2 v2.0.0 // indirect
github.com/letsencrypt/boulder v0.20251110.0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/manifoldco/promptui v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.19 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mikelolasagasti/xz v1.0.1 // indirect
github.com/minio/minlz v1.0.1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/mozillazg/docker-credential-acr-helper v0.4.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/nwaples/rardecode v1.1.0 // indirect
github.com/nozzle/throttler v0.0.0-20180817012639-2ea982251481 // indirect
github.com/nwaples/rardecode/v2 v2.2.1 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/oleiade/reflections v1.1.0 // indirect
github.com/olekukonko/cat v0.0.0-20250911104152-50322a0618f6 // indirect
github.com/olekukonko/errors v1.1.0 // indirect
github.com/olekukonko/ll v0.1.3 // indirect
github.com/open-policy-agent/opa v1.12.1 // indirect
github.com/pborman/uuid v1.2.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pierrec/lz4/v4 v4.1.2 // indirect
github.com/prometheus/client_golang v1.16.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/rubenv/sql-migrate v1.5.2 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.5 // indirect
github.com/prometheus/procfs v0.17.0 // indirect
github.com/protocolbuffers/txtpbfmt v0.0.0-20251016062345-16587c79cd91 // indirect
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/rubenv/sql-migrate v1.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/ulikunitz/xz v0.5.9 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 // indirect
github.com/sassoftware/relic v7.2.1+incompatible // indirect
github.com/secure-systems-lab/go-securesystemslib v0.10.0 // indirect
github.com/segmentio/asm v1.2.1 // indirect
github.com/shibumi/go-pathspec v1.3.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sigstore/fulcio v1.8.5 // indirect
github.com/sigstore/protobuf-specs v0.5.0 // indirect
github.com/sigstore/rekor v1.5.0 // indirect
github.com/sigstore/rekor-tiles/v2 v2.0.1 // indirect
github.com/sigstore/sigstore v1.10.4 // indirect
github.com/sigstore/sigstore-go v1.1.4 // indirect
github.com/sigstore/timestamp-authority/v2 v2.0.4 // indirect
github.com/sorairolake/lzip-go v0.3.8 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/spf13/viper v1.21.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d // indirect
github.com/tchap/go-patricia/v2 v2.3.3 // indirect
github.com/thales-e-security/pool v0.0.2 // indirect
github.com/theupdateframework/go-tuf v0.7.0 // indirect
github.com/theupdateframework/go-tuf/v2 v2.3.1 // indirect
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect
github.com/tjfoc/gmsm v1.4.1 // indirect
github.com/transparency-dev/formats v0.0.0-20251017110053-404c0d5b696c // indirect
github.com/transparency-dev/merkle v0.0.2 // indirect
github.com/ulikunitz/xz v0.5.15 // indirect
github.com/valyala/fastjson v1.6.4 // indirect
github.com/vbatts/tar-split v0.12.2 // indirect
github.com/vektah/gqlparser/v2 v2.5.31 // indirect
github.com/withfig/autocomplete-tools/integrations/cobra v1.2.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 // indirect
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 // indirect
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/net v0.23.0 // indirect
golang.org/x/oauth2 v0.10.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/term v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect
google.golang.org/grpc v1.58.3 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gitlab.com/gitlab-org/api/client-go v1.11.0 // indirect
go.mongodb.org/mongo-driver v1.17.6 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/sdk v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
go4.org v0.0.0-20230225012048-214862532bf5 // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/mod v0.31.0 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/oauth2 v0.34.0 // indirect
golang.org/x/sys v0.39.0 // indirect
golang.org/x/term v0.38.0 // indirect
golang.org/x/text v0.32.0 // indirect
golang.org/x/time v0.14.0 // indirect
google.golang.org/api v0.260.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect
google.golang.org/grpc v1.78.0 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.29.0 // indirect
k8s.io/apiextensions-apiserver v0.29.0 // indirect
k8s.io/apiserver v0.29.0 // indirect
k8s.io/cli-runtime v0.29.0 // indirect
k8s.io/component-base v0.29.0 // indirect
k8s.io/klog/v2 v2.110.1 // indirect
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
k8s.io/kubectl v0.29.0 // indirect
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
k8s.io/api v0.35.0 // indirect
k8s.io/apiextensions-apiserver v0.34.0 // indirect
k8s.io/apiserver v0.34.0 // indirect
k8s.io/cli-runtime v0.34.0 // indirect
k8s.io/component-base v0.34.0 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
k8s.io/kubectl v0.34.0 // indirect
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
oras.land/oras-go/v2 v2.6.0 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/kustomize/api v0.20.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.20.1 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/release-utils v0.12.3 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)

go.sum (1412 changed lines)

File diff suppressed because it is too large

@@ -17,6 +17,10 @@
# - curl -sfL https://get.hauler.dev | HAULER_INSTALL_DIR=/usr/local/bin bash
# - HAULER_INSTALL_DIR=/usr/local/bin ./install.sh
#
# Set Hauler Directory
# - curl -sfL https://get.hauler.dev | HAULER_DIR=$HOME/.hauler bash
# - HAULER_DIR=$HOME/.hauler ./install.sh
#
# Debug Usage:
# - curl -sfL https://get.hauler.dev | HAULER_DEBUG=true bash
# - HAULER_DEBUG=true ./install.sh
@@ -65,7 +69,7 @@ done
# set install directory from argument or environment variable
HAULER_INSTALL_DIR=${HAULER_INSTALL_DIR:-/usr/local/bin}
# ensure install directory exists
# ensure install directory exists and/or create it
if [ ! -d "${HAULER_INSTALL_DIR}" ]; then
mkdir -p "${HAULER_INSTALL_DIR}" || fatal "Failed to Create Install Directory: ${HAULER_INSTALL_DIR}"
fi
@@ -82,8 +86,8 @@ if [ "${HAULER_UNINSTALL}" = "true" ]; then
# remove the hauler binary
rm -rf "${HAULER_INSTALL_DIR}/hauler" || fatal "Failed to Remove Hauler from ${HAULER_INSTALL_DIR}"
# remove the working directory
rm -rf "$HOME/.hauler" || fatal "Failed to Remove Hauler Directory: $HOME/.hauler"
# remove the hauler directory
rm -rf "${HAULER_DIR}" || fatal "Failed to Remove Hauler Directory: ${HAULER_DIR}"
info "Successfully Uninstalled Hauler" && echo
exit 0
@@ -110,7 +114,7 @@ case $PLATFORM in
PLATFORM="darwin"
;;
*)
fatal "Unsupported Platform: $PLATFORM"
fatal "Unsupported Platform: ${PLATFORM}"
;;
esac
@@ -124,29 +128,33 @@ case $ARCH in
ARCH="arm64"
;;
*)
fatal "Unsupported Architecture: $ARCH"
fatal "Unsupported Architecture: ${ARCH}"
;;
esac
# set hauler directory from argument or environment variable
HAULER_DIR=${HAULER_DIR:-$HOME/.hauler}
# start hauler installation
info "Starting Installation..."
# display the version, platform, and architecture
verbose "- Version: v${HAULER_VERSION}"
verbose "- Platform: $PLATFORM"
verbose "- Architecture: $ARCH"
verbose "- Platform: ${PLATFORM}"
verbose "- Architecture: ${ARCH}"
verbose "- Install Directory: ${HAULER_INSTALL_DIR}"
verbose "- Hauler Directory: ${HAULER_DIR}"
# check working directory and/or create it
if [ ! -d "$HOME/.hauler" ]; then
mkdir -p "$HOME/.hauler" || fatal "Failed to Create Directory: $HOME/.hauler"
# ensure hauler directory exists and/or create it
if [ ! -d "${HAULER_DIR}" ]; then
mkdir -p "${HAULER_DIR}" || fatal "Failed to Create Hauler Directory: ${HAULER_DIR}"
fi
# update permissions of working directory
chmod -R 777 "$HOME/.hauler" || fatal "Failed to Update Permissions of Directory: $HOME/.hauler"
# ensure hauler directory is writable (by user or root privileges)
chmod -R 777 "${HAULER_DIR}" || fatal "Failed to Update Permissions of Hauler Directory: ${HAULER_DIR}"
# change to working directory
cd "$HOME/.hauler" || fatal "Failed to Change Directory: $HOME/.hauler"
# change to hauler directory
cd "${HAULER_DIR}" || fatal "Failed to Change Directory to Hauler Directory: ${HAULER_DIR}"
# start hauler artifacts download
info "Starting Download..."


@@ -7,15 +7,29 @@ import (
type AddImageOpts struct {
*StoreRootOpts
Name string
Key string
Platform string
Name string
Key string
CertOidcIssuer string
CertOidcIssuerRegexp string
CertIdentity string
CertIdentityRegexp string
CertGithubWorkflowRepository string
Tlog bool
Platform string
Rewrite string
}
func (o *AddImageOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Location of public key to use for signature verification")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specifiy the platform of the image... i.e. linux/amd64 (defaults to all)")
f.StringVar(&o.CertIdentity, "certificate-identity", "", "(Optional) Cosign certificate-identity (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertIdentityRegexp, "certificate-identity-regexp", "", "(Optional) Cosign certificate-identity-regexp (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertOidcIssuer, "certificate-oidc-issuer", "", "(Optional) Cosign option to validate oidc issuer")
f.StringVar(&o.CertOidcIssuerRegexp, "certificate-oidc-issuer-regexp", "", "(Optional) Cosign option to validate oidc issuer with regex")
f.StringVar(&o.CertGithubWorkflowRepository, "certificate-github-workflow-repository", "", "(Optional) Cosign certificate-github-workflow-repository option")
f.BoolVar(&o.Tlog, "use-tlog-verify", false, "(Optional) Allow transparency log verification (defaults to false)")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform of the image... i.e. linux/amd64 (defaults to all)")
f.StringVar(&o.Rewrite, "rewrite", "", "(EXPERIMENTAL & Optional) Rewrite artifact path to specified string")
}
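The sync code earlier in this diff resolves each of these settings with the same precedence: an image-level value in the manifest wins, then the CLI flag, then the store-level annotation. A distilled, self-contained sketch of that rule (the helper name and values are ours, not the repository's):

package main

import "fmt"

// resolve applies the precedence used in sync.go: image-level > CLI flag > annotation.
func resolve(cliVal, annotationVal, imageVal string) string {
	v := cliVal
	if v == "" && annotationVal != "" {
		v = annotationVal
	}
	if imageVal != "" {
		v = imageVal
	}
	return v
}

func main() {
	// hypothetical values: CLI flag unset, annotation set, image-level override set
	fmt.Println(resolve("", "/keys/store.pub", "/keys/image.pub")) // /keys/image.pub
}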
type AddFileOpts struct {
@@ -31,19 +45,37 @@ func (o *AddFileOpts) AddFlags(cmd *cobra.Command) {
type AddChartOpts struct {
*StoreRootOpts
ChartOpts *action.ChartPathOptions
ChartOpts *action.ChartPathOptions
Rewrite string
AddDependencies bool
AddImages bool
HelmValues string
Platform string
Registry string
KubeVersion string
}
func (o *AddChartOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVar(&o.ChartOpts.RepoURL, "repo", "", "Location of the chart (https:// | http:// | oci://)")
f.StringVar(&o.ChartOpts.Version, "version", "", "(Optional) Specifiy the version of the chart (v1.0.0 | 2.0.0 | ^2.0.0)")
f.StringVar(&o.ChartOpts.Version, "version", "", "(Optional) Specify the version of the chart (v1.0.0 | 2.0.0 | ^2.0.0)")
f.BoolVar(&o.ChartOpts.Verify, "verify", false, "(Optional) Verify the chart before fetching it")
f.StringVar(&o.ChartOpts.Username, "username", "", "(Optional) Username to use for authentication")
f.StringVar(&o.ChartOpts.Password, "password", "", "(Optional) Password to use for authentication")
f.StringVar(&o.ChartOpts.CertFile, "cert-file", "", "(Optional) Location of the TLS Certificate to use for client authenication")
f.StringVar(&o.ChartOpts.KeyFile, "key-file", "", "(Optional) Location of the TLS Key to use for client authenication")
f.StringVar(&o.ChartOpts.CertFile, "cert-file", "", "(Optional) Location of the TLS Certificate to use for client authentication")
f.StringVar(&o.ChartOpts.KeyFile, "key-file", "", "(Optional) Location of the TLS Key to use for client authentication")
f.BoolVar(&o.ChartOpts.InsecureSkipTLSverify, "insecure-skip-tls-verify", false, "(Optional) Skip TLS certificate verification")
f.StringVar(&o.ChartOpts.CaFile, "ca-file", "", "(Optional) Location of CA Bundle to enable certification verification")
f.StringVar(&o.Rewrite, "rewrite", "", "(EXPERIMENTAL & Optional) Rewrite artifact path to specified string")
cmd.MarkFlagsRequiredTogether("username", "password")
cmd.MarkFlagsRequiredTogether("cert-file", "key-file", "ca-file")
cmd.Flags().BoolVar(&o.AddDependencies, "add-dependencies", false, "(EXPERIMENTAL & Optional) Fetch dependent helm charts")
f.BoolVar(&o.AddImages, "add-images", false, "(EXPERIMENTAL & Optional) Fetch images referenced in helm charts")
f.StringVar(&o.HelmValues, "values", "", "(EXPERIMENTAL & Optional) Specify helm chart values when fetching images")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform of the image, e.g. linux/amd64")
f.StringVarP(&o.Registry, "registry", "g", "", "(Optional) Specify the registry of the image for images that do not already define one")
f.StringVar(&o.KubeVersion, "kube-version", "v1.34.1", "(EXPERIMENTAL & Optional) Override the kubernetes version for helm template rendering")
}


@@ -1,5 +1,17 @@
package flags
import "github.com/spf13/cobra"
type CliRootOpts struct {
LogLevel string
LogLevel string
HaulerDir string
IgnoreErrors bool
}
func AddRootFlags(cmd *cobra.Command, ro *CliRootOpts) {
pf := cmd.PersistentFlags()
pf.StringVarP(&ro.LogLevel, "log-level", "l", "info", "Set the logging level (i.e. info, debug, warn)")
pf.StringVarP(&ro.HaulerDir, "haulerdir", "d", "", "Set the location of the hauler directory (default $HOME/.hauler)")
pf.BoolVar(&ro.IgnoreErrors, "ignore-errors", false, "Ignore/Bypass errors (i.e. warn on error) (defaults false)")
}
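These persistent flags are attached once to the root command and inherited by every subcommand; a minimal sketch of the wiring (the command construction here is illustrative, the real setup lives in the cli package):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"hauler.dev/go/hauler/internal/flags"
)

func main() {
	ro := &flags.CliRootOpts{}
	root := &cobra.Command{Use: "hauler"}

	// --log-level, --haulerdir, and --ignore-errors become available to every subcommand
	flags.AddRootFlags(root, ro)

	if err := root.Execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}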


@@ -9,13 +9,30 @@ type CopyOpts struct {
Password string
Insecure bool
PlainHTTP bool
Only string
}
func (o *CopyOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Username, "username", "u", "", "(Optional) Username to use for authentication")
f.StringVarP(&o.Password, "password", "p", "", "(Optional) Password to use for authentication")
f.StringVarP(&o.Username, "username", "u", "", "(Deprecated) Please use 'hauler login'")
f.StringVarP(&o.Password, "password", "p", "", "(Deprecated) Please use 'hauler login'")
f.BoolVar(&o.Insecure, "insecure", false, "(Optional) Allow insecure connections")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "(Optional) Allow plain HTTP connections")
f.StringVarP(&o.Only, "only", "o", "", "(Optional) Custom string array to only copy specific 'image' items")
cmd.MarkFlagsRequiredTogether("username", "password")
if err := f.MarkDeprecated("username", "please use 'hauler login'"); err != nil {
panic(err)
}
if err := f.MarkDeprecated("password", "please use 'hauler login'"); err != nil {
panic(err)
}
if err := f.MarkHidden("username"); err != nil {
panic(err)
}
if err := f.MarkHidden("password"); err != nil {
panic(err)
}
}
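MarkDeprecated and MarkHidden are standard pflag behaviors: the flag continues to parse, disappears from help output, and prints a deprecation notice when used. A standalone sketch of the pattern (command and flag names here are illustrative):

package main

import "github.com/spf13/cobra"

func main() {
	var username string
	cmd := &cobra.Command{
		Use:  "copy",
		RunE: func(cmd *cobra.Command, args []string) error { return nil },
	}

	f := cmd.Flags()
	f.StringVarP(&username, "username", "u", "", "(Deprecated) Please use 'hauler login'")

	// still accepted on the command line, but warns on use and is hidden from --help
	_ = f.MarkDeprecated("username", "please use 'hauler login'")
	_ = f.MarkHidden("username")

	_ = cmd.Execute()
}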


@@ -10,5 +10,5 @@ type ExtractOpts struct {
func (o *ExtractOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.DestinationDir, "output", "o", "", "(Optional) Specify the directory to output (defaults to current directory)")
f.StringVarP(&o.DestinationDir, "output", "o", "", "(Optional) Set the directory to output (defaults to current directory)")
}


@@ -9,12 +9,14 @@ type InfoOpts struct {
TypeFilter string
SizeUnit string
ListRepos bool
ShowDigests bool
}
func (o *InfoOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.OutputFormat, "output", "o", "table", "(Optional) Specify the output format (table | json)")
f.StringVarP(&o.TypeFilter, "type", "t", "all", "(Optional) Filter on content type (image | chart | file | sigs | atts | sbom)")
f.StringVar(&o.TypeFilter, "type", "all", "(Optional) Filter on content type (image | chart | file | sigs | atts | sbom)")
f.BoolVar(&o.ListRepos, "list-repos", false, "(Optional) List all repository names")
f.BoolVar(&o.ShowDigests, "digests", false, "(Optional) Show digests of each artifact in the output table")
}


@@ -1,18 +1,17 @@
package flags
import "github.com/spf13/cobra"
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type LoadOpts struct {
*StoreRootOpts
TempOverride string
FileName []string
}
func (o *LoadOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
// On Unix systems, the default is $TMPDIR if non-empty, else /tmp.
// On Windows, the default is GetTempPath, returning the first non-empty
// value from %TMP%, %TEMP%, %USERPROFILE%, or the Windows directory.
// On Plan 9, the default is /tmp.
f.StringVarP(&o.TempOverride, "tempdir", "t", "", "(Optional) Override the default temporary directory determined by the OS")
f.StringSliceVarP(&o.FileName, "filename", "f", []string{consts.DefaultHaulerArchiveName}, "(Optional) Specify the name of inputted haul(s)")
}
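The --tempdir override presumably feeds os.MkdirTemp the same way rso.TempOverride does in SyncCmd above: an empty value falls back to the OS default described in the comment. A small sketch (the override path and name pattern are examples):

package main

import (
	"fmt"
	"os"
)

func main() {
	// with an empty first argument, os.MkdirTemp uses os.TempDir()
	// ($TMPDIR on Unix, %TMP%/%TEMP%/%USERPROFILE% on Windows); a non-empty value overrides it
	override := "" // e.g. "/data/scratch" when the default temp location is too small
	dir, err := os.MkdirTemp(override, "hauler-")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	fmt.Println("using temporary directory:", dir)
}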


@@ -1,16 +0,0 @@
package flags
import "github.com/spf13/cobra"
type LoginOpts struct {
Username string
Password string
PasswordStdin bool
}
func (o *LoginOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Username, "username", "u", "", "(Optional) Username to use for authentication")
f.StringVarP(&o.Password, "password", "p", "", "(Optional) Password to use for authentication")
f.BoolVar(&o.PasswordStdin, "password-stdin", false, "(Optional) Password to use for authentication (from stdin)")
}

internal/flags/remove.go Normal file (11 lines)
View File

@@ -0,0 +1,11 @@
package flags
import "github.com/spf13/cobra"
type RemoveOpts struct {
Force bool // skip remove confirmation
}
func (o *RemoveOpts) AddFlags(cmd *cobra.Command) {
cmd.Flags().BoolVarP(&o.Force, "force", "f", false, "(Optional) Remove artifact(s) without confirmation")
}

View File

@@ -1,16 +1,22 @@
package flags
import "github.com/spf13/cobra"
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type SaveOpts struct {
*StoreRootOpts
FileName string
Platform string
FileName string
Platform string
ContainerdCompatibility bool
}
func (o *SaveOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.FileName, "filename", "f", "haul.tar.zst", "(Optional) Specify the name of outputted archive")
f.StringVarP(&o.FileName, "filename", "f", consts.DefaultHaulerArchiveName, "(Optional) Specify the name of outputted haul")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform for runtime imports... i.e. linux/amd64 (unspecified implies all)")
f.BoolVar(&o.ContainerdCompatibility, "containerd", false, "(Optional) Enable import compatibility with containerd... removes oci-layout from the haul")
}

View File

@@ -1,11 +1,8 @@
package flags
import (
"fmt"
"net/http"
"github.com/distribution/distribution/v3/configuration"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type ServeRegistryOpts struct {
@@ -23,8 +20,8 @@ type ServeRegistryOpts struct {
func (o *ServeRegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 5000, "(Optional) Specify the port to use for incoming connections")
f.StringVar(&o.RootDir, "directory", "registry", "(Optional) Directory to use for backend. Defaults to $PWD/registry")
f.IntVarP(&o.Port, "port", "p", consts.DefaultRegistryPort, "(Optional) Set the port to use for incoming connections")
f.StringVar(&o.RootDir, "directory", consts.DefaultRegistryRootDir, "(Optional) Directory to use for backend. Defaults to $PWD/registry")
f.StringVarP(&o.ConfigFile, "config", "c", "", "(Optional) Location of config file (overrides all flags)")
f.BoolVar(&o.ReadOnly, "readonly", true, "(Optional) Run the registry as readonly")
@@ -34,34 +31,6 @@ func (o *ServeRegistryOpts) AddFlags(cmd *cobra.Command) {
cmd.MarkFlagsRequiredTogether("tls-cert", "tls-key")
}
func (o *ServeRegistryOpts) DefaultRegistryConfig() *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.RootDir},
"maintenance": configuration.Parameters{
"readonly": map[any]any{"enabled": o.ReadOnly},
},
},
}
if o.TLSCert != "" && o.TLSKey != "" {
cfg.HTTP.TLS.Certificate = o.TLSCert
cfg.HTTP.TLS.Key = o.TLSKey
}
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
}
cfg.Log.Level = "info"
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
return cfg
}
type ServeFilesOpts struct {
*StoreRootOpts
@@ -76,9 +45,9 @@ type ServeFilesOpts struct {
func (o *ServeFilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 8080, "(Optional) Specify the port to use for incoming connections")
f.IntVarP(&o.Timeout, "timeout", "t", 60, "(Optional) Timeout duration for HTTP Requests in seconds for both reads/writes")
f.StringVar(&o.RootDir, "directory", "fileserver", "(Optional) Directory to use for backend. Defaults to $PWD/fileserver")
f.IntVarP(&o.Port, "port", "p", consts.DefaultFileserverPort, "(Optional) Set the port to use for incoming connections")
f.IntVar(&o.Timeout, "timeout", consts.DefaultFileserverTimeout, "(Optional) Timeout duration for HTTP Requests in seconds for both reads/writes")
f.StringVar(&o.RootDir, "directory", consts.DefaultFileserverRootDir, "(Optional) Directory to use for backend. Defaults to $PWD/fileserver")
f.StringVar(&o.TLSCert, "tls-cert", "", "(Optional) Location of the TLS Certificate to use for server authentication")
f.StringVar(&o.TLSKey, "tls-key", "", "(Optional) Location of the TLS Key to use for server authentication")

View File

@@ -13,29 +13,42 @@ import (
)
type StoreRootOpts struct {
StoreDir string
CacheDir string
StoreDir string
Retries int
TempOverride string
}
func (o *StoreRootOpts) AddFlags(cmd *cobra.Command) {
pf := cmd.PersistentFlags()
pf.StringVarP(&o.StoreDir, "store", "s", consts.DefaultStoreName, "(Optional) Specify the directory to use for the content store")
pf.StringVar(&o.CacheDir, "cache", "", "(deprecated flag and currently not used)")
pf.StringVarP(&o.StoreDir, "store", "s", "", "Set the directory to use for the content store")
pf.IntVarP(&o.Retries, "retries", "r", consts.DefaultRetries, "Set the number of retries for operations")
pf.StringVarP(&o.TempOverride, "tempdir", "t", "", "(Optional) Override the default temporary directory determined by the OS")
}
func (o *StoreRootOpts) Store(ctx context.Context) (*store.Layout, error) {
l := log.FromContext(ctx)
dir := o.StoreDir
abs, err := filepath.Abs(dir)
storeDir := o.StoreDir
if storeDir == "" {
storeDir = os.Getenv(consts.HaulerStoreDir)
}
if storeDir == "" {
storeDir = consts.DefaultStoreName
}
abs, err := filepath.Abs(storeDir)
if err != nil {
return nil, err
}
l.Debugf("using store at %s", abs)
o.StoreDir = abs
l.Debugf("using store at [%s]", abs)
if _, err := os.Stat(abs); errors.Is(err, os.ErrNotExist) {
err := os.Mkdir(abs, os.ModePerm)
if err != nil {
if err := os.MkdirAll(abs, os.ModePerm); err != nil {
return nil, err
}
} else if err != nil {
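The resolution order above is worth spelling out; a minimal sketch of the same precedence (the helper name resolveStoreDir is hypothetical): the explicit --store flag wins, then the HAULER_STORE_DIR environment variable, then the built-in default store directory.

package main

import (
	"fmt"
	"os"
)

// resolveStoreDir is a hypothetical helper mirroring the precedence above:
// --store flag value, then HAULER_STORE_DIR, then the default "store".
func resolveStoreDir(flagValue string) string {
	if flagValue != "" {
		return flagValue
	}
	if env := os.Getenv("HAULER_STORE_DIR"); env != "" {
		return env
	}
	return "store"
}

func main() {
	os.Setenv("HAULER_STORE_DIR", "/data/haul-store")
	fmt.Println(resolveStoreDir(""))        // /data/haul-store (env beats default)
	fmt.Println(resolveStoreDir("./store")) // ./store (flag beats env)
}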

View File

@@ -1,24 +1,41 @@
package flags
import "github.com/spf13/cobra"
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type SyncOpts struct {
*StoreRootOpts
ContentFiles []string
Key string
Products []string
Platform string
Registry string
ProductRegistry string
FileName []string
Key string
CertOidcIssuer string
CertOidcIssuerRegexp string
CertIdentity string
CertIdentityRegexp string
CertGithubWorkflowRepository string
Products []string
Platform string
Registry string
ProductRegistry string
Tlog bool
Rewrite string
}
func (o *SyncOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringSliceVarP(&o.ContentFiles, "files", "f", []string{}, "Location of content manifests (files)... i.e. --files ./rke2-files.yaml")
f.StringSliceVarP(&o.FileName, "filename", "f", []string{consts.DefaultHaulerManifestName}, "Specify the name of manifest(s) to sync")
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Location of public key to use for signature verification")
f.StringSliceVar(&o.Products, "products", []string{}, "(Optional) Specify the product name to fetch collections from the product registry i.e. rancher=v2.8.5,rke2=v1.28.11+rke2r1")
f.StringVar(&o.CertIdentity, "certificate-identity", "", "(Optional) Cosign certificate-identity (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertIdentityRegexp, "certificate-identity-regexp", "", "(Optional) Cosign certificate-identity-regexp (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertOidcIssuer, "certificate-oidc-issuer", "", "(Optional) Cosign option to validate oidc issuer")
f.StringVar(&o.CertOidcIssuerRegexp, "certificate-oidc-issuer-regexp", "", "(Optional) Cosign option to validate oidc issuer with regex")
f.StringVar(&o.CertGithubWorkflowRepository, "certificate-github-workflow-repository", "", "(Optional) Cosign certificate-github-workflow-repository option")
f.StringSliceVar(&o.Products, "products", []string{}, "(Optional) Specify the product name to fetch collections from the product registry i.e. rancher=v2.10.1,rke2=v1.31.5+rke2r1")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform of the image... i.e. linux/amd64 (defaults to all)")
f.StringVarP(&o.Registry, "registry", "r", "", "(Optional) Specify the registry of the image for images that do not already define one")
f.StringVarP(&o.Registry, "registry", "g", "", "(Optional) Specify the registry of the image for images that do not already define one")
f.StringVarP(&o.ProductRegistry, "product-registry", "c", "", "(Optional) Specify the product registry. Defaults to RGS Carbide Registry (rgcrprod.azurecr.us)")
f.BoolVar(&o.Tlog, "use-tlog-verify", false, "(Optional) Allow transparency log verification (defaults to false)")
f.StringVar(&o.Rewrite, "rewrite", "", "(EXPERIMENTAL & Optional) Rewrite artifact path to specified string")
}

internal/flags/version.go Normal file (12 lines)
View File

@@ -0,0 +1,12 @@
package flags
import "github.com/spf13/cobra"
type VersionOpts struct {
JSON bool
}
func (o *VersionOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.BoolVar(&o.JSON, "json", false, "Set the output format to JSON")
}

View File

@@ -36,7 +36,7 @@ func Images() map[string]Fn {
m := make(map[string]Fn)
manifestMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "manifest.json", nil
return consts.ImageManifestFile, nil
})
for _, l := range []string{consts.DockerManifestSchema2, consts.DockerManifestListSchema2, consts.OCIManifestSchema1} {
@@ -52,7 +52,7 @@ func Images() map[string]Fn {
}
configMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "config.json", nil
return consts.ImageConfigFile, nil
})
for _, l := range []string{consts.DockerConfigJSON} {

View File

@@ -10,6 +10,7 @@ import (
"github.com/gorilla/handlers"
"github.com/gorilla/mux"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/consts"
)
// NewFile returns a fileserver
@@ -22,11 +23,11 @@ func NewFile(ctx context.Context, cfg flags.ServeFilesOpts) (Server, error) {
}
if cfg.Port == 0 {
cfg.Port = 8080
cfg.Port = consts.DefaultFileserverPort
}
if cfg.Timeout == 0 {
cfg.Timeout = 60
cfg.Timeout = consts.DefaultFileserverTimeout
}
srv := &http.Server{

View File

@@ -11,7 +11,6 @@ import (
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/registry"
"github.com/distribution/distribution/v3/registry/handlers"
"github.com/docker/go-metrics"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -22,14 +21,6 @@ func NewRegistry(ctx context.Context, cfg *configuration.Configuration) (*regist
return nil, err
}
if cfg.HTTP.Debug.Prometheus.Enabled {
path := cfg.HTTP.Debug.Prometheus.Path
if path == "" {
path = "/metrics"
}
http.Handle(path, metrics.Handler())
}
return r, nil
}
@@ -45,7 +36,7 @@ func NewTempRegistry(ctx context.Context, root string) *tmpRegistryServer {
"filesystem": configuration.Parameters{"rootdirectory": root},
},
}
// Add validation configuration
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
cfg.Log.Level = "error"

View File

@@ -28,10 +28,9 @@ import (
"time"
"github.com/common-nighthawk/go-figure"
"hauler.dev/go/hauler/pkg/consts"
)
const unknown = "unknown"
// Base version information.
//
// This is the fallback data used when version information from git is not
@@ -41,19 +40,19 @@ var (
// branch should be tagged using the correct versioning strategy.
gitVersion = "devel"
// SHA1 from git, output of $(git rev-parse HEAD)
gitCommit = unknown
gitCommit = consts.Unknown
// State of git tree, either "clean" or "dirty"
gitTreeState = unknown
gitTreeState = consts.Unknown
// Build date in ISO8601 format, output of $(date -u +'%Y-%m-%dT%H:%M:%SZ')
buildDate = unknown
buildDate = consts.Unknown
// flag to print the ascii name banner
asciiName = "true"
// goVersion is the used golang version.
goVersion = unknown
goVersion = consts.Unknown
// compiler is the used golang compiler.
compiler = unknown
compiler = consts.Unknown
// platform is the used os/arch identifier.
platform = unknown
platform = consts.Unknown
once sync.Once
info = Info{}
@@ -84,7 +83,7 @@ func getBuildInfo() *debug.BuildInfo {
func getGitVersion(bi *debug.BuildInfo) string {
if bi == nil {
return unknown
return consts.Unknown
}
// TODO: remove this when the issue https://github.com/golang/go/issues/29228 is fixed
@@ -107,28 +106,28 @@ func getDirty(bi *debug.BuildInfo) string {
if modified == "false" {
return "clean"
}
return unknown
return consts.Unknown
}
func getBuildDate(bi *debug.BuildInfo) string {
buildTime := getKey(bi, "vcs.time")
t, err := time.Parse("2006-01-02T15:04:05Z", buildTime)
if err != nil {
return unknown
return consts.Unknown
}
return t.Format("2006-01-02T15:04:05")
}
func getKey(bi *debug.BuildInfo, key string) string {
if bi == nil {
return unknown
return consts.Unknown
}
for _, iter := range bi.Settings {
if iter.Key == key {
return iter.Value
}
}
return unknown
return consts.Unknown
}
// GetVersionInfo represents known information on how this binary was built.
@@ -136,27 +135,27 @@ func GetVersionInfo() Info {
once.Do(func() {
buildInfo := getBuildInfo()
gitVersion = getGitVersion(buildInfo)
if gitCommit == unknown {
if gitCommit == consts.Unknown {
gitCommit = getCommit(buildInfo)
}
if gitTreeState == unknown {
if gitTreeState == consts.Unknown {
gitTreeState = getDirty(buildInfo)
}
if buildDate == unknown {
if buildDate == consts.Unknown {
buildDate = getBuildDate(buildInfo)
}
if goVersion == unknown {
if goVersion == consts.Unknown {
goVersion = runtime.Version()
}
if compiler == unknown {
if compiler == consts.Unknown {
compiler = runtime.Compiler
}
if platform == unknown {
if platform == consts.Unknown {
platform = fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)
}

View File

@@ -0,0 +1,121 @@
package v1alpha1
import (
"fmt"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)
// converts v1alpha1.Files -> v1.Files
func ConvertFiles(in *v1alpha1.Files, out *v1.Files) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Files = make([]v1.File, len(in.Spec.Files))
for i := range in.Spec.Files {
out.Spec.Files[i].Name = in.Spec.Files[i].Name
out.Spec.Files[i].Path = in.Spec.Files[i].Path
}
return nil
}
// converts v1alpha1.Images -> v1.Images
func ConvertImages(in *v1alpha1.Images, out *v1.Images) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Images = make([]v1.Image, len(in.Spec.Images))
for i := range in.Spec.Images {
out.Spec.Images[i].Name = in.Spec.Images[i].Name
out.Spec.Images[i].Platform = in.Spec.Images[i].Platform
out.Spec.Images[i].Key = in.Spec.Images[i].Key
}
return nil
}
// converts v1alpha1.Charts -> v1.Charts
func ConvertCharts(in *v1alpha1.Charts, out *v1.Charts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Charts = make([]v1.Chart, len(in.Spec.Charts))
for i := range in.Spec.Charts {
out.Spec.Charts[i].Name = in.Spec.Charts[i].Name
out.Spec.Charts[i].RepoURL = in.Spec.Charts[i].RepoURL
out.Spec.Charts[i].Version = in.Spec.Charts[i].Version
}
return nil
}
// converts v1alpha1.ThickCharts -> v1.ThickCharts
func ConvertThickCharts(in *v1alpha1.ThickCharts, out *v1.ThickCharts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Charts = make([]v1.ThickChart, len(in.Spec.Charts))
for i := range in.Spec.Charts {
out.Spec.Charts[i].Chart.Name = in.Spec.Charts[i].Chart.Name
out.Spec.Charts[i].Chart.RepoURL = in.Spec.Charts[i].Chart.RepoURL
out.Spec.Charts[i].Chart.Version = in.Spec.Charts[i].Chart.Version
}
return nil
}
// converts v1alpha1.ImageTxts -> v1.ImageTxts
func ConvertImageTxts(in *v1alpha1.ImageTxts, out *v1.ImageTxts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.ImageTxts = make([]v1.ImageTxt, len(in.Spec.ImageTxts))
for i := range in.Spec.ImageTxts {
out.Spec.ImageTxts[i].Ref = in.Spec.ImageTxts[i].Ref
out.Spec.ImageTxts[i].Sources.Include = append(
out.Spec.ImageTxts[i].Sources.Include,
in.Spec.ImageTxts[i].Sources.Include...,
)
out.Spec.ImageTxts[i].Sources.Exclude = append(
out.Spec.ImageTxts[i].Sources.Exclude,
in.Spec.ImageTxts[i].Sources.Exclude...,
)
}
return nil
}
// convert v1alpha1 object to v1 object
func ConvertObject(in interface{}) (interface{}, error) {
switch src := in.(type) {
case *v1alpha1.Files:
dst := &v1.Files{}
if err := ConvertFiles(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.Images:
dst := &v1.Images{}
if err := ConvertImages(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.Charts:
dst := &v1.Charts{}
if err := ConvertCharts(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.ThickCharts:
dst := &v1.ThickCharts{}
if err := ConvertThickCharts(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.ImageTxts:
dst := &v1.ImageTxts{}
if err := ConvertImageTxts(src, dst); err != nil {
return nil, err
}
return dst, nil
}
return nil, fmt.Errorf("unsupported object type [%T]", in)
}
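A hedged usage sketch for the converter above, written as if it lived in the same package (the conversion package's import path is not shown in this diff, and the v1alpha1.FileSpec/File fields are assumed to mirror the v1 types shown later); the file entry values are illustrative only.

func exampleConvert() (*v1.Files, error) {
	in := &v1alpha1.Files{
		Spec: v1alpha1.FileSpec{
			Files: []v1alpha1.File{{Path: "https://example.com/file.txt", Name: "file.txt"}},
		},
	}
	out, err := ConvertObject(in)
	if err != nil {
		return nil, err
	}
	// the v1alpha1 input comes back as the equivalent v1 type
	return out.(*v1.Files), nil
}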

View File

@@ -0,0 +1,46 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Charts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ChartSpec `json:"spec,omitempty"`
}
type ChartSpec struct {
Charts []Chart `json:"charts,omitempty"`
}
type Chart struct {
Name string `json:"name,omitempty"`
RepoURL string `json:"repoURL,omitempty"`
Version string `json:"version,omitempty"`
Rewrite string `json:"rewrite,omitempty"`
AddImages bool `json:"add-images,omitempty"`
AddDependencies bool `json:"add-dependencies,omitempty"`
}
type ThickCharts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ThickChartSpec `json:"spec,omitempty"`
}
type ThickChartSpec struct {
Charts []ThickChart `json:"charts,omitempty"`
}
type ThickChart struct {
Chart `json:",inline,omitempty"`
ExtraImages []ChartImage `json:"extraImages,omitempty"`
}
type ChartImage struct {
Reference string `json:"ref"`
}

View File

@@ -0,0 +1,17 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Driver struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec DriverSpec `json:"spec"`
}
type DriverSpec struct {
Type string `json:"type"`
Version string `json:"version"`
}

View File

@@ -0,0 +1,25 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Files struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec FileSpec `json:"spec,omitempty"`
}
type FileSpec struct {
Files []File `json:"files,omitempty"`
}
type File struct {
// Path is the path to the file contents, can be a local or remote path
Path string `json:"path"`
// Name is an optional field specifying the name of the file when specified,
// it will override any dynamic name discovery from Path
Name string `json:"name,omitempty"`
}

View File

@@ -0,0 +1,12 @@
package v1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"hauler.dev/go/hauler/pkg/consts"
)
var (
ContentGroupVersion = schema.GroupVersion{Group: consts.ContentGroup, Version: "v1"}
CollectionGroupVersion = schema.GroupVersion{Group: consts.CollectionGroup, Version: "v1"}
)

View File

@@ -0,0 +1,41 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Images struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ImageSpec `json:"spec,omitempty"`
}
type ImageSpec struct {
Images []Image `json:"images,omitempty"`
}
type Image struct {
// Name is the full location for the image, can be referenced by tags or digests
Name string `json:"name"`
// Path is the path to the cosign public key used for verifying image signatures
//Key string `json:"key,omitempty"`
Key string `json:"key"`
// Path is the path to the cosign public key used for verifying image signatures
//Tlog string `json:"use-tlog-verify,omitempty"`
Tlog bool `json:"use-tlog-verify"`
// cosign keyless validation options
CertIdentity string `json:"certificate-identity"`
CertIdentityRegexp string `json:"certificate-identity-regexp"`
CertOidcIssuer string `json:"certificate-oidc-issuer"`
CertOidcIssuerRegexp string `json:"certificate-oidc-issuer-regexp"`
CertGithubWorkflowRepository string `json:"certificate-github-workflow-repository"`
// Platform of the image to be pulled. If not specified, all platforms will be pulled.
//Platform string `json:"key,omitempty"`
Platform string `json:"platform"`
Rewrite string `json:"rewrite"`
}

View File

@@ -0,0 +1,26 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type ImageTxts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ImageTxtsSpec `json:"spec,omitempty"`
}
type ImageTxtsSpec struct {
ImageTxts []ImageTxt `json:"imageTxts,omitempty"`
}
type ImageTxt struct {
Ref string `json:"ref,omitempty"`
Sources ImageTxtSources `json:"sources,omitempty"`
}
type ImageTxtSources struct {
Include []string `json:"include,omitempty"`
Exclude []string `json:"exclude,omitempty"`
}

View File

@@ -4,11 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ChartsContentKind = "Charts"
ChartsCollectionKind = "ThickCharts"
)
type Charts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -4,10 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
DriverContentKind = "Driver"
)
type Driver struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -4,8 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const FilesContentKind = "Files"
type Files struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -2,17 +2,11 @@ package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
Version = "v1alpha1"
ContentGroup = "content.hauler.cattle.io"
CollectionGroup = "collection.hauler.cattle.io"
"hauler.dev/go/hauler/pkg/consts"
)
var (
ContentGroupVersion = schema.GroupVersion{Group: ContentGroup, Version: Version}
// SchemeBuilder = &scheme.Builder{GroupVersion: ContentGroupVersion}
CollectionGroupVersion = schema.GroupVersion{Group: CollectionGroup, Version: Version}
ContentGroupVersion = schema.GroupVersion{Group: consts.ContentGroup, Version: "v1alpha1"}
CollectionGroupVersion = schema.GroupVersion{Group: consts.CollectionGroup, Version: "v1alpha1"}
)

View File

@@ -4,8 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const ImagesContentKind = "Images"
type Images struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -25,7 +23,19 @@ type Image struct {
//Key string `json:"key,omitempty"`
Key string `json:"key"`
// Path is the path to the cosign public key used for verifying image signatures
//Tlog string `json:"use-tlog-verify,omitempty"`
Tlog bool `json:"use-tlog-verify"`
// cosign keyless validation options
CertIdentity string `json:"certificate-identity"`
CertIdentityRegexp string `json:"certificate-identity-regexp"`
CertOidcIssuer string `json:"certificate-oidc-issuer"`
CertOidcIssuerRegexp string `json:"certificate-oidc-issuer-regexp"`
CertGithubWorkflowRepository string `json:"certificate-github-workflow-repository"`
// Platform of the image to be pulled. If not specified, all platforms will be pulled.
//Platform string `json:"key,omitempty"`
Platform string `json:"platform"`
Rewrite string `json:"rewrite"`
}

View File

@@ -4,10 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ImageTxtsContentKind = "ImageTxts"
)
type ImageTxts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -1,19 +0,0 @@
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const K3sCollectionKind = "K3s"
type K3s struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec K3sSpec `json:"spec,omitempty"`
}
type K3sSpec struct {
Version string `json:"version"`
Arch string `json:"arch"`
}

pkg/archives/archiver.go Normal file (104 lines)
View File

@@ -0,0 +1,104 @@
package archives
import (
"context"
"fmt"
"os"
"path/filepath"
"github.com/mholt/archives"
"hauler.dev/go/hauler/pkg/log"
)
// maps to handle compression types
var CompressionMap = map[string]archives.Compression{
"gz": archives.Gz{},
"bz2": archives.Bz2{},
"xz": archives.Xz{},
"zst": archives.Zstd{},
"lz4": archives.Lz4{},
"br": archives.Brotli{},
}
// maps to handle archival types
var ArchivalMap = map[string]archives.Archival{
"tar": archives.Tar{},
"zip": archives.Zip{},
}
// check if a path exists
func isExist(path string) bool {
_, statErr := os.Stat(path)
return !os.IsNotExist(statErr)
}
// archives the files in a directory
// dir: the directory to Archive
// outfile: the output file
// compression: the compression to use (gzip, bzip2, etc.)
// archival: the archival to use (tar, zip, etc.)
func Archive(ctx context.Context, dir, outfile string, compression archives.Compression, archival archives.Archival) error {
l := log.FromContext(ctx)
l.Debugf("starting the archival process for [%s]", dir)
// remove outfile
l.Debugf("removing existing output file: [%s]", outfile)
if err := os.RemoveAll(outfile); err != nil {
errMsg := fmt.Errorf("failed to remove existing output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
if !isExist(dir) {
errMsg := fmt.Errorf("directory [%s] does not exist, cannot proceed with archival", dir)
l.Debugf(errMsg.Error())
return errMsg
}
// map files on disk to their paths in the archive
l.Debugf("mapping files in directory [%s]", dir)
archiveDirName := filepath.Base(filepath.Clean(dir))
if dir == "." {
archiveDirName = ""
}
files, err := archives.FilesFromDisk(context.Background(), nil, map[string]string{
dir: archiveDirName,
})
if err != nil {
errMsg := fmt.Errorf("error mapping files from directory [%s]: %w", dir, err)
l.Debugf(errMsg.Error())
return errMsg
}
l.Debugf("successfully mapped files for directory [%s]", dir)
// create the output file we'll write to
l.Debugf("creating output file [%s]", outfile)
outf, err := os.Create(outfile)
if err != nil {
errMsg := fmt.Errorf("error creating output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
defer func() {
l.Debugf("closing output file [%s]", outfile)
outf.Close()
}()
// define the archive format
l.Debugf("defining the archive format: [%T]/[%T]", archival, compression)
format := archives.CompressedArchive{
Compression: compression,
Archival: archival,
}
// create the archive
l.Debugf("starting archive for [%s]", outfile)
err = format.Archive(context.Background(), outf, files)
if err != nil {
errMsg := fmt.Errorf("error during archive creation for output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
l.Debugf("archive created successfully [%s]", outfile)
return nil
}
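A usage sketch for Archive, assuming the package import path follows the file location shown above (hauler.dev/go/hauler/pkg/archives); the directory and output names are illustrative. It compresses a local store directory into a zstd-compressed tarball via the exported lookup maps.

package main

import (
	"context"

	"hauler.dev/go/hauler/pkg/archives"
)

func main() {
	ctx := context.Background()
	// tar archival + zstd compression, selected via the exported maps
	err := archives.Archive(ctx, "store", "haul.tar.zst",
		archives.CompressionMap["zst"], archives.ArchivalMap["tar"])
	if err != nil {
		panic(err)
	}
}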

pkg/archives/unarchiver.go Normal file (158 lines)
View File

@@ -0,0 +1,158 @@
package archives
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/mholt/archives"
"hauler.dev/go/hauler/pkg/log"
)
const (
dirPermissions = 0o700 // default directory permissions
filePermissions = 0o600 // default file permissions
)
// ensures the path is safely relative to the target directory
func securePath(basePath, relativePath string) (string, error) {
relativePath = filepath.Clean("/" + relativePath)
relativePath = strings.TrimPrefix(relativePath, string(os.PathSeparator))
dstPath := filepath.Join(basePath, relativePath)
if !strings.HasPrefix(filepath.Clean(dstPath)+string(os.PathSeparator), filepath.Clean(basePath)+string(os.PathSeparator)) {
return "", fmt.Errorf("illegal file path: %s", dstPath)
}
return dstPath, nil
}
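To illustrate the guard above (a hypothetical, test-style sketch in the same package; the paths are made up and assume a Unix path separator): a traversal attempt in an archive entry is normalized away before joining, so the result always stays under the destination directory.

func exampleSecurePath() {
	got, err := securePath("/tmp/extract", "../../etc/passwd")
	if err != nil {
		panic(err)
	}
	// the cleaned entry is joined under the base directory:
	// /tmp/extract/etc/passwd, so it cannot escape /tmp/extract
	fmt.Println(got)
}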
// creates a directory with specified permissions
func createDirWithPermissions(ctx context.Context, path string, mode os.FileMode) error {
l := log.FromContext(ctx)
l.Debugf("creating directory [%s]", path)
if err := os.MkdirAll(path, mode); err != nil {
return fmt.Errorf("failed to mkdir: %w", err)
}
return nil
}
// sets permissions to a file or directory
func setPermissions(path string, mode os.FileMode) error {
if err := os.Chmod(path, mode); err != nil {
return fmt.Errorf("failed to chmod: %w", err)
}
return nil
}
// handles the extraction of a file from the archive.
func handleFile(ctx context.Context, f archives.FileInfo, dst string) error {
l := log.FromContext(ctx)
l.Debugf("handling file [%s]", f.NameInArchive)
// validate and construct the destination path
dstPath, pathErr := securePath(dst, f.NameInArchive)
if pathErr != nil {
return pathErr
}
// ensure the parent directory exists
parentDir := filepath.Dir(dstPath)
if dirErr := createDirWithPermissions(ctx, parentDir, dirPermissions); dirErr != nil {
return dirErr
}
// handle directories
if f.IsDir() {
// create the directory with permissions from the archive
if dirErr := createDirWithPermissions(ctx, dstPath, f.Mode()); dirErr != nil {
return fmt.Errorf("failed to create directory: %w", dirErr)
}
l.Debugf("successfully created directory [%s]", dstPath)
return nil
}
// ignore symlinks (or hardlinks)
if f.LinkTarget != "" {
l.Debugf("skipping symlink [%s] to [%s]", dstPath, f.LinkTarget)
return nil
}
// check and handle parent directory permissions
originalMode, statErr := os.Stat(parentDir)
if statErr != nil {
return fmt.Errorf("failed to stat parent directory: %w", statErr)
}
// if parent directory is read only, temporarily make it writable
if originalMode.Mode().Perm()&0o200 == 0 {
l.Debugf("parent directory is read only... temporarily making it writable [%s]", parentDir)
if chmodErr := os.Chmod(parentDir, originalMode.Mode()|0o200); chmodErr != nil {
return fmt.Errorf("failed to chmod parent directory: %w", chmodErr)
}
defer func() {
// restore the original permissions after writing
if chmodErr := os.Chmod(parentDir, originalMode.Mode()); chmodErr != nil {
l.Debugf("failed to restore original permissions for [%s]: %v", parentDir, chmodErr)
}
}()
}
// handle regular files
reader, openErr := f.Open()
if openErr != nil {
return fmt.Errorf("failed to open file: %w", openErr)
}
defer reader.Close()
dstFile, createErr := os.OpenFile(dstPath, os.O_CREATE|os.O_WRONLY, f.Mode())
if createErr != nil {
return fmt.Errorf("failed to create file: %w", createErr)
}
defer dstFile.Close()
if _, copyErr := io.Copy(dstFile, reader); copyErr != nil {
return fmt.Errorf("failed to copy: %w", copyErr)
}
l.Debugf("successfully extracted file [%s]", dstPath)
return nil
}
// unarchives a tarball to a directory; symlinks and hardlinks are ignored
func Unarchive(ctx context.Context, tarball, dst string) error {
l := log.FromContext(ctx)
l.Debugf("unarchiving temporary archive [%s] to temporary store [%s]", tarball, dst)
archiveFile, openErr := os.Open(tarball)
if openErr != nil {
return fmt.Errorf("failed to open tarball %s: %w", tarball, openErr)
}
defer archiveFile.Close()
format, input, identifyErr := archives.Identify(context.Background(), tarball, archiveFile)
if identifyErr != nil {
return fmt.Errorf("failed to identify format: %w", identifyErr)
}
extractor, ok := format.(archives.Extractor)
if !ok {
return fmt.Errorf("unsupported format for extraction")
}
if dirErr := createDirWithPermissions(ctx, dst, dirPermissions); dirErr != nil {
return fmt.Errorf("failed to create destination directory: %w", dirErr)
}
handler := func(ctx context.Context, f archives.FileInfo) error {
return handleFile(ctx, f, dst)
}
if extractErr := extractor.Extract(context.Background(), input, handler); extractErr != nil {
return fmt.Errorf("failed to extract: %w", extractErr)
}
l.Infof("unarchiving completed successfully")
return nil
}
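And the matching extraction side, as a hedged usage sketch under the same import-path assumption; the archive and destination names are illustrative. Symlinks and hardlinks inside the haul are skipped, as noted above.

package main

import (
	"context"

	"hauler.dev/go/hauler/pkg/archives"
)

func main() {
	ctx := context.Background()
	// the format is auto-detected inside Unarchive via archives.Identify
	if err := archives.Unarchive(ctx, "haul.tar.zst", "store-extract"); err != nil {
		panic(err)
	}
}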

View File

@@ -8,8 +8,8 @@ import (
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/getter"
)
// interface guard

View File

@@ -14,8 +14,8 @@ import (
"github.com/spf13/afero"
"hauler.dev/go/hauler/pkg/artifacts/file"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/getter"
)
var (

View File

@@ -2,7 +2,7 @@ package file
import (
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/getter"
)
type Option func(*File)

View File

@@ -3,7 +3,7 @@ package chart
import (
"helm.sh/helm/v3/pkg/action"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/content/chart"
@@ -15,13 +15,13 @@ var _ artifacts.OCICollection = (*tchart)(nil)
// tchart is a thick chart that includes all the dependent images as well as the chart itself
type tchart struct {
chart *chart.Chart
config v1alpha1.ThickChart
config v1.ThickChart
computed bool
contents map[string]artifacts.OCI
}
func NewThickChart(cfg v1alpha1.ThickChart, opts *action.ChartPathOptions) (artifacts.OCICollection, error) {
func NewThickChart(cfg v1.ThickChart, opts *action.ChartPathOptions) (artifacts.OCICollection, error) {
o, err := chart.NewChart(cfg.Chart.Name, opts)
if err != nil {
return nil, err

View File

@@ -15,7 +15,7 @@ import (
"k8s.io/apimachinery/pkg/util/yaml"
"k8s.io/client-go/util/jsonpath"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
)
var defaultKnownImagePaths = []string{
@@ -29,13 +29,13 @@ var defaultKnownImagePaths = []string{
}
// ImagesInChart will render a chart and identify all dependent images from it
func ImagesInChart(c *helmchart.Chart) (v1alpha1.Images, error) {
func ImagesInChart(c *helmchart.Chart) (v1.Images, error) {
docs, err := template(c)
if err != nil {
return v1alpha1.Images{}, err
return v1.Images{}, err
}
var images []v1alpha1.Image
var images []v1.Image
reader := yaml.NewYAMLReader(bufio.NewReader(strings.NewReader(docs)))
for {
raw, err := reader.Read()
@@ -43,17 +43,17 @@ func ImagesInChart(c *helmchart.Chart) (v1alpha1.Images, error) {
break
}
if err != nil {
return v1alpha1.Images{}, err
return v1.Images{}, err
}
found := find(raw, defaultKnownImagePaths...)
for _, f := range found {
images = append(images, v1alpha1.Image{Name: f})
images = append(images, v1.Image{Name: f})
}
}
ims := v1alpha1.Images{
Spec: v1alpha1.ImageSpec{
ims := v1.Images{
Spec: v1.ImageSpec{
Images: images,
},
}

View File

@@ -12,8 +12,8 @@ import (
"github.com/google/go-containerregistry/pkg/name"
artifact "hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
)

View File

@@ -1,175 +0,0 @@
package k3s
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/file"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/reference"
)
var _ artifacts.OCICollection = (*k3s)(nil)
const (
releaseUrl = "https://github.com/k3s-io/k3s/releases/download"
channelUrl = "https://update.k3s.io/v1-release/channels"
bootstrapUrl = "https://get.k3s.io"
)
var (
ErrImagesNotFound = errors.New("k3s dependent images not found")
ErrFetchingImages = errors.New("failed to fetch k3s dependent images")
ErrExecutableNotfound = errors.New("k3s executable not found")
ErrChannelNotFound = errors.New("desired k3s channel not found")
)
type k3s struct {
version string
arch string
computed bool
contents map[string]artifacts.OCI
channels map[string]string
client *getter.Client
}
func NewK3s(version string) (artifacts.OCICollection, error) {
return &k3s{
version: version,
contents: make(map[string]artifacts.OCI),
}, nil
}
func (k *k3s) Contents() (map[string]artifacts.OCI, error) {
if err := k.compute(); err != nil {
return nil, err
}
return k.contents, nil
}
func (k *k3s) compute() error {
if k.computed {
return nil
}
if err := k.fetchChannels(); err == nil {
if version, ok := k.channels[k.version]; ok {
k.version = version
}
}
if err := k.images(); err != nil {
return err
}
if err := k.executable(); err != nil {
return err
}
if err := k.bootstrap(); err != nil {
return err
}
k.computed = true
return nil
}
func (k *k3s) executable() error {
n := "k3s"
if k.arch != "" && k.arch != "amd64" {
n = fmt.Sprintf("name-%s", k.arch)
}
fref := k.releaseUrl(n)
resp, err := http.Head(fref)
if resp.StatusCode != http.StatusOK || err != nil {
return ErrExecutableNotfound
}
f := file.NewFile(fref)
ref := fmt.Sprintf("%s/k3s:%s", reference.DefaultNamespace, k.dnsCompliantVersion())
k.contents[ref] = f
return nil
}
func (k *k3s) bootstrap() error {
c := getter.NewClient(getter.ClientOptions{NameOverride: "k3s-init.sh"})
f := file.NewFile(bootstrapUrl, file.WithClient(c))
ref := fmt.Sprintf("%s/k3s-init.sh:%s", reference.DefaultNamespace, reference.DefaultTag)
k.contents[ref] = f
return nil
}
func (k *k3s) images() error {
resp, err := http.Get(k.releaseUrl("k3s-images.txt"))
if resp.StatusCode != http.StatusOK {
return ErrFetchingImages
} else if err != nil {
return ErrImagesNotFound
}
defer resp.Body.Close()
scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
reference := scanner.Text()
o, err := image.NewImage(reference)
if err != nil {
return err
}
k.contents[reference] = o
}
return nil
}
func (k *k3s) releaseUrl(artifact string) string {
u, _ := url.Parse(releaseUrl)
complete := []string{u.Path}
u.Path = path.Join(append(complete, []string{k.version, artifact}...)...)
return u.String()
}
func (k *k3s) dnsCompliantVersion() string {
return strings.ReplaceAll(k.version, "+", "-")
}
func (k *k3s) fetchChannels() error {
resp, err := http.Get(channelUrl)
if err != nil {
return err
}
var c channel
if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
return err
}
channels := make(map[string]string)
for _, ch := range c.Data {
channels[ch.Name] = ch.Latest
}
k.channels = channels
return nil
}
type channel struct {
Data []channelData `json:"data"`
}
type channelData struct {
ID string `json:"id"`
Name string `json:"name"`
Latest string `json:"latest"`
}

View File

@@ -1,60 +1,102 @@
package consts
const (
// container media types
OCIManifestSchema1 = "application/vnd.oci.image.manifest.v1+json"
DockerManifestSchema2 = "application/vnd.docker.distribution.manifest.v2+json"
DockerManifestListSchema2 = "application/vnd.docker.distribution.manifest.list.v2+json"
OCIImageIndexSchema = "application/vnd.oci.image.index.v1+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
DockerLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
DockerForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
DockerUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
OCILayer = "application/vnd.oci.image.layer.v1.tar+gzip"
OCIArtifact = "application/vnd.oci.empty.v1+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
DockerLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
DockerForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
DockerUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
OCILayer = "application/vnd.oci.image.layer.v1.tar+gzip"
OCIArtifact = "application/vnd.oci.empty.v1+json"
// ChartConfigMediaType is the reserved media type for the Helm chart manifest config
// helm chart media types
ChartConfigMediaType = "application/vnd.cncf.helm.config.v1+json"
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// ChartLayerMediaType is the reserved media type for Helm chart package content
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
// ProvLayerMediaType is the reserved media type for Helm chart provenance files
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// FileLayerMediaType is the reserved media type for File content layers
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
// FileLocalConfigMediaType is the reserved media type for File config
// file media types
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
FileLocalConfigMediaType = "application/vnd.content.hauler.file.local.config.v1+json"
FileDirectoryConfigMediaType = "application/vnd.content.hauler.file.directory.config.v1+json"
FileHttpConfigMediaType = "application/vnd.content.hauler.file.http.config.v1+json"
// MemoryConfigMediaType is the reserved media type for Memory config for a generic set of bytes stored in memory
// memory media types
MemoryConfigMediaType = "application/vnd.content.hauler.memory.config.v1+json"
// WasmArtifactLayerMediaType is the reserved media type for WASM artifact layers
// wasm media types
WasmArtifactLayerMediaType = "application/vnd.wasm.content.layer.v1+wasm"
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
// WasmConfigMediaType is the reserved media type for WASM configs
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
// unknown media types
UnknownManifest = "application/vnd.hauler.cattle.io.unknown.v1+json"
UnknownLayer = "application/vnd.content.hauler.unknown.layer"
Unknown = "unknown"
// vendor prefixes
OCIVendorPrefix = "vnd.oci"
DockerVendorPrefix = "vnd.docker"
HaulerVendorPrefix = "vnd.hauler"
OCIImageIndexFile = "index.json"
KindAnnotationName = "kind"
KindAnnotationImage = "dev.cosignproject.cosign/image"
KindAnnotationIndex = "dev.cosignproject.cosign/imageIndex"
CarbideRegistry = "rgcrprod.azurecr.us"
// annotation keys
ContainerdImageNameKey = "io.containerd.image.name"
KindAnnotationName = "kind"
KindAnnotationImage = "dev.cosignproject.cosign/image"
KindAnnotationIndex = "dev.cosignproject.cosign/imageIndex"
ImageAnnotationKey = "hauler.dev/key"
ImageAnnotationPlatform = "hauler.dev/platform"
ImageAnnotationRegistry = "hauler.dev/registry"
ImageAnnotationTlog = "hauler.dev/use-tlog-verify"
ImageAnnotationRewrite = "hauler.dev/rewrite"
ImageRefKey = "org.opencontainers.image.ref.name"
DefaultStoreName = "store"
// cosign keyless validation options
ImageAnnotationCertIdentity = "hauler.dev/certificate-identity"
ImageAnnotationCertIdentityRegexp = "hauler.dev/certificate-identity-regexp"
ImageAnnotationCertOidcIssuer = "hauler.dev/certificate-oidc-issuer"
ImageAnnotationCertOidcIssuerRegexp = "hauler.dev/certificate-oidc-issuer-regexp"
ImageAnnotationCertGithubWorkflowRepository = "hauler.dev/certificate-github-workflow-repository"
// content kinds
ImagesContentKind = "Images"
ChartsContentKind = "Charts"
FilesContentKind = "Files"
DriverContentKind = "Driver"
ImageTxtsContentKind = "ImageTxts"
ChartsCollectionKind = "ThickCharts"
// content groups
ContentGroup = "content.hauler.cattle.io"
CollectionGroup = "collection.hauler.cattle.io"
// environment variables
HaulerDir = "HAULER_DIR"
HaulerTempDir = "HAULER_TEMP_DIR"
HaulerStoreDir = "HAULER_STORE_DIR"
HaulerIgnoreErrors = "HAULER_IGNORE_ERRORS"
// container files and directories
ImageManifestFile = "manifest.json"
ImageConfigFile = "config.json"
// other constraints
CarbideRegistry = "rgcrprod.azurecr.us"
DefaultNamespace = "hauler"
DefaultTag = "latest"
DefaultStoreName = "store"
DefaultHaulerDirName = ".hauler"
DefaultHaulerTempDirName = "hauler"
DefaultRegistryRootDir = "registry"
DefaultRegistryPort = 5000
DefaultFileserverRootDir = "fileserver"
DefaultFileserverPort = 8080
DefaultFileserverTimeout = 60
DefaultHaulerArchiveName = "haul.tar.zst"
DefaultHaulerManifestName = "hauler-manifest.yaml"
DefaultRetries = 3
RetriesInterval = 5
CustomTimeFormat = "2006-01-02 15:04:05"
)

View File

@@ -33,14 +33,13 @@ var (
settings = cli.New()
)
// Chart implements the OCI interface for Chart API objects. API spec values are
// stored into the Repo, Name, and Version fields.
// chart implements the oci interface for chart api objects... api spec values are stored into the name, repo, and version fields
type Chart struct {
path string
annotations map[string]string
}
// NewChart is a helper method that returns NewLocalChart or NewRemoteChart depending on v1alpha1.Chart contents
// newchart is a helper method that returns newlocalchart or newremotechart depending on chart contents
func NewChart(name string, opts *action.ChartPathOptions) (*Chart, error) {
chartRef := name
actionConfig := new(action.Configuration)
@@ -60,13 +59,31 @@ func NewChart(name string, opts *action.ChartPathOptions) (*Chart, error) {
client.SetRegistryClient(registryClient)
if registry.IsOCI(opts.RepoURL) {
chartRef = opts.RepoURL + "/" + name
} else if isUrl(opts.RepoURL) { // OCI Protocol registers as a valid URL
} else if isUrl(opts.RepoURL) { // oci protocol registers as a valid url
client.ChartPathOptions.RepoURL = opts.RepoURL
} else { // Handles cases like grafana/loki
} else { // handles cases like grafana and loki
chartRef = opts.RepoURL + "/" + name
}
// suppress helm downloader oci logs (stdout/stderr)
oldStdout := os.Stdout
oldStderr := os.Stderr
rOut, wOut, _ := os.Pipe()
rErr, wErr, _ := os.Pipe()
os.Stdout = wOut
os.Stderr = wErr
chartPath, err := client.ChartPathOptions.LocateChart(chartRef, settings)
wOut.Close()
wErr.Close()
os.Stdout = oldStdout
os.Stderr = oldStderr
_, _ = io.Copy(io.Discard, rOut)
_, _ = io.Copy(io.Discard, rErr)
rOut.Close()
rErr.Close()
if err != nil {
return nil, err
}
@@ -151,9 +168,8 @@ func (h *Chart) RawChartData() ([]byte, error) {
return os.ReadFile(h.path)
}
// chartData loads the chart contents into memory and returns a NopCloser for the contents
//
// Normally we avoid loading into memory, but chart sizes are strictly capped at ~1MB
// chartdata loads the chart contents into memory and returns a NopCloser for the contents
// normally we avoid loading into memory, but chart sizes are strictly capped at ~1MB
func (h *Chart) chartData() (gv1.Layer, error) {
info, err := os.Stat(h.path)
if err != nil {
@@ -256,14 +272,14 @@ func newDefaultRegistryClient(plainHTTP bool) (*registry.Client, error) {
opts := []registry.ClientOption{
registry.ClientOptDebug(settings.Debug),
registry.ClientOptEnableCache(true),
registry.ClientOptWriter(os.Stderr),
registry.ClientOptWriter(io.Discard),
registry.ClientOptCredentialsFile(settings.RegistryConfig),
}
if plainHTTP {
opts = append(opts, registry.ClientOptPlainHTTP())
}
// Create a new registry client
// create a new registry client
registryClient, err := registry.NewClient(opts...)
if err != nil {
return nil, err
@@ -272,12 +288,21 @@ func newDefaultRegistryClient(plainHTTP bool) (*registry.Client, error) {
}
func newRegistryClientWithTLS(certFile, keyFile, caFile string, insecureSkipTLSverify bool) (*registry.Client, error) {
// Create a new registry client
registryClient, err := registry.NewRegistryClientWithTLS(os.Stderr, certFile, keyFile, caFile, insecureSkipTLSverify,
settings.RegistryConfig, settings.Debug,
// create a new registry client
registryClient, err := registry.NewRegistryClientWithTLS(
io.Discard,
certFile, keyFile, caFile,
insecureSkipTLSverify,
settings.RegistryConfig,
settings.Debug,
)
if err != nil {
return nil, err
}
return registryClient, nil
}
// path returns the local filesystem path to the chart archive or directory
func (h *Chart) Path() string {
return h.path
}

View File

@@ -14,11 +14,11 @@ import (
)
func TestNewChart(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "hauler")
tempDir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpdir)
defer os.RemoveAll(tempDir)
type args struct {
name string
@@ -53,7 +53,7 @@ func TestNewChart(t *testing.T) {
// {
// name: "should create from a chart directory",
// args: args{
// path: filepath.Join(tmpdir, "podinfo"),
// path: filepath.Join(tempDir, "podinfo"),
// },
// want: want,
// wantErr: false,

View File

@@ -7,17 +7,31 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/yaml"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)
func Load(data []byte) (schema.ObjectKind, error) {
var tm metav1.TypeMeta
if err := yaml.Unmarshal(data, &tm); err != nil {
return nil, err
return nil, fmt.Errorf("failed to parse manifest: %w", err)
}
if tm.GroupVersionKind().GroupVersion() != v1alpha1.ContentGroupVersion && tm.GroupVersionKind().GroupVersion() != v1alpha1.CollectionGroupVersion {
return nil, fmt.Errorf("unrecognized content/collection type: %s", tm.GroupVersionKind().String())
if tm.APIVersion == "" {
return nil, fmt.Errorf("missing required manifest field [apiVersion]")
}
if tm.Kind == "" {
return nil, fmt.Errorf("missing required manifest field [kind]")
}
gv := tm.GroupVersionKind().GroupVersion()
// allow v1 and v1alpha1 content/collection
if gv != v1.ContentGroupVersion &&
gv != v1.CollectionGroupVersion &&
gv != v1alpha1.ContentGroupVersion &&
gv != v1alpha1.CollectionGroupVersion {
return nil, fmt.Errorf("unrecognized content/collection [%s] with [kind=%s]", tm.APIVersion, tm.Kind)
}
return &tm, nil

View File

@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"github.com/google/go-containerregistry/pkg/name"
"io"
"os"
"path/filepath"
@@ -12,6 +11,8 @@ import (
"strings"
"sync"
"github.com/google/go-containerregistry/pkg/name"
ccontent "github.com/containerd/containerd/content"
"github.com/containerd/containerd/remotes"
"github.com/opencontainers/image-spec/specs-go"
@@ -65,7 +66,7 @@ func (o *OCI) AddIndex(desc ocispec.Descriptor) error {
// LoadIndex will load the index from disk
func (o *OCI) LoadIndex() error {
path := o.path(consts.OCIImageIndexFile)
path := o.path(ocispec.ImageIndexFile)
idx, err := os.Open(path)
if err != nil {
if !os.IsNotExist(err) {
@@ -137,7 +138,7 @@ func (o *OCI) SaveIndex() error {
if err != nil {
return err
}
return os.WriteFile(o.path(consts.OCIImageIndexFile), data, 0644)
return os.WriteFile(o.path(ocispec.ImageIndexFile), data, 0644)
}
// Resolve attempts to resolve the reference into a name and descriptor.
@@ -228,7 +229,7 @@ func (o *OCI) Walk(fn func(reference string, desc ocispec.Descriptor) error) err
return true
})
if errst != nil {
return fmt.Errorf(strings.Join(errst, "; "))
return fmt.Errorf("%s", strings.Join(errst, "; "))
}
return nil
}
@@ -258,7 +259,7 @@ func (o *OCI) blobWriterAt(desc ocispec.Descriptor) (*os.File, error) {
}
func (o *OCI) ensureBlob(alg string, hex string) (string, error) {
dir := o.path("blobs", alg)
dir := o.path(ocispec.ImageBlobsDir, alg)
if err := os.MkdirAll(dir, os.ModePerm); err != nil && !os.IsExist(err) {
return "", err
}
@@ -311,3 +312,7 @@ func (p *ociPusher) Push(ctx context.Context, d ocispec.Descriptor) (ccontent.Wr
w := content.NewIoContentWriter(f, content.WithInputHash(d.Digest), content.WithOutputHash(d.Digest))
return w, nil
}
func (o *OCI) RemoveFromIndex(ref string) {
o.nameMap.Delete(ref)
}

View File

@@ -1,55 +1,103 @@
package cosign
import (
"bufio"
"context"
"embed"
"fmt"
"os"
"os/exec"
"os/user"
"path/filepath"
"runtime"
"strings"
"time"
"oras.land/oras-go/pkg/content"
"github.com/sigstore/cosign/v3/cmd/cosign/cli"
"github.com/sigstore/cosign/v3/cmd/cosign/cli/options"
"github.com/sigstore/cosign/v3/cmd/cosign/cli/verify"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
"oras.land/oras-go/pkg/content"
)
const maxRetries = 3
const retryDelay = time.Second * 5
// VerifyFileSignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifySignature(ctx context.Context, s *store.Layout, keyPath string, ref string) error {
// VerifySignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifySignature(ctx context.Context, s *store.Layout, keyPath string, useTlog bool, ref string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
operation := func() error {
cosignBinaryPath, err := getCosignPath()
if err != nil {
return err
v := &verify.VerifyCommand{
KeyRef: keyPath,
IgnoreTlog: true, // Ignore transparency log by default.
NewBundleFormat: true,
}
cmd := exec.Command(cosignBinaryPath, "verify", "--insecure-ignore-tlog", "--key", keyPath, ref)
output, err := cmd.CombinedOutput()
// if the user wants to use the transparency log, set the flag to false
if useTlog {
v.IgnoreTlog = false
}
err := log.CaptureOutput(l, true, func() error {
return v.Exec(ctx, []string{ref})
})
if err != nil {
return fmt.Errorf("error verifying signature: %v, output: %s", err, output)
return err
}
return nil
}
return RetryOperation(ctx, operation)
return RetryOperation(ctx, rso, ro, operation)
}
// VerifyKeylessSignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifyKeylessSignature(ctx context.Context, s *store.Layout, identity string, identityRegexp string, oidcIssuer string, oidcIssuerRegexp string, ghWorkflowRepository string, useTlog bool, ref string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
operation := func() error {
certVerifyOptions := options.CertVerifyOptions{
CertOidcIssuer: oidcIssuer,
CertOidcIssuerRegexp: oidcIssuerRegexp,
CertIdentity: identity,
CertIdentityRegexp: identityRegexp,
CertGithubWorkflowRepository: ghWorkflowRepository,
}
v := &verify.VerifyCommand{
CertVerifyOptions: certVerifyOptions,
IgnoreTlog: false, // Ignore transparency log is set to false by default for keyless signature verification
CertGithubWorkflowRepository: ghWorkflowRepository,
NewBundleFormat: true,
}
// if the user wants to use the transparency log, set the flag to false
if useTlog {
v.IgnoreTlog = false
}
err := log.CaptureOutput(l, true, func() error {
return v.Exec(ctx, []string{ref})
})
if err != nil {
return err
}
return nil
}
return RetryOperation(ctx, rso, ro, operation)
}
// SaveImage saves image and any signatures/attestations to the store.
func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string) error {
func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
operation := func() error {
cosignBinaryPath, err := getCosignPath()
if err != nil {
return err
o := &options.SaveOptions{
Directory: s.Root,
}
// check to see if the image is multi-arch
@@ -57,114 +105,59 @@ func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string
if err != nil {
return err
}
l.Debugf("multi-arch image: %v", isMultiArch)
l.Debugf("multi-arch image [%v]", isMultiArch)
cmd := exec.Command(cosignBinaryPath, "save", ref, "--dir", s.Root)
// Conditionally add platform.
if platform != "" && isMultiArch {
l.Debugf("platform for image [%s]", platform)
cmd.Args = append(cmd.Args, "--platform", platform)
o.Platform = platform
}
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
stderr, err := cmd.StderrPipe()
if err != nil {
return err
}
// start the command after having set up the pipe
if err := cmd.Start(); err != nil {
return err
}
// read command's stdout line by line
output := bufio.NewScanner(stdout)
for output.Scan() {
l.Debugf(output.Text()) // write each line to your log, or anything you need
}
if err := output.Err(); err != nil {
cmd.Wait()
return err
}
// read command's stderr line by line
errors := bufio.NewScanner(stderr)
for errors.Scan() {
l.Warnf(errors.Text()) // write each line to your log, or anything you need
}
if err := errors.Err(); err != nil {
cmd.Wait()
return err
}
// Wait for the command to finish
err = cmd.Wait()
err = cli.SaveCmd(ctx, *o, ref)
if err != nil {
return err
}
return nil
}
return RetryOperation(ctx, operation)
return RetryOperation(ctx, rso, ro, operation)
}
// LoadImage loads store to a remote registry.
func LoadImages(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
func LoadImages(ctx context.Context, s *store.Layout, registry string, only string, ropts content.RegistryOptions, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
cosignBinaryPath, err := getCosignPath()
if err != nil {
return err
o := &options.LoadOptions{
Directory: s.Root,
Registry: options.RegistryOptions{
Name: registry,
},
}
cmd := exec.Command(cosignBinaryPath, "load", "--registry", registry, "--dir", s.Root)
// Conditionally add extra flags.
if len(only) > 0 {
o.LoadOnly = only
}
// Conditionally add extra registry flags.
if ropts.Insecure {
cmd.Args = append(cmd.Args, "--allow-insecure-registry=true")
o.Registry.AllowInsecure = true
}
if ropts.PlainHTTP {
cmd.Args = append(cmd.Args, "--allow-http-registry=true")
o.Registry.AllowHTTPRegistry = true
}
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
stderr, err := cmd.StderrPipe()
if err != nil {
return err
}
// start the command after having set up the pipe
if err := cmd.Start(); err != nil {
return err
if ropts.Username != "" {
o.Registry.AuthConfig.Username = ropts.Username
o.Registry.AuthConfig.Password = ropts.Password
}
// read command's stdout line by line
output := bufio.NewScanner(stdout)
for output.Scan() {
l.Infof(output.Text()) // write each line to your log, or anything you need
}
if err := output.Err(); err != nil {
cmd.Wait()
return err
}
// read command's stderr line by line
errors := bufio.NewScanner(stderr)
for errors.Scan() {
l.Errorf(errors.Text()) // write each line to your log, or anything you need
}
if err := errors.Err(); err != nil {
cmd.Wait()
return err
}
// Wait for the command to finish
err = cmd.Wait()
// execute the cosign load and capture the output in our logger
err := log.CaptureOutput(l, false, func() error {
return cli.LoadCmd(ctx, *o, "")
})
if err != nil {
return err
}
@@ -172,109 +165,49 @@ func LoadImages(ctx context.Context, s *store.Layout, registry string, ropts con
return nil
}
// RegistryLogin - performs cosign login
func RegistryLogin(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
log := log.FromContext(ctx)
cosignBinaryPath, err := getCosignPath()
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "login", registry, "-u", ropts.Username, "-p", ropts.Password)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error logging into registry: %v, output: %s", err, output)
}
log.Infof(strings.Trim(string(output), "\n"))
return nil
}
func RetryOperation(ctx context.Context, operation func() error) error {
func RetryOperation(ctx context.Context, rso *flags.StoreRootOpts, ro *flags.CliRootOpts, operation func() error) error {
l := log.FromContext(ctx)
for attempt := 1; attempt <= maxRetries; attempt++ {
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
// Validate retries and fall back to a default
if rso.Retries <= 0 {
rso.Retries = consts.DefaultRetries
}
for attempt := 1; attempt <= rso.Retries; attempt++ {
err := operation()
if err == nil {
// If the operation succeeds, return nil (no error).
// If the operation succeeds, return nil (no error)
return nil
}
// Log the error for the current attempt.
l.Warnf("error (attempt %d/%d): %v", attempt, maxRetries, err)
if ro.IgnoreErrors {
if strings.HasPrefix(err.Error(), "function execution failed: no matching signatures: rekor client not provided for online verification") {
l.Warnf("warning (attempt %d/%d)... failed tlog verification", attempt, rso.Retries)
} else {
l.Warnf("warning (attempt %d/%d)... %v", attempt, rso.Retries, err)
}
} else {
if strings.HasPrefix(err.Error(), "function execution failed: no matching signatures: rekor client not provided for online verification") {
l.Errorf("error (attempt %d/%d)... failed tlog verification", attempt, rso.Retries)
} else {
l.Errorf("error (attempt %d/%d)... %v", attempt, rso.Retries, err)
}
}
// If this is not the last attempt, wait before retrying.
if attempt < maxRetries {
time.Sleep(retryDelay)
// If this is not the last attempt, wait before retrying
if attempt < rso.Retries {
time.Sleep(time.Second * consts.RetriesInterval)
}
}
// If all attempts fail, return an error.
return fmt.Errorf("operation failed after %d attempts", maxRetries)
}
func EnsureBinaryExists(ctx context.Context, bin embed.FS) error {
// Set up a path for the binary to be copied.
binaryPath, err := getCosignPath()
if err != nil {
return fmt.Errorf("error: %v", err)
}
// Determine the architecture so that we pull the correct embedded binary.
arch := runtime.GOARCH
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = fmt.Sprintf("cosign-%s-%s.exe", rOS, arch)
} else {
binaryName = fmt.Sprintf("cosign-%s-%s", rOS, arch)
}
// retrieve the embedded binary
f, err := bin.ReadFile(fmt.Sprintf("binaries/%s", binaryName))
if err != nil {
return fmt.Errorf("error: %v", err)
}
// write the binary to the filesystem
err = os.WriteFile(binaryPath, f, 0755)
if err != nil {
return fmt.Errorf("error: %v", err)
}
return nil
}
// getCosignPath returns the binary path
func getCosignPath() (string, error) {
// Get the current user's information
currentUser, err := user.Current()
if err != nil {
return "", fmt.Errorf("error: %v", err)
}
// Get the user's home directory
homeDir := currentUser.HomeDir
// Construct the path to the .hauler directory
haulerDir := filepath.Join(homeDir, ".hauler")
// Create the .hauler directory if it doesn't exist
if _, err := os.Stat(haulerDir); os.IsNotExist(err) {
// .hauler directory does not exist, create it
if err := os.MkdirAll(haulerDir, 0755); err != nil {
return "", fmt.Errorf("error creating .hauler directory: %v", err)
}
}
// Determine the binary name.
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = "cosign.exe"
}
// construct path to binary
binaryPath := filepath.Join(haulerDir, binaryName)
return binaryPath, nil
// If all attempts fail, return an error
return fmt.Errorf("operation unsuccessful after %d attempts", rso.Retries)
}
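
As a usage illustration (not part of the diff), here is a minimal sketch of a caller wrapping a transient operation with the reworked RetryOperation. It is written as if it sat in the same package, so the flags and consts types resolve as in the diff above; pushArtifact is a hypothetical unit of work.

// Sketch only: RetryOperation honors ro.IgnoreErrors / the consts.HaulerIgnoreErrors
// environment variable and falls back to consts.DefaultRetries when rso.Retries is unset.
func pushWithRetries(ctx context.Context, rso *flags.StoreRootOpts, ro *flags.CliRootOpts, pushArtifact func() error) error {
    operation := func() error {
        // hypothetical work that may fail transiently, e.g. on network errors
        return pushArtifact()
    }
    return RetryOperation(ctx, rso, ro, operation)
}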


@@ -6,7 +6,7 @@ import (
"path/filepath"
"testing"
"hauler.dev/go/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/getter"
)
func TestClient_Detect(t *testing.T) {


@@ -7,6 +7,8 @@ import (
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"hauler.dev/go/hauler/pkg/consts"
)
// Logger provides an interface for all used logger features regardless of logging backend
@@ -30,9 +32,8 @@ type Fields map[string]string
// NewLogger returns a new Logger
func NewLogger(out io.Writer) Logger {
customTimeFormat := "2006-01-02 15:04:05"
zerolog.TimeFieldFormat = customTimeFormat
output := zerolog.ConsoleWriter{Out: os.Stdout, TimeFormat: customTimeFormat}
zerolog.TimeFieldFormat = consts.CustomTimeFormat
output := zerolog.ConsoleWriter{Out: os.Stdout, TimeFormat: consts.CustomTimeFormat}
l := log.Output(output)
return &logger{
zl: l.With().Timestamp().Logger(),

pkg/log/logcapture.go (new file, 99 lines)

@@ -0,0 +1,99 @@
package log
import (
"bufio"
"fmt"
"io"
"os"
"strings"
"sync"
)
// CustomWriter forwards log messages to the application's logger
type CustomWriter struct {
logger Logger
level string
}
func (cw *CustomWriter) Write(p []byte) (n int, err error) {
message := strings.TrimSpace(string(p))
if message != "" {
if cw.level == "error" {
cw.logger.Errorf("%s", message)
} else if cw.level == "info" {
cw.logger.Infof("%s", message)
} else {
cw.logger.Debugf("%s", message)
}
}
return len(p), nil
}
// logStream reads lines from a reader and writes them to the provided writer
func logStream(reader io.Reader, customWriter *CustomWriter, wg *sync.WaitGroup) {
defer wg.Done()
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
customWriter.Write([]byte(scanner.Text()))
}
if err := scanner.Err(); err != nil && err != io.EOF {
customWriter.logger.Errorf("error reading log stream: %v", err)
}
}
// CaptureOutput redirects stdout and stderr to custom loggers and executes the provided function
func CaptureOutput(logger Logger, debug bool, fn func() error) error {
// Create pipes for capturing stdout and stderr
stdoutReader, stdoutWriter, err := os.Pipe()
if err != nil {
return fmt.Errorf("failed to create stdout pipe: %w", err)
}
stderrReader, stderrWriter, err := os.Pipe()
if err != nil {
return fmt.Errorf("failed to create stderr pipe: %w", err)
}
// Save original stdout and stderr
origStdout := os.Stdout
origStderr := os.Stderr
// Redirect stdout and stderr
os.Stdout = stdoutWriter
os.Stderr = stderrWriter
// Use WaitGroup to wait for logging goroutines to finish
var wg sync.WaitGroup
wg.Add(2)
// Start logging goroutines
if !debug {
go logStream(stdoutReader, &CustomWriter{logger: logger, level: "info"}, &wg)
go logStream(stderrReader, &CustomWriter{logger: logger, level: "error"}, &wg)
} else {
go logStream(stdoutReader, &CustomWriter{logger: logger, level: "debug"}, &wg)
go logStream(stderrReader, &CustomWriter{logger: logger, level: "debug"}, &wg)
}
// Run the provided function in a separate goroutine
fnErr := make(chan error, 1)
go func() {
fnErr <- fn()
stdoutWriter.Close() // Close writers to signal EOF to readers
stderrWriter.Close()
}()
// Wait for logging goroutines to finish
wg.Wait()
// Restore original stdout and stderr
os.Stdout = origStdout
os.Stderr = origStderr
// Check for errors from the function
if err := <-fnErr; err != nil {
return fmt.Errorf("function execution failed: %w", err)
}
return nil
}
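
For orientation (not part of the diff), a minimal sketch of wrapping a stdout-printing call with CaptureOutput so its output is re-emitted through the hauler logger rather than going to the raw console; the wrapped fmt.Println is just a stand-in, and the snippet assumes the caller imports context, fmt, and this log package.

// Sketch only: anything the wrapped function writes to stdout/stderr is forwarded
// to the logger (Infof/Errorf, or Debugf for both streams when debug is true).
func captureExample(ctx context.Context) error {
    l := log.FromContext(ctx)
    return log.CaptureOutput(l, false, func() error {
        fmt.Println("this line is routed through l.Infof instead of raw stdout")
        return nil
    })
}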


@@ -8,11 +8,8 @@ import (
"strings"
gname "github.com/google/go-containerregistry/pkg/name"
)
const (
DefaultNamespace = "hauler"
DefaultTag = "latest"
"hauler.dev/go/hauler/pkg/consts"
)
type Reference interface {
@@ -36,14 +33,14 @@ func NewTagged(n string, tag string) (gname.Reference, error) {
// Parse will parse a reference and return a name.Reference namespaced with DefaultNamespace if necessary
func Parse(ref string) (gname.Reference, error) {
r, err := gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(DefaultTag))
r, err := gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(consts.DefaultTag))
if err != nil {
return nil, err
}
if !strings.ContainsRune(r.String(), '/') {
ref = DefaultNamespace + "/" + r.String()
return gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(DefaultTag))
ref = consts.DefaultNamespace + "/" + r.String()
return gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(consts.DefaultTag))
}
return r, nil


@@ -3,6 +3,7 @@ package store
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
@@ -151,17 +152,17 @@ func (l *Layout) AddOCICollection(ctx context.Context, collection artifacts.OCIC
// This can be a highly destructive operation if the store's directory happens to be inline with other non-store contents
// To reduce the blast radius and likelihood of deleting things we don't own, Flush explicitly deletes oci-layout content only
func (l *Layout) Flush(ctx context.Context) error {
blobs := filepath.Join(l.Root, "blobs")
blobs := filepath.Join(l.Root, ocispec.ImageBlobsDir)
if err := os.RemoveAll(blobs); err != nil {
return err
}
index := filepath.Join(l.Root, "index.json")
index := filepath.Join(l.Root, ocispec.ImageIndexFile)
if err := os.RemoveAll(index); err != nil {
return err
}
layout := filepath.Join(l.Root, "oci-layout")
layout := filepath.Join(l.Root, ocispec.ImageLayoutFile)
if err := os.RemoveAll(layout); err != nil {
return err
}
@@ -240,7 +241,7 @@ func (l *Layout) writeLayer(layer v1.Layer) error {
return err
}
dir := filepath.Join(l.Root, "blobs", d.Algorithm)
dir := filepath.Join(l.Root, ocispec.ImageBlobsDir, d.Algorithm)
if err := os.MkdirAll(dir, os.ModePerm); err != nil && !os.IsExist(err) {
return err
}
@@ -260,3 +261,125 @@ func (l *Layout) writeLayer(layer v1.Layer) error {
_, err = io.Copy(w, r)
return err
}
// Remove artifact reference from the store
func (l *Layout) RemoveArtifact(ctx context.Context, reference string, desc ocispec.Descriptor) error {
if err := l.OCI.LoadIndex(); err != nil {
return err
}
l.OCI.RemoveFromIndex(reference)
return l.OCI.SaveIndex()
}
func (l *Layout) CleanUp(ctx context.Context) (int, int64, error) {
referencedDigests := make(map[string]bool)
if err := l.OCI.LoadIndex(); err != nil {
return 0, 0, fmt.Errorf("failed to load index: %w", err)
}
var processManifest func(desc ocispec.Descriptor) error
processManifest = func(desc ocispec.Descriptor) error {
if desc.Digest.Validate() != nil {
return nil
}
// mark digest as referenced by existing artifact
referencedDigests[desc.Digest.Hex()] = true
// fetch and parse manifests for layer digests
rc, err := l.OCI.Fetch(ctx, desc)
if err != nil {
return nil // skip if can't be read
}
defer rc.Close()
var manifest struct {
Config struct {
Digest digest.Digest `json:"digest"`
} `json:"config"`
Layers []struct {
digest.Digest `json:"digest"`
} `json:"layers"`
Manifests []struct {
Digest digest.Digest `json:"digest"`
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
} `json:"manifests"`
}
if err := json.NewDecoder(rc).Decode(&manifest); err != nil {
return nil
}
// handle image manifest
if manifest.Config.Digest.Validate() == nil {
referencedDigests[manifest.Config.Digest.Hex()] = true
}
for _, layer := range manifest.Layers {
if layer.Digest.Validate() == nil {
referencedDigests[layer.Digest.Hex()] = true
}
}
// handle manifest list
for _, m := range manifest.Manifests {
if m.Digest.Validate() == nil {
// mark manifest
referencedDigests[m.Digest.Hex()] = true
// process manifest for layers
manifestDesc := ocispec.Descriptor{
MediaType: m.MediaType,
Digest: m.Digest,
Size: m.Size,
}
processManifest(manifestDesc) // calls helper func on manifests in list
}
}
return nil
}
// walk through artifacts
if err := l.OCI.Walk(func(reference string, desc ocispec.Descriptor) error {
return processManifest(desc)
}); err != nil {
return 0, 0, fmt.Errorf("failed to walk artifacts: %w", err)
}
// read all entries
blobsPath := filepath.Join(l.Root, "blobs", "sha256")
entries, err := os.ReadDir(blobsPath)
if err != nil {
return 0, 0, fmt.Errorf("failed to read blobs directory: %w", err)
}
// track count and size of deletions
deletedCount := 0
var deletedSize int64
// scan blobs
for _, entry := range entries {
if entry.IsDir() {
continue
}
digest := entry.Name()
if !referencedDigests[digest] {
blobPath := filepath.Join(blobsPath, digest)
if info, err := entry.Info(); err == nil {
deletedSize += info.Size()
}
if err := os.Remove(blobPath); err != nil {
return deletedCount, deletedSize, fmt.Errorf("failed to remove blob %s: %w", digest, err)
}
deletedCount++
}
}
return deletedCount, deletedSize, nil
}
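
For orientation (not part of the diff), a minimal sketch of how a caller with an already-opened *store.Layout might invoke the new CleanUp method and report what was reclaimed; the cleanupStore wrapper itself is hypothetical.

// Sketch only: prune blobs no longer referenced by the index and log the result.
func cleanupStore(ctx context.Context, s *store.Layout) error {
    l := log.FromContext(ctx)
    count, size, err := s.CleanUp(ctx)
    if err != nil {
        return err
    }
    l.Infof("removed %d unreferenced blobs (%d bytes reclaimed)", count, size)
    return nil
}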


@@ -63,16 +63,16 @@ func TestLayout_AddOCI(t *testing.T) {
}
func setup(t *testing.T) func() error {
tmpdir, err := os.MkdirTemp("", "hauler")
tempDir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Fatal(err)
}
root = tmpdir
root = tempDir
ctx = context.Background()
return func() error {
os.RemoveAll(tmpdir)
os.RemoveAll(tempDir)
return nil
}
}


@@ -1,11 +1,65 @@
# v1 manifests
apiVersion: content.hauler.cattle.io/v1
kind: Images
metadata:
name: hauler-content-images-example
spec:
images:
- name: ghcr.io/hauler-dev/library/busybox
- name: ghcr.io/hauler-dev/library/busybox:stable
platform: linux/amd64
- name: gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
---
apiVersion: content.hauler.cattle.io/v1
kind: Charts
metadata:
name: hauler-content-charts-example
spec:
charts:
- name: rancher
repoURL: https://releases.rancher.com/server-charts/stable
- name: rancher
repoURL: https://releases.rancher.com/server-charts/stable
version: 2.8.4
- name: rancher
repoURL: https://releases.rancher.com/server-charts/stable
version: 2.8.3
- name: hauler-helm
repoURL: oci://ghcr.io/hauler-dev
- name: hauler-helm
repoURL: oci://ghcr.io/hauler-dev
version: 1.0.6
- name: hauler-helm
repoURL: oci://ghcr.io/hauler-dev
version: 1.0.4
- name: rancher-cluster-templates-0.5.2.tgz
repoURL: testdata
- name: chart-with-file-dependency-chart-1.0.0.tgz
repoURL: testdata
add-dependencies: true
---
apiVersion: content.hauler.cattle.io/v1
kind: Files
metadata:
name: hauler-content-files-example
spec:
files:
- path: https://get.rke2.io/install.sh
- path: https://get.rke2.io/install.sh
name: rke2-install.sh
- path: testdata/hauler-manifest.yaml
- path: testdata/hauler-manifest.yaml
name: hauler-manifest-local.yaml
---
# v1alpha1 manifests
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Images
metadata:
name: hauler-content-images-example
spec:
images:
- name: busybox
- name: busybox:stable
- name: ghcr.io/hauler-dev/library/busybox
- name: ghcr.io/hauler-dev/library/busybox:stable
platform: linux/amd64
- name: gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
---
@@ -32,7 +86,7 @@ spec:
repoURL: oci://ghcr.io/hauler-dev
version: 1.0.4
- name: rancher-cluster-templates-0.5.2.tgz
repoURL: .
repoURL: testdata
---
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Files


@@ -1,11 +1,46 @@
# v1 manifests
apiVersion: content.hauler.cattle.io/v1
kind: Images
metadata:
name: hauler-content-images-example
spec:
images:
- name: ghcr.io/hauler-dev/library/busybox
- name: ghcr.io/hauler-dev/library/busybox:stable
platform: linux/amd64
- name: gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
---
apiVersion: content.hauler.cattle.io/v1
kind: Charts
metadata:
name: hauler-content-charts-example
spec:
charts:
- name: rancher
repoURL: https://releases.rancher.com/server-charts/stable
version: 2.8.5
- name: hauler-helm
repoURL: oci://ghcr.io/hauler-dev
---
apiVersion: content.hauler.cattle.io/v1
kind: Files
metadata:
name: hauler-content-files-example
spec:
files:
- path: https://get.rke2.io
name: install.sh
- path: testdata/hauler-manifest.yaml
---
# v1alpha1 manifests
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Images
metadata:
name: hauler-content-images-example
spec:
images:
- name: busybox
- name: busybox:stable
- name: ghcr.io/hauler-dev/library/busybox
- name: ghcr.io/hauler-dev/library/busybox:stable
platform: linux/amd64
- name: gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
---