Compare commits

...

197 Commits

Author SHA1 Message Date
Zack Brady
e089c31879 minor tests updates (#437) 2025-05-01 11:07:00 -04:00
dependabot[bot]
b7b599e6ed bump golang.org/x/net in the go_modules group across 1 directory (#433)
bumps the go_modules group with 1 update in the / directory: [golang.org/x/net](https://github.com/golang/net).

Updates `golang.org/x/net` from 0.37.0 to 0.38.0
- [Commits](https://github.com/golang/net/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-04-30 22:21:37 -04:00
Adam Martin
ea53002f3a formatting and flag text updates
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-04-22 12:43:04 -04:00
Nathaniel Churchill
4d0f779ae6 add keyless signature verification (#434) 2025-04-21 15:27:17 -04:00
dependabot[bot]
4d0b407452 bump helm.sh/helm/v3 in the go_modules group across 1 directory (#430)
bumps the go_modules group with 1 update in the / directory: [helm.sh/helm/v3](https://github.com/helm/helm).


Updates `helm.sh/helm/v3` from 3.17.0 to 3.17.3
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.17.0...v3.17.3)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-version: 3.17.3
  dependency-type: direct:production
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-11 00:16:05 -05:00
Adam Martin
3b96a95a94 add --only flag to hauler store copy (for images) (#429)
* bump cosign fork and tidy

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* add --only to hauler store copy

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2025-04-08 09:27:12 -04:00
Adam Toy
f9a188259f fix tlog verification error/warning output (#428)
* fix tlog verification error/warning output
* extend error msg to avoid ambiguity
2025-04-04 16:51:26 -05:00
Adam Martin
5021f3ab6b cleanup new tlog flag typos and add shorthand (#426)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-03-28 13:15:48 -04:00
Adam Martin
db065a1088 default public transparency log verification to false to be airgap friendly but allow override (#425)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-03-27 09:13:23 -04:00
dependabot[bot]
01bf58de03 bump github.com/golang-jwt/jwt/v4 (#423)
bumps the go_modules group with 1 update in the / directory: [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt).


Updates `github.com/golang-jwt/jwt/v4` from 4.5.1 to 4.5.2
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 15:09:34 -04:00
dependabot[bot]
38b979d0c5 bump the go_modules group across 1 directory with 2 updates (#422)
bumps the go_modules group with 2 updates in the / directory: [github.com/containerd/containerd](https://github.com/containerd/containerd) and [golang.org/x/net](https://github.com/golang/net).


Updates `github.com/containerd/containerd` from 1.7.24 to 1.7.27
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.24...v1.7.27)

Updates `golang.org/x/net` from 0.33.0 to 0.36.0
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-type: direct:production
  dependency-group: go_modules
- dependency-name: golang.org/x/net
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-24 11:52:25 -04:00
dependabot[bot]
7de20a1f15 bump github.com/go-jose/go-jose/v3 (#417)
bumps the go_modules group with 1 update in the / directory: [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose).

Updates `github.com/go-jose/go-jose/v3` from 3.0.3 to 3.0.4
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.3...v3.0.4)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-19 23:14:27 -04:00
dependabot[bot]
088fde5aa9 bump github.com/go-jose/go-jose/v4 (#415)
bumps the go_modules group with 1 update in the / directory: [github.com/go-jose/go-jose/v4](https://github.com/go-jose/go-jose).

updates `github.com/go-jose/go-jose/v4` from 4.0.4 to 4.0.5
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v4.0.4...v4.0.5)

---

updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v4
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-28 03:29:02 -05:00
Adam Martin
eb275b9690 clear default manifest name if product flag used with sync (#412)
* clear default manifest name if product flag used with sync
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* only clear the default if the user did NOT explicitly set --filename
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2025-02-07 13:46:23 -05:00
Zack Brady
7d28df1949 updates for v1.2.0 (#408)
* updates for v1.2.0
* fixed readme typo
2025-02-05 13:26:34 -05:00
Zack Brady
08f566fb28 fixed remote code (#407) 2025-02-05 11:25:11 -05:00
Zack Brady
c465d2c143 added remote file fetch to load (#406)
* updated load flag and related logs [breaking change]
* fixed load flag typo
* added remote file fetch to load
* fixed merge/rebase typo

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:32:35 -05:00
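For reference, a minimal sketch of how the remote fetch would be invoked, assuming `--filename` accepts an HTTP(S) URL as this commit describes (the URL below is a hypothetical placeholder; the local form matches the integration tests later in this diff):
```
# local haul
hauler store load --filename haul.tar.zst
# remote haul fetched over HTTPS (hypothetical URL)
hauler store load --filename https://example.com/hauls/haul.tar.zst
```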
Zack Brady
39325585eb added remote and multiple file fetch to sync (#405)
* updated sync flag and related logs [breaking change]
* fixed typo in sync flag name
* and the last typo fix...
* added remote and multiple file fetch to sync

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:13:38 -05:00
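The multiple `--filename` form below matches the integration tests added later in this diff; the remote manifest URL is a hypothetical placeholder for the remote-fetch behavior this commit describes:
```
hauler store sync --filename testdata/hauler-manifest-pipeline.yaml --filename testdata/hauler-manifest.yaml
# remote manifest (hypothetical URL)
hauler store sync --filename https://example.com/manifests/hauler-manifest.yaml
```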
Zack Brady
535a82c1b5 updated save flag and related logs (#404)
Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:06:13 -05:00
Zack Brady
53cf953750 updated load flag and related logs [breaking change] (#403)
* updated load flag and related logs [breaking change]
* fixed load flag typo

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-02-05 09:01:18 -05:00
Zack Brady
ff144b1180 updated sync flag and related logs [breaking change] (#402)
* updated sync flag and related logs [breaking change]
* fixed typo in sync flag name
* and the last typo fix...
2025-02-05 08:54:54 -05:00
Zack Brady
938914ba5c upgraded api update to v1/updated dependencies (#400)
* initial api update to v1
* updated dependencies
* updated manifests
* last bit of updates
* fixed manifest typo
* added deprecation warning
2025-02-04 21:36:20 -05:00
Zack Brady
603249dea9 fixed consts for oci declarations (#398) 2025-02-03 08:14:44 -05:00
Adam Martin
37032f5379 fix for correctly grabbing platform post cosign 2.4 updates (#393) 2025-01-27 18:44:11 -05:00
Adam Martin
ec9ac48476 use cosign v2.4.1+carbide.2 to address containerd annotation in index.json (#390) 2025-01-19 10:32:12 -05:00
dependabot[bot]
5f5cd64c2f Bump the go_modules group across 1 directory with 2 updates (#385) 2025-01-10 19:59:36 -05:00
Adam Martin
882713b725 replace mholt/archiver with mholt/archives (#384) 2025-01-10 19:15:00 -05:00
Adam Martin
a20d7bf950 forked cosign bump to 2.4.1 and use as a library vs embedded binary (#383)
* only ignore project-root/store not all store paths
* remove embedded cosign binary in favor of fork library
2025-01-10 17:14:44 -05:00
Zack Brady
e97adcdfed cleaned up registry and improved logging (#378)
* cleaned up registry and improved logging
* last bit of logging improvements/import clean up

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2025-01-09 15:21:03 -05:00
dependabot[bot]
cc17b030a9 Bump golang.org/x/crypto in the go_modules group across 1 directory (#377)
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/crypto](https://github.com/golang/crypto).

Updates `golang.org/x/crypto` from 0.28.0 to 0.31.0
- [Commits](https://github.com/golang/crypto/compare/v0.28.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-14 01:02:58 -05:00
Zack Brady
090f4dc905 fixed cli desc for store env var (#374) 2024-12-09 08:13:33 -05:00
Zack Brady
74aa40c69b updated versions for go/k8s/helm (#373) 2024-12-04 16:10:36 -05:00
Zack Brady
f5e3b38a6d updated version flag to internal/flags (#369) 2024-12-04 14:29:51 -05:00
Zack Brady
01faf396bb renamed incorrectly named consts (#371) 2024-12-04 14:29:26 -05:00
Zack Brady
235218cfff added store env var (#370)
* added strictmode flag and consts
* updated code with strictmode
* added flag for retryoperation
* updated registry short flag
* added store env var
* added/fixed more env var code

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-12-04 14:26:53 -05:00
Zack Brady
4270a27819 adding ignore errors and retries for continue on error/fail on error (#368)
* added strictmode flag and consts
* updated code with strictmode
* added flag for retryoperation
* updated registry short flag
* updated strictmode to ignore errors
* fixed command description
* cleaned up error/debug logs
2024-12-02 17:18:58 -05:00
Zack Brady
1b77295438 updated/fixed hauler directory (#354)
* added env variables for haulerDir/tempDir
* updated hauler directory for the install script
* cleanup/fixes for install script
* updated variables based on feedback
* revert "updated variables based on feedback"
  * reverts commit 54f7a4d695
* minor restructure to root flags
* updated logic to include haulerdir flag
* cleaned up/formatted new logic
* more cleanup and formatting
2024-11-14 21:37:25 -05:00
Zack Brady
38c7d1b17a standardize consts (#353)
* removed k3s code
* standardize and formatted consts
* fixed consts for sync.go
* trying another fix for sync

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-11-06 18:31:06 -05:00
Zack Brady
2fa6c36208 removed cachedir code (#355)
co-authored-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
2024-11-06 14:56:33 -05:00
Zack Brady
dd50ed9dba removed k3s code (#352) 2024-11-06 10:46:38 -07:00
Zack Brady
fb100a27ac updated dependencies for go, helm, and k8s (#351)
* updated dependencies for go, helm, and k8s
* fixed dependency version for k8s
2024-11-01 17:06:13 -04:00
Jacob Blain Christen
3406d5453d [feature] build with boring crypto where available (#344) 2024-10-04 15:09:20 -07:00
Zack Brady
991f5b6bc1 updated workflow to goreleaser builds (#341)
* updated workflow to goreleaser builds
2024-10-02 11:12:32 -07:00
Zack Brady
0595ab043a added timeout to goreleaser workflow (#340) 2024-10-01 21:18:19 -04:00
Zack Brady
73e5c1ec8b trying new workflow build processes (#337)
* trying new workflow build processes
* added last bit to new try
2024-10-01 20:07:02 -04:00
Zack Brady
25d8cb83b2 improved workflow performance (#336) 2024-10-01 16:10:37 -04:00
Adam Martin
9f7229a36b have extract use proper ref (#335)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2024-10-01 16:04:06 -04:00
Zack Brady
b294b6f026 yet another workflow goreleaser fix (#334) 2024-10-01 13:21:16 -04:00
Zack Brady
ebd3fd66c8 even more workflow fixes (#333)
* reverted build hooks
* updated goreleaser arguments
2024-10-01 12:26:34 -04:00
Zack Brady
6373a476b5 added more fixes to github workflow (#332) 2024-10-01 09:23:30 -04:00
Zack Brady
2c7aacd105 fixed typo in hauler store save (#331)
* fixed typo in hauler store save
* update internal/flags/save.go

Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
Signed-off-by: Zack Brady <zackbrady123@gmail.com>

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-09-30 18:06:10 -04:00
Zack Brady
bbcbe0239a updates to fix build processes (#330) 2024-09-30 16:51:47 -04:00
Zack Brady
8a53a26a58 added integration tests for non hauler tarballs (#325)
* added tests for tarballs
* updated tests for tarball changes
* fixed tests/build for latest changes
2024-09-27 16:38:39 -04:00
Jacob Blain Christen
41d88954c6 bump: golang >= 1.23.1 (#328)
Signed-off-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
2024-09-24 11:27:28 -07:00
Adam Martin
caaed30297 add platform flag to store save (#329)
* add platform flag to store save

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* use platform parser from go-cr

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2024-09-24 11:18:19 -04:00
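Usage as exercised by the integration tests later in this diff (restricts saved images to the given platform):
```
hauler store save --filename store-amd64.tar.zst --platform linux/amd64
```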
Jacob Blain Christen
aee296d48d Update feature_request.md
Remove RFE from prompt.

Signed-off-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
2024-09-23 10:19:11 -07:00
Zack Brady
407ed94a0b updated/standardize command descriptions (#313)
* initial desc formatting/updates
* fixed typos
* updated commands base on feedback
* more updates based on feedback

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-09-20 18:19:49 -04:00
Adam Martin
15a9e1a3c4 use new annotation for 'store save' manifest.json (#324)
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2024-09-20 17:14:27 -04:00
Jacob Blain Christen
6510947bb9 enable docker load for hauler tarballs (#320)
- fixes #276

Signed-off-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2024-09-20 13:32:49 -04:00
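The round trip verified by the integration tests in this diff, using a single-platform haul so the tarball is docker-loadable:
```
hauler store save --filename store-amd64.tar.zst --platform linux/amd64
docker load --input store-amd64.tar.zst
docker images --all
```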
Adam Martin
01eebd54af bump to cosign v2.2.3-carbide.3 for new annotation (#322)
* bump cosign to v2.2.3-carbide.3
* use new annotation in 'store info' when listing images

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
2024-09-18 09:56:22 -07:00
Adam Martin
5aa55e9eda continue on error when adding images to store (#317)
* continue on error when adding images to store

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

* Update cmd/hauler/cli/store/add.go

Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
Signed-off-by: Adam Martin <42001113+amartin120@users.noreply.github.com>

---------

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Signed-off-by: Adam Martin <42001113+amartin120@users.noreply.github.com>
Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-09-04 14:11:07 -04:00
Brandon
6f8cd04a32 Update README.md (#318)
Removing the weird GA "disclaimer"

Signed-off-by: Brandon <bgulla@users.noreply.github.com>
2024-09-04 11:10:29 -07:00
Zack Brady
02231d716f fixed completion commands (#312) 2024-08-29 19:10:59 -04:00
Jacob Blain Christen
16fa03fec8 github.com/rancherfederal/hauler => hauler.dev/go/hauler (#311) 2024-08-26 13:54:06 -07:00
Jacob Blain Christen
51fe531c64 pages: enable go install hauler.dev/go/hauler (#310)
should fix:
```
go install hauler.dev/go/hauler@latest
go: hauler.dev/go/hauler@latest: unrecognized import path "hauler.dev/go/hauler": reading https://hauler.dev/go/hauler?go-get=1: 404 Not Found
```

Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-08-26 12:12:59 -07:00
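A quick way to sanity-check the fix, assuming the Pages site serves the standard `go-import` meta tag for the vanity path (the `grep` is illustrative, not part of the repo's tooling):
```
curl -s "https://hauler.dev/go/hauler?go-get=1" | grep go-import
go install hauler.dev/go/hauler@latest
```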
Jacob Blain Christen
1a6ce4290f Create CNAME
Signed-off-by: Jacob Blain Christen <jacob.blain.christen@ranchergovernment.com>
2024-08-26 11:32:55 -07:00
Jacob Blain Christen
e4ec7bed76 pages: initial workflow (#309)
Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-08-26 11:25:54 -07:00
Zack Brady
cb81823487 testing and linting updates (#305)
* updated makefile operations

* upgraded testdata and related tests

* initial updates for tests

* fixed errors with testdata

* last bit of updates to tests

* cleaned and commented makefile

* updated tests for latest merges

---------

Signed-off-by: Zack Brady <zackbrady123@gmail.com>
2024-08-26 12:27:18 -04:00
will
2d930b5653 feat-273: TLS Flags (#303)
* fix: move constant and flags to prevent loop

* feat: add tls cert to serve

* fix: add tls cli description

* fix: remove unnecessary code

* small updates/fixed unit test errors

* fix: migrate all flags, use exported vars

* fix: standardize to AddFlags

---------

Signed-off-by: will <30413278+wcrum@users.noreply.github.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2024-08-25 16:16:37 -04:00
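The TLS flags as used in the (currently commented-out) integration test steps later in this diff:
```
hauler store serve registry --port 5001 --readonly \
  --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key
hauler store serve fileserver --port 8000 --timeout 120 \
  --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key
```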
Zack Brady
bd0cd8f428 added list-repos flag (#298)
Co-authored-by: Adam Toy <atoy3731@gmail.com>
2024-08-23 09:25:02 -04:00
Zack Brady
d6b3c94920 fixed hauler login typo (#299)
* updated hauler login typo

* updated hauler login descs
2024-08-23 09:12:46 -04:00
Allen Conlon
20958826ef updated cobra function for shell completion (#304) 2024-08-22 22:06:16 -04:00
Zack Brady
d633eeffcc updated install.sh to remove github api (#293)
* updated install.sh to remove github api

Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>

---------

Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-08-14 12:27:46 -07:00
Adam Martin
c592551a37 fix image ref keys getting squashed when containing sigs/atts (#291)
* fix image ref keys getting squashed when containing sigs/atts

Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>

---------

Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
Signed-off-by: Adam Martin <adam.martin@ranchergovernment.com>
Co-authored-by: Adam Martin <adam.martin@rancherfederal.com>
2024-08-13 12:18:10 -07:00
Jacob Blain Christen
ef3eb05fce fix missing versin info in release build (#283)
Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-08-05 10:11:40 -07:00
dependabot[bot]
3f64914097 bump github.com/docker/docker in the go_modules group across 1 directory (#281)
Bumps the go_modules group with 1 update in the / directory: [github.com/docker/docker](https://github.com/docker/docker).

Updates `github.com/docker/docker` from 25.0.5+incompatible to 25.0.6+incompatible
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.5...v25.0.6)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
2024-08-01 17:51:34 -04:00
Zack Brady
6a74668e2c updated install script (install.sh) (#280)
* removed sudo requirement from install script
* updated install script to specify install directory
* cleaned up install script
* a bit more changes and updates to the install script
* updated install script variable syntax
* added missing logic to install script
2024-08-01 17:48:26 -04:00
Kamin Fay
0c5cf20e87 fix digest images being lost on load of hauls (Signed). (#259)
* Adding oci tests
* Fixed the oci pusher to split on the correct '@' symbol
* Reverted Pusher changes and adjusted nameMap references in Index

---------

Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-07-30 23:44:47 -04:00
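For context, a digest-pinned reference keeps everything after the last `@` as its digest; the example below is the reference exercised in the integration tests in this diff:
```
# <name>@sha256:<hex> -- the digest delimiter is the last '@' in the reference
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
```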
will
513719bc9e feat: add readonly flag (#277)
* updated flag from writeable to readonly (default=true)

---------

Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
Co-authored-by: Zack Brady <zackbrady123@gmail.com>
Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-07-30 11:26:39 -07:00
Zack Brady
047b7a7003 fixed makefile for goreleaser v2 changes (#278) 2024-07-30 13:18:17 -04:00
Zack Brady
a4685169c6 updated goreleaser versioning defaults (#279) 2024-07-30 13:17:40 -04:00
Jacob Blain Christen
47549615c4 update feature_request.md (#274)
- `[RFE]` ➡️ `[feature]`

Signed-off-by: Jacob Blain Christen <dweomer5@gmail.com>
2024-07-26 14:48:08 -04:00
Zack Brady
2d725026dc merge pull request #271 from zackbradys/removed-some-references
removed some old references
2024-07-23 13:56:59 -04:00
Zack Brady
60667b7116 updated old references 2024-07-22 18:50:11 -04:00
Zack Brady
7d62a1c98e updated actions workflow user 2024-07-22 18:46:53 -04:00
Zack Brady
894ffb1533 merge pull request #269 from zackbradys/add-dockerhub-support
added dockerhub to github actions workflow
2024-07-20 01:18:08 -04:00
Zack Brady
78b3442d23 added dockerhub to github actions workflow 2024-07-20 01:12:59 -04:00
Zack Brady
cd46febb6b merge pull request #268 from zackbradys/remove-helm-chart
removed helm chart
2024-07-20 01:10:37 -04:00
Zack Brady
0957a930dd removed helm chart 2024-07-20 00:57:25 -04:00
Zack Brady
a6bc6308d9 merge pull request #267 from zackbradys/main
added debug container and workflow
2024-07-19 22:21:38 -04:00
Zack Brady
1304cf6c76 added debug container and workflow 2024-07-17 00:01:24 -04:00
Zack Brady
f2e02c80c0 merge pull request #262 from zackbradys/main
updated products flag description
2024-07-12 22:52:43 -04:00
Zack Brady
25806e993e updated products flag description 2024-07-12 01:25:05 -04:00
Zack Brady
05e67bc750 merge pull request #255 from zackbradys/main
updated chart for release
2024-06-25 23:27:24 -04:00
Zack Brady
b43ed0503a updated chart for release 2024-06-25 23:14:44 -04:00
Zack Brady
27e2fc9de0 merge pull request #254 from zackbradys/main
fixed workflow errors/warnings
2024-06-25 22:48:30 -04:00
Zack Brady
d32d75b93e fixed workflow errors/warnings 2024-06-25 22:43:31 -04:00
Zack Brady
ceb77601d0 merge pull request #252 from zackbradys/main
overhauling github actions and workflows
2024-06-24 22:52:08 -04:00
Zack Brady
d90545a9e4 fixed permissions on testdata 2024-06-17 22:23:03 -04:00
Zack Brady
bef141ab67 updated chart versions (will need to update again) 2024-06-17 19:45:18 -04:00
Zack Brady
385d767c2a last bit of fixes to workflow 2024-06-17 19:42:20 -04:00
Zack Brady
22edc77506 updated unit test workflow 2024-06-14 20:56:37 -04:00
Zack Brady
9058797bbc updated goreleaser deprecations 2024-06-14 20:43:32 -04:00
Zack Brady
35e2f655da added helm chart release job 2024-06-14 20:37:58 -04:00
Zack Brady
f5c0f6f0ae updated github template names 2024-06-14 20:04:01 -04:00
Zack Brady
0ec77b4168 merge pull request #248 from zackbradys/main
formatted all code with `go fmt`
2024-06-14 16:46:21 -04:00
Zack Brady
7a7906b8ea updated imports (and go fmt) 2024-06-13 23:44:06 -04:00
Zack Brady
f4774445f6 merge pull request #240 from pat-earl/doc_updates
added some documentation text to sync command
2024-06-10 18:13:09 -04:00
Zack Brady
d59b29bfce formatted gitignore to match dockerignore 2024-06-05 14:40:50 -04:00
Zack Brady
fd702202ac formatted all code (go fmt) 2024-06-05 08:25:45 -04:00
Adam Martin
9e9565717b Merge pull request #245 from ethanchowell/ehowell/helm-client
Configure chart commands to use helm clients for OCI and private regi…
2024-06-04 13:16:42 -04:00
Zack Brady
bfe47ae141 updated chart tests for new features 2024-06-03 23:31:50 -04:00
Zack Brady
ebab7f38a0 merge pull request #247 from kaminfay/dev/add-fileserver-timeout-flag
adding the timeout flag for fileserver command and setting default timeout to 60 seconds
2024-06-03 22:33:36 -04:00
Kamin Fay
f0cba3c2c6 Adding the timeout flag for fileserver command 2024-05-28 15:28:20 -04:00
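Usage as exercised by the integration tests in this diff (the default is 60 seconds per the merge description; an explicit value overrides it):
```
hauler store serve fileserver --port 8000 --timeout 120
```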
Ethan Howell
286120da50 Configure chart commands to use helm clients for OCI and private registry support 2024-05-24 12:06:16 -04:00
Patrick Earl
dcdeb93518 Added some documentation text to sync command 2024-05-02 14:17:38 -04:00
Adam Martin
f7c24f6129 Merge pull request #235 from rancherfederal/dependabot/go_modules/golang.org/x/net-0.23.0
Bump golang.org/x/net from 0.17.0 to 0.23.0
2024-04-23 16:40:45 -04:00
Adam Martin
fe88d7033c Merge pull request #234 from amartin120/dup-digest-bugfix
fix for dup digest smashing in cosign
2024-04-23 15:49:54 -04:00
dependabot[bot]
ef31984c97 Bump golang.org/x/net from 0.17.0 to 0.23.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.17.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.17.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-23 19:46:05 +00:00
Adam Martin
2889f30275 fix for dup digest smashing in cosign
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-04-23 14:06:40 -04:00
Zack Brady
0674e0ab30 merge pull request #229 from zackbradys/vagrant
removed vagrant scripts
2024-04-15 09:36:24 -04:00
Zack Brady
d645c52135 removed vagrant scripts 2024-04-14 08:45:51 -04:00
Zack Brady
44baab3213 merge pull request #227 from zackbradys/helmifying
helmifying hauler
2024-04-11 22:02:37 -04:00
Zack Brady
1a317b0172 merge pull request #226 from zackbradys/testdata
updated hauler testdata
2024-04-11 21:57:37 -04:00
Zack Brady
128cb3b252 last bit of updates and formatting of chart 2024-04-07 00:23:49 -04:00
Zack Brady
91ff998634 updated hauler testdata 2024-04-06 15:19:53 -04:00
Zack Brady
8ac1ecaf29 adding functionality and cleaning up 2024-04-06 02:06:33 -04:00
Zack Brady
7447aad20a added initial helm chart 2024-04-06 00:28:59 -04:00
Zack Brady
003456d8ab merge pull request #225 from zackbradys/main
removed tag in release workflow
2024-04-05 21:36:26 -04:00
Zack Brady
f44b8b93af removed tag in release workflow 2024-04-05 21:32:58 -04:00
Zack Brady
e405840642 merge pull request #224 from zackbradys/main
fixed image ref in release workflow
2024-04-05 20:45:53 -04:00
Zack Brady
8c9aa909b0 updated/fixed image ref in release workflow 2024-04-05 20:43:36 -04:00
Zack Brady
8670489520 merge pull request #223 from zackbradys/main
fixed platforms in release workflow
2024-04-05 20:04:57 -04:00
Zack Hodgson Brady
f20d4052a4 updated/fixed platforms in release workflow 2024-04-05 20:02:00 -04:00
Zack Brady
c84bca43d2 updated/cleaned github actions (#222)
* cleaned up unit test workflow
* updated/cleaned up release workflow
* fixed workflow typo
2024-04-05 16:12:49 -07:00
Claus Albøge
6863d91f69 Make Product Registry configurable (#194)
Co-authored-by: Claus Albøge <ca@netic.dk>
2024-04-05 14:40:26 -07:00
Zack Brady
16eea6ac2a updated fileserver directory name (#219) 2024-04-05 14:36:39 -07:00
Adam Martin
f6f227567c Merge pull request #221 from amartin120/fix-file-logging
fix logging for files
2024-04-01 12:45:58 -04:00
Adam Martin
eb810c16f5 fix logging for files
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-04-01 12:23:05 -04:00
Adam Martin
b18f55ea60 Merge pull request #220 from amartin120/temp-override
temp dir override for `hauler store load`
2024-04-01 10:40:31 -04:00
Adam Martin
4bbe622073 add extra info for the tempdir override flag
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-04-01 09:39:34 -04:00
Adam Martin
ea5bcb36ae tempdir override flag for load
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-04-01 09:34:26 -04:00
Adam Martin
5c7daddfef Merge pull request #218 from amartin120/remove-cache-flag
deprecate misleading cache flag from store
2024-03-29 13:48:51 -04:00
Adam Martin
7083f3a4f3 deprecate the cache flag instead of remove
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-29 13:40:24 -04:00
Adam Martin
8541d73a0d Merge pull request #217 from amartin120/logging-updates
better logging when adding to store
2024-03-29 11:36:45 -04:00
Adam Martin
49d705d14c Merge pull request #216 from amartin120/cosign-update
update to v2.2.3 of RGS cosign fork
2024-03-29 11:36:29 -04:00
Clayton Castro
722851d809 Merge pull request #215 from clanktron/container
add container definition
2024-03-28 16:20:35 -07:00
clayton
82aedc867a switch to using bci-golang as builder image 2024-03-28 13:53:22 -07:00
clayton
e8fb37c6ed fix: ensure /tmp for hauler store load 2024-03-28 13:49:01 -07:00
Adam Martin
545b3f8acd added the copy back for now
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-28 16:16:05 -04:00
Adam Martin
3ae92fe20a remove copy at the image sync not needed with cosign update
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-28 15:00:00 -04:00
Adam Martin
35538bf45a removed misleading cache flag
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-28 14:24:37 -04:00
Adam Martin
b6701bbfbc better logging when adding to store
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-28 14:16:22 -04:00
Adam Martin
14738c3cd6 update to v2.2.3 of our cosign fork
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-03-28 13:11:26 -04:00
clayton
0657fd80fe add: dockerignore 2024-03-27 02:25:12 -07:00
clayton
d132e8b8e0 add: Dockerfile 2024-03-27 02:25:06 -07:00
Adam Martin
29367c152e Merge pull request #213 from rancherfederal/dependabot/go_modules/google.golang.org/protobuf-1.33.0
Bump google.golang.org/protobuf from 1.31.0 to 1.33.0
2024-03-26 14:58:08 -04:00
Adam Martin
185ae6bd74 Merge pull request #212 from rancherfederal/dependabot/go_modules/github.com/docker/docker-25.0.5incompatible
Bump github.com/docker/docker from 25.0.1+incompatible to 25.0.5+incompatible
2024-03-26 14:57:57 -04:00
dependabot[bot]
b6c78d3925 Bump google.golang.org/protobuf from 1.31.0 to 1.33.0
Bumps google.golang.org/protobuf from 1.31.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-26 13:04:41 +00:00
dependabot[bot]
e718d40744 Bump github.com/docker/docker
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 25.0.1+incompatible to 25.0.5+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v25.0.1...v25.0.5)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-26 13:03:05 +00:00
Zack Brady
1505bfb3af merge pull request #207 from zackbradys/main
updated and added new logos
2024-03-20 08:17:00 -04:00
Zack Hodgson Brady
e27b5b3cd1 updated and added new logos 2024-03-19 18:22:21 -04:00
Zack Brady
0472c8fc65 merge pull request #203 from zackbradys/main
updated github files
2024-03-11 09:28:08 -04:00
Zack Hodgson Brady
70a48f2efe updated github files 2024-03-10 16:45:41 -04:00
Adam Martin
bb2a8bfbec Merge pull request #197 from fgiudici/file_name_option
Fix --name option in "store add file" command
2024-02-27 08:33:09 -05:00
Adam Martin
2779c649c2 Merge pull request #195 from rancherfederal/dependabot/go_modules/helm.sh/helm/v3-3.14.2
Bump helm.sh/helm/v3 from 3.14.1 to 3.14.2
2024-02-27 08:00:57 -05:00
Francesco Giudici
8120537af2 Fix --name option in "store add file" command
Fixes: #196

Signed-off-by: Francesco Giudici <francesco.giudici@suse.com>
2024-02-27 09:54:48 +01:00
dependabot[bot]
9cdab516f0 Bump helm.sh/helm/v3 from 3.14.1 to 3.14.2
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.14.1 to 3.14.2.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.14.1...v3.14.2)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-27 00:21:10 +00:00
Adam Martin
d136d1bfd2 Merge pull request #191 from atanasdinov/cosign-error-code
Exit with status code 1 if cosign is not configured
2024-02-23 08:49:25 -05:00
Atanas Dinov
003560c3b3 Exit with status code 1 if cosign is not configured 2024-02-23 10:39:47 +02:00
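A sketch of how the new behavior surfaces in scripts, assuming a store operation fails because cosign is not configured:
```
hauler store add image busybox
echo "exit code: $?"   # 1 when cosign is not configured, per this change
```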
Adam Martin
1b9d057f7a Merge pull request #188 from amartin120/registry-flag
add registry flag to sync cli to match annotation functionality
2024-02-22 08:57:44 -05:00
Adam Martin
2764e2d3ea Merge pull request #187 from amartin120/fix-exitcode
fix exit code on error
2024-02-22 08:57:31 -05:00
Zack Brady
360049fe19 reverting changes for logos (#189) 2024-02-21 14:35:07 -07:00
Brandon
79b240d17f Merge pull request #186 from rancherfederal/bdev
adding graphics to README.md
2024-02-20 06:50:36 -05:00
bgulla
214704bcfb adding graphics 2024-02-20 06:46:51 -05:00
Adam Martin
ef73fff01a fix exit code on error
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-19 16:45:04 -05:00
Adam Martin
0c6fdc86da add registry flag to cli for sync 2024-02-19 10:10:47 -05:00
Zack Brady
7fb537a31a merge pull request #184 from zackbradys/main
updated `readme` and `install.sh` for hauler `v1.0.0`
2024-02-17 22:12:44 -05:00
Zack Brady
6ca7fb6255 updated readme and removed roadmap 2024-02-17 15:00:59 -05:00
Zack Brady
d70a867283 updated/cleaned up install.sh 2024-02-17 10:53:10 -05:00
Adam Martin
46ea8b5df9 Merge pull request #180 from amartin120/remove-deprecated-cmds
remove deprecated commands
2024-02-17 09:09:05 -05:00
Adam Martin
5592ec0f88 remove deprecated commands
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-16 09:10:29 -05:00
Adam Martin
e8254371c0 Merge pull request #177 from amartin120/add-login
add login command
2024-02-16 08:16:58 -05:00
Adam Martin
8d2a84d27c Merge pull request #179 from rancherfederal/dependabot/go_modules/helm.sh/helm/v3-3.14.1
Bump helm.sh/helm/v3 from 3.14.0 to 3.14.1
2024-02-16 08:14:00 -05:00
Adam Martin
72734ecc76 Merge pull request #178 from amartin120/bug-file-name
bug-fix: handle complex file names
2024-02-16 08:13:28 -05:00
dependabot[bot]
4759879a5d Bump helm.sh/helm/v3 from 3.14.0 to 3.14.1
Bumps [helm.sh/helm/v3](https://github.com/helm/helm) from 3.14.0 to 3.14.1.
- [Release notes](https://github.com/helm/helm/releases)
- [Commits](https://github.com/helm/helm/compare/v3.14.0...v3.14.1)

---
updated-dependencies:
- dependency-name: helm.sh/helm/v3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-16 04:48:57 +00:00
Adam Martin
dbcfe13fb6 bug-fix: handle complex file names
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-15 19:55:38 -05:00
Adam Martin
cd8d4f6e46 add login command
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-15 15:54:23 -05:00
Adam Martin
e15c8d54fa Merge pull request #176 from amartin120/info-totals
update to add size totals and cosign artifacts to the `info` command
2024-02-15 12:39:17 -05:00
Adam Martin
ccd529ab48 update to add size totals and cosign bits to the info command
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-14 13:03:36 -05:00
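The `info` output forms exercised by the integration tests in this diff, where the new size totals and cosign artifacts are listed:
```
hauler store info --output table
hauler store info --output json
```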
Adam Martin
3cf4afe6d1 Merge pull request #173 from amartin120/sync-annotations
image spec manifest annotations - key/platform/registry
2024-02-12 13:10:13 -05:00
Adam Martin
0c55d00d49 switch the 'apply the registry override first in a image sync
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-11 10:58:31 -05:00
Adam Martin
6c2b97042e switch the 'not a multi-arch image' log message to be debug
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-11 10:37:40 -05:00
Adam Martin
be22e56f27 fix whitspace issue
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 23:32:42 -05:00
Adam Martin
c8ea279c0d add better logging for save
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 23:30:34 -05:00
Adam Martin
59ff02b52b add annotations for registry
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 22:38:11 -05:00
Adam Martin
8b3398018a add annotations for key and platform
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 21:07:29 -05:00
128 changed files with 5266 additions and 2990 deletions


@@ -1,33 +1,51 @@
---
name: Bug Report
about: Create a report to help us improve!
about: Submit a bug report to help us improve!
title: '[BUG]'
labels: 'kind/bug'
labels: 'bug'
assignees: ''
---
<!-- Thank you for helping us to improve Hauler! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
<!-- Thank you for helping us to improve Hauler! We welcome all bug reports. Please fill out each area of the template so we can better assist you. Comments like this will be hidden when you submit, but you can delete them if you wish. -->
**Environmental Info:**
*
<!-- Provide the output of "uname -a" -->
-
**Hauler Version:**
*
**System CPU architecture, OS, and Version:**
* <!-- Provide the output from "uname -a" on the system where Hauler is installed -->
<!-- Provide the output of "hauler version" -->
**Describe the bug:**
* <!-- A clear and concise description of the bug. -->
-
**Steps To Reproduce:**
* <!-- A clear and concise way to reproduce the bug. -->
**Describe the Bug:**
**Expected behavior:**
* <!-- A clear and concise description of what you expected to happen, without the bug. -->
<!-- Provide a clear and concise description of the bug -->
**Actual behavior:**
* <!-- A clear and concise description of what actually happened. -->
-
**Additional context / logs:**
* <!-- Add any other context and/or logs about the problem here. -->
**Steps to Reproduce:**
<!-- Provide a clear and concise way to reproduce the bug -->
-
**Expected Behavior:**
<!-- Provide a clear and concise description of what you expected to happen -->
-
**Actual Behavior:**
<!-- Provide a clear and concise description of what actually happens -->
-
**Additional Context:**
<!-- Provide any other context and/or logs about the bug -->
-


@@ -1,21 +1,33 @@
---
name: Feature Request
about: Create a report to help us improve!
title: '[RFE]'
labels: 'kind/rfe'
about: Submit a feature request for us to improve!
title: '[feature]'
labels: 'enhancement'
assignees: ''
---
<!-- Thanks for helping us to improve Hauler! We welcome all requests for enhancements (RFEs). Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
<!-- Thank you for helping us to improve Hauler! We welcome all requests for enhancements (RFEs). Please fill out each area of the template so we can better assist you. Comments like this will be hidden when you submit, but you can delete them if you wish. -->
**Is your feature request related to a problem? Please describe.**
* <!-- A clear and concise description of the problem. -->
**Is this Feature/Enhancement related to an Existing Problem? If so, please describe:**
**Describe the solution you'd like**
* <!-- A clear and concise description of what you want to happen. -->
<!-- Provide a clear and concise description of the problem -->
**Describe alternatives you've considered**
* <!-- A clear and concise description of any alternative solutions or features you've considered. -->
-
**Additional context**
* <!-- Add any other context or screenshots about the feature request here. -->
**Describe Proposed Solution(s):**
<!-- Provide a clear and concise description of what you want to happen -->
-
**Describe Possible Alternatives:**
<!-- Provide a clear and concise description of any alternative solutions or features you've considered -->
-
**Additional Context:**
<!-- Provide a clear and concise description of the problem -->
-


@@ -1,20 +1,36 @@
**Please check below, if the PR fulfills these requirements:**
- [ ] The commit message follows the guidelines.
- [ ] Tests for the changes have been added (for bug fixes / features).
- [ ] Docs have been added / updated (for bug fixes / features).
- [ ] Commit(s) and code follow the repositories guidelines.
- [ ] Test(s) have been added or updated to support these change(s).
- [ ] Doc(s) have been added or updated to support these change(s).
<!-- Comments like this will be hidden when you submit, but you can delete them if you wish. -->
**What kind of change does this PR introduce?**
* <!-- Bug fix, feature, docs update, ... -->
**Associated Links:**
**What is the current behavior?**
* <!-- You can also link to an open issue here -->
<!-- Provide any associated or linked related to these change(s) -->
**What is the new behavior (if this is a feature change)?**
* <!-- What changes did this PR introduce? -->
-
**Does this PR introduce a breaking change?**
* <!-- What changes might users need to make in their application due to this PR? -->
**Types of Changes:**
**Other information**:
* <!-- Any additional information -->
<!-- What is the type of change? Bugfix, Feature, Breaking Change, etc... -->
-
**Proposed Changes:**
<!-- Provide the high level and low level description of your change(s) so we can better understand these change(s) -->
-
**Verification/Testing of Changes:**
<!-- How can the changes be verified? Provide the steps necessary to reproduce and verify the proposed change(s) -->
-
**Additional Context:**
<!-- Provide any additional information, such as if this is a small or large or complex change. Feel free to kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc... -->
-

.github/workflows/pages.yaml (new file)

@@ -0,0 +1,47 @@
name: Pages Workflow
on:
workflow_dispatch:
push:
branches:
- main
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: "pages"
cancel-in-progress: false
jobs:
deploy-pages:
name: Deploy GitHub Pages
runs-on: ubuntu-latest
timeout-minutes: 30
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Upload Pages Artifacts
uses: actions/upload-pages-artifact@v3
with:
path: './static'
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -1,30 +1,60 @@
name: CI
name: Release Workflow
on:
workflow_dispatch:
push:
tags:
tags:
- '*'
jobs:
goreleaser:
name: GoReleaser Job
runs-on: ubuntu-latest
timeout-minutes: 30
timeout-minutes: 60
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Set Up Go
uses: actions/setup-go@v5
with:
go-version: 1.21.x
go-version-file: go.mod
check-latest: true
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Authenticate to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Authenticate to DockerHub Container Registry
uses: docker/login-action@v3
with:
registry: docker.io
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v2
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: latest
args: release --rm-dist -p 1
version: "~> v2"
args: "release --clean --timeout 60m"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
HOMEBREW_TAP_GITHUB_TOKEN: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}
DOCKER_CLI_EXPERIMENTAL: "enabled"

.github/workflows/tests.yaml (new file)

@@ -0,0 +1,337 @@
name: Tests Workflow
on:
workflow_dispatch:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
unit-tests:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Set Up Go
uses: actions/setup-go@v5
with:
go-version-file: go.mod
check-latest: true
- name: Install Go Releaser
uses: goreleaser/goreleaser-action@v6
with:
install-only: true
- name: Install Dependencies
run: |
sudo apt-get update
sudo apt-get install -y make
sudo apt-get install -y build-essential
- name: Run Makefile Targets
run: |
make build-all
- name: Upload Hauler Binaries
uses: actions/upload-artifact@v4
with:
name: hauler-binaries
path: dist/*
- name: Upload Coverage Report
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: coverage.out
integration-tests:
name: Integration Tests
runs-on: ubuntu-latest
needs: [unit-tests]
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Install Dependencies
run: |
sudo apt-get update
sudo apt-get install -y unzip
sudo apt-get install -y tree
- name: Download Artifacts
uses: actions/download-artifact@v4
with:
name: hauler-binaries
path: dist
- name: Prepare Hauler for Tests
run: |
pwd
ls -la
ls -la dist/
chmod -R 755 dist/ testdata/certificate-script.sh
sudo mv dist/hauler_linux_amd64_v1/hauler /usr/local/bin/hauler
./testdata/certificate-script.sh && sudo chown -R $(whoami) testdata/certs/
- name: Verify - hauler version
run: |
hauler version
- name: Verify - hauler completion
run: |
hauler completion
hauler completion bash
hauler completion fish
hauler completion powershell
hauler completion zsh
- name: Verify - hauler help
run: |
hauler help
- name: Verify - hauler login
run: |
hauler login --help
hauler login docker.io --username ${{ secrets.DOCKERHUB_USERNAME }} --password ${{ secrets.DOCKERHUB_TOKEN }}
echo ${{ secrets.GITHUB_TOKEN }} | hauler login ghcr.io -u ${{ github.repository_owner }} --password-stdin
- name: Remove Hauler Store Credentials
run: |
rm -rf ~/.docker/config.json
- name: Verify - hauler store
run: |
hauler store --help
- name: Verify - hauler store add
run: |
hauler store add --help
- name: Verify - hauler store add chart
run: |
hauler store add chart --help
# verify via helm repository
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.4
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable --version 2.8.3 --verify
# verify via oci helm repository
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --version 1.0.6
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --version 1.0.4 --verify
# verify via local helm repository
curl -sfOL https://github.com/rancherfederal/rancher-cluster-templates/releases/download/rancher-cluster-templates-0.5.2/rancher-cluster-templates-0.5.2.tgz
hauler store add chart rancher-cluster-templates-0.5.2.tgz --repo .
curl -sfOL https://github.com/rancherfederal/rancher-cluster-templates/releases/download/rancher-cluster-templates-0.5.1/rancher-cluster-templates-0.5.1.tgz
hauler store add chart rancher-cluster-templates-0.5.1.tgz --repo . --version 0.5.1
curl -sfOL https://github.com/rancherfederal/rancher-cluster-templates/releases/download/rancher-cluster-templates-0.5.0/rancher-cluster-templates-0.5.0.tgz
hauler store add chart rancher-cluster-templates-0.5.0.tgz --repo . --version 0.5.0 --verify
# verify via the hauler store contents
hauler store info
- name: Verify - hauler store add file
run: |
hauler store add file --help
# verify via remote file
hauler store add file https://get.rke2.io/install.sh
hauler store add file https://get.rke2.io/install.sh --name rke2-install.sh
# verify via local file
hauler store add file testdata/hauler-manifest.yaml
hauler store add file testdata/hauler-manifest.yaml --name hauler-manifest-local.yaml
# verify via the hauler store contents
hauler store info
- name: Verify - hauler store add image
run: |
hauler store add image --help
# verify via image reference
hauler store add image busybox
# verify via image reference with version and platform
hauler store add image busybox:stable --platform linux/amd64
# verify via image reference with full reference
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
# verify via the hauler store contents
hauler store info
- name: Verify - hauler store copy
run: |
hauler store copy --help
# need more tests here
- name: Verify - hauler store extract
run: |
hauler store extract --help
# verify via extracting hauler store content
hauler store extract hauler/hauler-manifest-local.yaml:latest
# view extracted content from store
cat hauler-manifest-local.yaml
- name: Verify - hauler store info
run: |
hauler store info --help
# verify via table output
hauler store info --output table
# verify via json output
hauler store info --output json
# verify via filtered output (chart)
hauler store info --type chart
# verify via filtered output (file)
hauler store info --type file
# verify via filtered output (image)
hauler store info --type image
# verify store directory structure
tree -hC store
- name: Verify - hauler store save
run: |
hauler store save --help
# verify via save
hauler store save
# verify via save with filename
hauler store save --filename store.tar.zst
# verify via save with filename and platform (amd64)
hauler store save --filename store-amd64.tar.zst --platform linux/amd64
- name: Remove Hauler Store Contents
run: |
rm -rf store
hauler store info
- name: Verify - hauler store load
run: |
hauler store load --help
# verify via load
hauler store load
# verify via load with multiple files
hauler store load --filename haul.tar.zst --filename store.tar.zst
# verify via load with filename and temp directory
hauler store load --filename store.tar.zst --tempdir /opt
# verify via load with filename and platform (amd64)
hauler store load --filename store-amd64.tar.zst
- name: Verify Hauler Store Contents
run: |
# verify store
hauler store info
# verify store directory structure
tree -hC store
- name: Verify - docker load
run: |
docker load --help
# verify via load
docker load --input store-amd64.tar.zst
- name: Verify Docker Images Contents
run: |
docker images --help
# verify images
docker images --all
- name: Remove Hauler Store Contents
run: |
rm -rf store haul.tar.zst store.tar.zst store-amd64.tar.zst
hauler store info
- name: Verify - hauler store sync
run: |
hauler store sync --help
# download local helm repository
curl -sfOL https://github.com/rancherfederal/rancher-cluster-templates/releases/download/rancher-cluster-templates-0.5.2/rancher-cluster-templates-0.5.2.tgz
# verify via sync
hauler store sync --filename testdata/hauler-manifest-pipeline.yaml
# verify via sync with multiple files
hauler store sync --filename testdata/hauler-manifest-pipeline.yaml --filename testdata/hauler-manifest.yaml
# need more tests here
- name: Verify - hauler store serve
run: |
hauler store serve --help
- name: Verify - hauler store serve registry
run: |
hauler store serve registry --help
# verify via registry
hauler store serve registry &
until curl -sf http://localhost:5000/v2/_catalog; do : ; done
pkill -f "hauler store serve registry"
# verify via registry with different port
hauler store serve registry --port 5001 &
until curl -sf http://localhost:5001/v2/_catalog; do : ; done
pkill -f "hauler store serve registry --port 5001"
# verify via registry with different port and readonly
hauler store serve registry --port 5001 --readonly &
until curl -sf http://localhost:5001/v2/_catalog; do : ; done
pkill -f "hauler store serve registry --port 5001 --readonly"
# verify via registry with different port with readonly with tls
# hauler store serve registry --port 5001 --readonly --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key &
# until curl -sf --cacert testdata/certs/cacerts.pem https://localhost:5001/v2/_catalog; do : ; done
# pkill -f "hauler store serve registry --port 5001 --readonly --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key"
- name: Verify - hauler store serve fileserver
run: |
hauler store serve fileserver --help
# verify via fileserver
hauler store serve fileserver &
until curl -sf http://localhost:8080; do : ; done
pkill -f "hauler store serve fileserver"
# verify via fileserver with different port
hauler store serve fileserver --port 8000 &
until curl -sf http://localhost:8000; do : ; done
pkill -f "hauler store serve fileserver --port 8000"
# verify via fileserver with different port and timeout
hauler store serve fileserver --port 8000 --timeout 120 &
until curl -sf http://localhost:8000; do : ; done
pkill -f "hauler store serve fileserver --port 8000 --timeout 120"
# verify via fileserver with different port with timeout and tls
# hauler store serve fileserver --port 8000 --timeout 120 --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key &
# until curl -sf --cacert testdata/certs/cacerts.pem https://localhost:8000; do : ; done
# pkill -f "hauler store serve fileserver --port 8000 --timeout 120 --tls-cert testdata/certs/server-cert.crt --tls-key testdata/certs/server-cert.key"
- name: Verify Hauler Store Contents
run: |
# verify store
hauler store info
# verify store directory structure
tree -hC store
# verify registry directory structure
tree -hC registry
# verify fileserver directory structure
tree -hC fileserver
- name: Create Hauler Report
run: |
hauler version >> hauler-report.txt
hauler store info --output table >> hauler-report.txt
- name: Remove Hauler Store Contents
run: |
rm -rf store registry fileserver
hauler store info
- name: Upload Hauler Report
uses: actions/upload-artifact@v4
with:
name: hauler-report
path: hauler-report.txt


@@ -1,41 +0,0 @@
name: Unit Test
on:
push:
paths-ignore:
- "**.md"
- ".github/**"
- "!.github/workflows/unittest.yaml"
pull_request:
paths-ignore:
- "**.md"
- ".github/**"
- "!.github/workflows/unitcoverage.yaml"
workflow_dispatch: {}
jobs:
test:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.21.x
- name: Run Unit Tests
run: |
mkdir -p cmd/hauler/binaries
touch cmd/hauler/binaries/dummy.txt
go test -race -covermode=atomic -coverprofile=coverage.out ./pkg/... ./internal/... ./cmd/...
- name: On Failure, Launch Debug Session
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3
timeout-minutes: 5
- name: Upload Results To Codecov
uses: codecov/codecov-action@v1
with:
files: ./coverage.out
verbose: true # optional (default = false)

.gitignore

@@ -1,9 +1,4 @@
.DS_Store
# Vagrant
.vagrant
# Editor directories and files
**/.DS_Store
.idea
.vscode
*.suo
@@ -12,20 +7,12 @@
*.sln
*.sw?
*.dir-locals.el
# old, ad-hoc ignores
artifacts
local-artifacts
airgap-scp.sh
# test artifacts
*.tar*
*.out
# generated
dist/
tmp/
bin/
/store/
/registry/
cmd/hauler/binaries
registry/
fileserver/
cmd/hauler/binaries
testdata/certs/
coverage.out


@@ -1,19 +1,24 @@
version: 2
project_name: hauler
before:
hooks:
- go mod tidy
- go mod download
- rm -rf cmd/hauler/binaries
- go fmt ./...
- go vet ./...
- go test ./... -cover -race -covermode=atomic -coverprofile=coverage.out
release:
prerelease: auto
make_latest: false
env:
- vpkg=github.com/rancherfederal/hauler/internal/version
- cosign_version=v2.2.2+carbide.2
- vpkg=hauler.dev/go/hauler/internal/version
- cosign_version=v2.2.3+carbide.3
builds:
- main: cmd/hauler/main.go
- dir: ./cmd/hauler/.
goos:
- linux
- darwin
@@ -23,27 +28,94 @@ builds:
- arm64
ldflags:
- -s -w -X {{ .Env.vpkg }}.gitVersion={{ .Version }} -X {{ .Env.vpkg }}.gitCommit={{ .ShortCommit }} -X {{ .Env.vpkg }}.gitTreeState={{if .IsGitDirty}}dirty{{else}}clean{{end}} -X {{ .Env.vpkg }}.buildDate={{ .Date }}
hooks:
pre:
- mkdir -p cmd/hauler/binaries
- wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/{{ .Env.cosign_version }}/cosign-{{ .Os }}-{{ .Arch }}{{ if eq .Os "windows" }}.exe{{ end }}
post:
- rm -rf cmd/hauler/binaries
env:
- CGO_ENABLED=0
- GOEXPERIMENT=boringcrypto
universal_binaries:
- replace: false
changelog:
skip: false
disable: false
use: git
brews:
- name: hauler
tap:
owner: rancherfederal
repository:
owner: hauler-dev
name: homebrew-tap
token: "{{ .Env.HOMEBREW_TAP_GITHUB_TOKEN }}"
folder: Formula
directory: Formula
description: "Hauler CLI"
dockers:
- id: hauler-amd64
goos: linux
goarch: amd64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/amd64"
- "--target=release"
image_templates:
- "docker.io/hauler/hauler-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-amd64:{{ .Version }}"
- id: hauler-arm64
goos: linux
goarch: arm64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/arm64"
- "--target=release"
image_templates:
- "docker.io/hauler/hauler-arm64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-arm64:{{ .Version }}"
- id: hauler-debug-amd64
goos: linux
goarch: amd64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/amd64"
- "--target=debug"
image_templates:
- "docker.io/hauler/hauler-debug-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-amd64:{{ .Version }}"
- id: hauler-debug-arm64
goos: linux
goarch: arm64
use: buildx
dockerfile: Dockerfile
build_flag_templates:
- "--platform=linux/arm64"
- "--target=debug"
image_templates:
- "docker.io/hauler/hauler-debug-arm64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-arm64:{{ .Version }}"
docker_manifests:
- id: hauler-docker
use: docker
name_template: "docker.io/hauler/hauler:{{ .Version }}"
image_templates:
- "docker.io/hauler/hauler-amd64:{{ .Version }}"
- "docker.io/hauler/hauler-arm64:{{ .Version }}"
- id: hauler-ghcr
use: docker
name_template: "ghcr.io/hauler-dev/hauler:{{ .Version }}"
image_templates:
- "ghcr.io/hauler-dev/hauler-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-arm64:{{ .Version }}"
- id: hauler-debug-docker
use: docker
name_template: "docker.io/hauler/hauler-debug:{{ .Version }}"
image_templates:
- "docker.io/hauler/hauler-debug-amd64:{{ .Version }}"
- "docker.io/hauler/hauler-debug-arm64:{{ .Version }}"
- id: hauler-debug-ghcr
use: docker
name_template: "ghcr.io/hauler-dev/hauler-debug:{{ .Version }}"
image_templates:
- "ghcr.io/hauler-dev/hauler-debug-amd64:{{ .Version }}"
- "ghcr.io/hauler-dev/hauler-debug-arm64:{{ .Version }}"

Dockerfile

@@ -0,0 +1,42 @@
# builder stage
FROM registry.suse.com/bci/bci-base:15.5 AS builder
# fetched from the goreleaser build process
COPY hauler /hauler
RUN echo "hauler:x:1001:1001::/home/hauler:" > /etc/passwd \
&& echo "hauler:x:1001:hauler" > /etc/group \
&& mkdir /home/hauler \
&& mkdir /store \
&& mkdir /fileserver \
&& mkdir /registry
# release stage
FROM scratch AS release
COPY --from=builder /var/lib/ca-certificates/ca-bundle.pem /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
COPY --from=builder --chown=hauler:hauler /home/hauler/. /home/hauler
COPY --from=builder --chown=hauler:hauler /tmp/. /tmp
COPY --from=builder --chown=hauler:hauler /store/. /store
COPY --from=builder --chown=hauler:hauler /registry/. /registry
COPY --from=builder --chown=hauler:hauler /fileserver/. /fileserver
COPY --from=builder --chown=hauler:hauler /hauler /hauler
USER hauler
ENTRYPOINT [ "/hauler" ]
# debug stage
FROM alpine AS debug
COPY --from=builder /var/lib/ca-certificates/ca-bundle.pem /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
COPY --from=builder --chown=hauler:hauler /home/hauler/. /home/hauler
COPY --from=builder --chown=hauler:hauler /hauler /usr/local/bin/hauler
RUN apk --no-cache add curl
USER hauler
WORKDIR /home/hauler


@@ -1,39 +1,49 @@
SHELL:=/bin/bash
GO_FILES=$(shell go list ./... | grep -v /vendor/)
# Makefile for hauler
COSIGN_VERSION=v2.2.2+carbide.2
# set shell
SHELL=/bin/bash
.SILENT:
# set go variables
GO_FILES=./...
GO_COVERPROFILE=coverage.out
all: fmt vet install test
# set build variables
BIN_DIRECTORY=bin
DIST_DIRECTORY=dist
# local build of hauler for current platform
# references/configuration from .goreleaser.yaml
build:
rm -rf cmd/hauler/binaries;\
mkdir -p cmd/hauler/binaries;\
wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/$(COSIGN_VERSION)/cosign-$(shell go env GOOS)-$(shell go env GOARCH);\
mkdir bin;\
CGO_ENABLED=0 go build -o bin ./cmd/...;\
goreleaser build --clean --snapshot --timeout 60m --single-target
build-all: fmt vet
goreleaser build --rm-dist --snapshot
# local build of hauler for all platforms
# references/configuration from .goreleaser.yaml
build-all:
goreleaser build --clean --snapshot --timeout 60m
# local release of hauler for all platforms
# references/configuration from .goreleaser.yaml
release:
goreleaser release --clean --snapshot --timeout 60m
# install dependencies
install:
rm -rf cmd/hauler/binaries;\
mkdir -p cmd/hauler/binaries;\
wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/$(COSIGN_VERSION)/cosign-$(shell go env GOOS)-$(shell go env GOARCH);\
CGO_ENABLED=0 go install ./cmd/...;\
vet:
go vet $(GO_FILES)
go mod tidy
go mod download
CGO_ENABLED=0 go install ./cmd/...
# format go code
fmt:
go fmt $(GO_FILES)
# vet go code
vet:
go vet $(GO_FILES)
# test go code
test:
go test $(GO_FILES) -cover
integration_test:
go test -tags=integration $(GO_FILES)
go test $(GO_FILES) -cover -race -covermode=atomic -coverprofile=$(GO_COVERPROFILE)
# cleanup artifacts
clean:
rm -rf bin 2> /dev/null
rm -rf $(BIN_DIRECTORY) $(DIST_DIRECTORY) $(GO_COVERPROFILE)


@@ -1,31 +1,55 @@
# Rancher Government Hauler
![rancher-government-hauler-logo](/static/rgs-hauler-logo.png)
## Airgap Swiss Army Knife
> ⚠️ This project is still in active development and *not* Generally Available (GA). Most of the core functionality and features are ready, but may have breaking changes. Please review the [Release Notes](https://github.com/rancherfederal/hauler/releases) for more information!
`Rancher Government Hauler` simplifies the airgap experience without requiring operators to adopt a specific workflow. **Hauler** simplifies the airgapping process, by representing assets (images, charts, files, etc...) as content and collections to allow operators to easily fetch, store, package, and distribute these assets with declarative manifests or through the command line.
`Rancher Government Hauler` simplifies the airgap experience without requiring users to adopt a specific workflow. **Hauler** simplifies the airgapping process, by representing assets (images, charts, files, etc...) as content and collections to allow users to easily fetch, store, package, and distribute these assets with declarative manifests or through the command line.
`Hauler` does this by storing contents and collections as OCI Artifacts and allows operators to serve contents and collections with an embedded registry and fileserver. Additionally, `Hauler` has the ability to store and inspect various non-image OCI Artifacts.
`Hauler` does this by storing contents and collections as OCI Artifacts and allows users to serve contents and collections with an embedded registry and fileserver. Additionally, `Hauler` has the ability to store and inspect various non-image OCI Artifacts.
For more information, please review the **[Hauler Documentation](https://hauler.dev)!**
For more information, please review the **[Hauler Documentation](https://rancherfederal.github.io/hauler-docs)!**
## Recent Changes
### In Hauler v1.2.0...
- Upgraded the `apiVersion` to `v1` from `v1alpha1`
- Users are able to use `v1` and `v1alpha1`, but `v1alpha1` is now deprecated and will be removed in a future release. We will update the community when we fully deprecate and remove the functionality of `v1alpha1` (an illustrative manifest using the new `apiVersion` is shown below)
- Users will see logging notices when using the old `apiVersion` such as...
- `!!! DEPRECATION WARNING !!! apiVersion [v1alpha1] will be removed in a future release !!! DEPRECATION WARNING !!!`
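For reference, a minimal sketch of a `hauler-manifest.yaml` using the new `apiVersion` (the `content.hauler.cattle.io` group prefix, the `Images` kind, and the image name shown here are illustrative assumptions, not taken from this changeset):
```yaml
# illustrative example only -- assumes the Images content kind
apiVersion: content.hauler.cattle.io/v1
kind: Images
metadata:
  name: example-images
spec:
  images:
    - name: busybox:stable
```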
---
- Updated the behavior of `hauler store load` to default to loading a `haul` named `haul.tar.zst`; the `--filename/-f` flag is now required to load a `haul` with a different name
- Users can load multiple `hauls` by specifying the `--filename/-f` flag multiple times (see the example below)
- updated command usage: `hauler store load --filename hauling-hauls.tar.zst`
- previous command usage (do not use): `hauler store load hauling-hauls.tar.zst`
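A minimal sketch of loading more than one `haul` in a single invocation (the filenames are illustrative):
```bash
# each --filename/-f flag adds another haul to load
hauler store load --filename haul-one.tar.zst --filename haul-two.tar.zst
```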
---
- Updated the behavior of `hauler store sync` to default to syncing a `manifest` named `hauler-manifest.yaml`; the `--filename/-f` flag is now required to sync a `manifest` with a different name
- Users can sync multiple `manifests` by specifying the `--filename/-f` flag multiple times (see the example below)
- updated command usage: `hauler store sync --filename hauling-hauls-manifest.yaml`
- previous command usage (do not use): `hauler store sync --files hauling-hauls-manifest.yaml`
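A minimal sketch of syncing more than one `manifest` in a single invocation (the filenames are illustrative):
```bash
# each --filename/-f flag adds another manifest to sync
hauler store sync --filename hauler-manifest.yaml --filename extra-manifest.yaml
```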
---
Please review the documentation for any additional [Known Limits, Issues, and Notices](https://docs.hauler.dev/docs/known-limits)!
## Installation
### Linux/Darwin
```bash
# installs latest release
curl -sfL https://get.hauler.dev | bash
```
### Homebrew
```bash
# installs latest release
brew tap rancherfederal/homebrew-tap
brew tap hauler-dev/homebrew-tap
brew install hauler
```
### Windows
```bash
# coming soon
```
@@ -33,11 +57,7 @@ brew install hauler
## Acknowledgements
`Hauler` wouldn't be possible without the open-source community, but there are a few projects that stand out:
* [go-containerregistry](https://github.com/google/go-containerregistry)
* [oras cli](https://github.com/oras-project/oras)
* [cosign](https://github.com/sigstore/cosign)
## Notices
**WARNING - Upcoming Deprecated Command(s):**
`hauler download` (alternatively, `dl`) and `hauler serve` (_not_ `hauler store serve`) commands are deprecated and will be removed in a future release.
- [oras cli](https://github.com/oras-project/oras)
- [cosign](https://github.com/sigstore/cosign)
- [go-containerregistry](https://github.com/google/go-containerregistry)


@@ -1,50 +0,0 @@
# Hauler Roadmap
## \> v0.2.0
- Leverage `referrers` api to robustly link content/collection
- Support signing for all `artifact.OCI` contents
- Support encryption for `artifact.OCI` layers
- Support incremental updates to stores (some implementation of layer diffing)
- Safely embed container runtime for user created `collections` creation and transformation
- Better defaults/configuration/security around for long-lived embedded registry
- Better support multi-platform content
- Better leverage `oras` (`>=0.5.0`) for content relocation
- Store git repos as CAS in OCI format
## v0.2.0 - MVP 2
- Re-focus on cli and framework for oci content fetching and delivery
- Focus on initial key contents
- Files (local/remote)
- Charts (local/remote)
- Images
- Establish framework for `content` and `collections`
- Define initial `content` types (`file`, `chart`, `image`)
- Define initial `collection` types (`thickchart`, `k3s`)
- Define framework for manipulating OCI content (`artifact.OCI`, `artifact.Collection`)
## v0.1.0 - MVP 1
- Install single-node k3s cluster
- Support tarball and rpm installation methods
- Target narrow set of known Operating Systems to have OS-specific code if needed
- Serve container images
- Collect images from image list file
- Collect images from image archives
- Deploy docker registry
- Populate registry with all images
- Serve git repositories
- Collect repos
- Deploy git server (Caddy? NGINX?)
- Populate git server with repos
- Serve files
- Collect files from directory, including subdirectories
- Deploy caddy file server
- Populate file server with directory contents
- NOTE: "generic" option - most other use cases can be satisfied by a specially crafted file
server directory
## v0.0.x
- Install single-node k3s cluster into an Ubuntu machine using the tarball installation method


@@ -1,49 +0,0 @@
## Hauler Vagrant machine
A Vagrantfile is provided to allow easy provisioning of a local air-gapped CentOS environment. Some artifacts need to be collected from the internet; below are the steps required for successfully provisioning this machine, downloading all dependencies, and installing k3s (without hauler) into this machine.
### First-time setup
1. Install vagrant, if needed: <https://www.vagrantup.com/downloads>
2. Install `vagrant-vbguest` plugin, as noted in the Vagrantfile:
```shell
vagrant plugin install vagrant-vbguest
```
3. Deploy Vagrant machine, disabling SELinux:
```shell
SELINUX=Disabled vagrant up
```
4. Access the Vagrant machine via SSH:
```shell
vagrant ssh
```
5. Run all prep scripts inside of the Vagrant machine:
> This script temporarily enables internet access from within the VM to allow downloading all dependencies. Even so, the air-gapped network configuration IS restored before completion.
```shell
sudo /opt/hauler/vagrant-scripts/prep-all.sh
```
All dependencies for all `vagrant-scripts/*-install.sh` scripts are now downloaded to the local
repository under `local-artifacts`.
### Installing k3s manually
1. Access the Vagrant machine via SSH:
```bash
vagrant ssh
```
2. Run the k3s install script inside of the Vagrant machine:
```shell
sudo /opt/hauler/vagrant-scripts/k3s-install.sh
```
### Installing RKE2 manually
1. Access the Vagrant machine via SSH:
```shell
vagrant ssh
```
2. Run the RKE2 install script inside of the Vagrant machine:
```shell
sudo /opt/hauler/vagrant-scripts/rke2-install.sh
```

Vagrantfile

@@ -1,65 +0,0 @@
##################################
# The vagrant-vbguest plugin is required for CentOS 7.
# Run the following command to install/update this plugin:
# vagrant plugin install vagrant-vbguest
##################################
Vagrant.configure("2") do |config|
config.vm.box = "centos/8"
config.vm.hostname = "airgap"
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder ".", "/vagrant"
config.vm.provider "virtualbox" do |vb|
vb.memory = "2048"
vb.cpus = "2"
config.vm.provision "airgap", type: "shell", run: "always",
inline: "/vagrant/vagrant-scripts/airgap.sh airgap"
end
# SELinux is Enforcing by default.
# To set SELinux as Disabled on a VM that has already been provisioned:
# SELINUX=Disabled vagrant up --provision-with=selinux
# To set SELinux as Permissive on a VM that has already been provisioned:
# SELINUX=Permissive vagrant up --provision-with=selinux
config.vm.provision "selinux", type: "shell", run: "once" do |sh|
sh.upload_path = "/tmp/vagrant-selinux"
sh.env = {
'SELINUX': ENV['SELINUX'] || "Enforcing"
}
sh.inline = <<~SHELL
#!/usr/bin/env bash
set -eux -o pipefail
if ! type -p getenforce setenforce &>/dev/null; then
echo SELinux is Disabled
exit 0
fi
case "${SELINUX}" in
Disabled)
if mountpoint -q /sys/fs/selinux; then
setenforce 0
umount -v /sys/fs/selinux
fi
;;
Enforcing)
mountpoint -q /sys/fs/selinux || mount -o rw,relatime -t selinuxfs selinuxfs /sys/fs/selinux
setenforce 1
;;
Permissive)
mountpoint -q /sys/fs/selinux || mount -o rw,relatime -t selinuxfs selinuxfs /sys/fs/selinux
setenforce 0
;;
*)
echo "SELinux mode not supported: ${SELINUX}" >&2
exit 1
;;
esac
echo SELinux is $(getenforce)
SHELL
end
end


@@ -0,0 +1,6 @@
//go:build boringcrypto
// +build boringcrypto
package main
import _ "crypto/tls/fipsonly"
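// note: importing crypto/tls/fipsonly restricts TLS to FIPS-approved settings; this file is only compiled when built with GOEXPERIMENT=boringcrypto (see the build tag above)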


@@ -1,25 +1,25 @@
package cli
import (
"github.com/spf13/cobra"
"context"
"github.com/rancherfederal/hauler/pkg/log"
cranecmd "github.com/google/go-containerregistry/cmd/crane/cmd"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
)
type rootOpts struct {
logLevel string
}
var ro = &rootOpts{}
func New() *cobra.Command {
func New(ctx context.Context, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "hauler",
Short: "Airgap Swiss Army Knife",
Use: "hauler",
Short: "Airgap Swiss Army Knife",
Example: " View the Docs: https://docs.hauler.dev\n Environment Variables: " + consts.HaulerDir + " | " + consts.HaulerTempDir + " | " + consts.HaulerStoreDir + " | " + consts.HaulerIgnoreErrors,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
l := log.FromContext(cmd.Context())
l.SetLevel(ro.logLevel)
l := log.FromContext(ctx)
l.SetLevel(ro.LogLevel)
l.Debugf("running cli command [%s]", cmd.CommandPath())
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
@@ -27,15 +27,12 @@ func New() *cobra.Command {
},
}
pf := cmd.PersistentFlags()
pf.StringVarP(&ro.logLevel, "log-level", "l", "info", "")
flags.AddRootFlags(cmd, ro)
// Add subcommands
addDownload(cmd)
addStore(cmd)
addServe(cmd)
addVersion(cmd)
addCompletion(cmd)
cmd.AddCommand(cranecmd.NewCmdAuthLogin("hauler"))
addStore(cmd, ro)
addVersion(cmd, ro)
addCompletion(cmd, ro)
return cmd
}


@@ -3,54 +3,50 @@ package cli
import (
"fmt"
"os"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/internal/flags"
)
func addCompletion(parent *cobra.Command) {
func addCompletion(parent *cobra.Command, ro *flags.CliRootOpts) {
cmd := &cobra.Command{
Use: "completion",
Short: "Generates completion scripts for various shells",
Long: `The completion sub-command generates completion scripts for various shells.`,
Use: "completion",
Short: "Generate auto-completion scripts for various shells",
}
cmd.AddCommand(
addCompletionZsh(),
addCompletionBash(),
addCompletionFish(),
addCompletionPowershell(),
addCompletionZsh(ro),
addCompletionBash(ro),
addCompletionFish(ro),
addCompletionPowershell(ro),
)
parent.AddCommand(cmd)
}
func completionError(err error) ([]string, cobra.ShellCompDirective) {
cobra.CompError(err.Error())
return nil, cobra.ShellCompDirectiveError
}
func addCompletionZsh() *cobra.Command {
func addCompletionZsh(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "zsh",
Short: "Generates zsh completion scripts",
Long: `The completion sub-command generates completion scripts for zsh.`,
Short: "Generates auto-completion scripts for zsh",
Example: `To load completion run
. <(hauler completion zsh)
To configure your zsh shell to load completions for each session add to your zshrc
# ~/.zshrc or ~/.profile
command -v hauler >/dev/null && . <(hauler completion zsh)
or write a cached file in one of the completion directories in your ${fpath}:
echo "${fpath// /\n}" | grep -i completion
hauler completion zsh > _hauler
mv _hauler ~/.oh-my-zsh/completions # oh-my-zsh
mv _hauler ~/.zprezto/modules/completion/external/src/ # zprezto`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenZshCompletion(os.Stdout)
cmd.Root().GenZshCompletion(os.Stdout)
// Cobra doesn't source zsh completion file, explicitly doing it here
fmt.Println("compdef _hauler hauler")
},
@@ -58,66 +54,63 @@ func addCompletionZsh() *cobra.Command {
return cmd
}
func addCompletionBash() *cobra.Command {
func addCompletionBash(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "bash",
Short: "Generates bash completion scripts",
Long: `The completion sub-command generates completion scripts for bash.`,
Short: "Generates auto-completion scripts for bash",
Example: `To load completion run
. <(hauler completion bash)
To configure your bash shell to load completions for each session add to your bashrc
# ~/.bashrc or ~/.profile
command -v hauler >/dev/null && . <(hauler completion bash)`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenBashCompletion(os.Stdout)
cmd.Root().GenBashCompletion(os.Stdout)
},
}
return cmd
}
func addCompletionFish() *cobra.Command {
func addCompletionFish(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "fish",
Short: "Generates fish completion scripts",
Long: `The completion sub-command generates completion scripts for fish.`,
Short: "Generates auto-completion scripts for fish",
Example: `To configure your fish shell to load completions for each session write this script to your completions dir:
hauler completion fish > ~/.config/fish/completions/hauler.fish
See http://fishshell.com/docs/current/index.html#completion-own for more details`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenFishCompletion(os.Stdout, true)
cmd.Root().GenFishCompletion(os.Stdout, true)
},
}
return cmd
}
func addCompletionPowershell() *cobra.Command {
func addCompletionPowershell(ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "powershell",
Short: "Generates powershell completion scripts",
Long: `The completion sub-command generates completion scripts for powershell.`,
Short: "Generates auto-completion scripts for powershell",
Example: `To load completion run
. <(hauler completion powershell)
To configure your powershell shell to load completions for each session add to your powershell profile
Windows:
cd "$env:USERPROFILE\Documents\WindowsPowerShell\Modules"
hauler completion powershell >> hauler-completion.ps1
Linux:
cd "${XDG_CONFIG_HOME:-"$HOME/.config/"}/powershell/modules"
hauler completion powershell >> hauler-completions.ps1`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenPowerShellCompletion(os.Stdout)
cmd.Root().GenPowerShellCompletion(os.Stdout)
},
}
return cmd
}
}


@@ -1,45 +0,0 @@
package cli
import (
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/cmd/hauler/cli/download"
)
func addDownload(parent *cobra.Command) {
o := &download.Opts{}
cmd := &cobra.Command{
Use: "download",
Short: "Download OCI content from a registry and populate it on disk",
Long: `*** WARNING: Deprecated Command ***
The 'download (dl)' command is deprecated and will be removed in a future release of Hauler.
Locate OCI content based on its reference in a compatible registry and download the contents to disk.
Note that the content type determines its format on disk. Hauler's built-in content types act as follows:
- File: as a file named after the pushed contents source name (ex: my-file.yaml:latest --> my-file.yaml)
- Image: as a .tar named after the image (ex: alpine:latest --> alpine:latest.tar)
- Chart: as a .tar.gz named after the chart (ex: loki:2.0.2 --> loki-2.0.2.tar.gz)`,
Example: `
# Download a file
hauler dl localhost:5000/my-file.yaml:latest
# Download an image
hauler dl localhost:5000/rancher/k3s:v1.22.2-k3s2
# Download a chart
hauler dl localhost:5000/hauler/longhorn:1.2.0`,
Aliases: []string{"dl"},
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, arg []string) error {
ctx := cmd.Context()
return download.Cmd(ctx, o, arg[0])
},
}
o.AddArgs(cmd)
parent.AddCommand(cmd)
}


@@ -1,87 +0,0 @@
package download
import (
"context"
"encoding/json"
"github.com/google/go-containerregistry/pkg/authn"
"github.com/google/go-containerregistry/pkg/v1/remote"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"oras.land/oras-go/pkg/content"
"oras.land/oras-go/pkg/oras"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/internal/mapper"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
)
type Opts struct {
DestinationDir string
Username string
Password string
Insecure bool
PlainHTTP bool
}
func (o *Opts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.DestinationDir, "output", "o", "", "Directory to save contents to (defaults to current directory)")
f.StringVarP(&o.Username, "username", "u", "", "Username when copying to an authenticated remote registry")
f.StringVarP(&o.Password, "password", "p", "", "Password when copying to an authenticated remote registry")
f.BoolVar(&o.Insecure, "insecure", false, "Toggle allowing insecure connections when copying to a remote registry")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "Toggle allowing plain http connections when copying to a remote registry")
}
func Cmd(ctx context.Context, o *Opts, ref string) error {
l := log.FromContext(ctx)
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
Insecure: o.Insecure,
PlainHTTP: o.PlainHTTP,
}
rs, err := content.NewRegistry(ropts)
if err != nil {
return err
}
r, err := reference.Parse(ref)
if err != nil {
return err
}
desc, err := remote.Get(r, remote.WithAuthFromKeychain(authn.DefaultKeychain), remote.WithContext(ctx))
if err != nil {
return err
}
manifestData, err := desc.RawManifest()
if err != nil {
return err
}
var manifest ocispec.Manifest
if err := json.Unmarshal(manifestData, &manifest); err != nil {
return err
}
mapperStore, err := mapper.FromManifest(manifest, o.DestinationDir)
if err != nil {
return err
}
pushedDesc, err := oras.Copy(ctx, rs, r.Name(), mapperStore, "",
oras.WithAdditionalCachedMediaTypes(consts.DockerManifestSchema2))
if err != nil {
return err
}
l.Infof("downloaded [%s] with digest [%s]", pushedDesc.MediaType, pushedDesc.Digest.String())
return nil
}


@@ -1,57 +0,0 @@
package cli
import (
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/cmd/hauler/cli/serve"
)
func addServe(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "serve",
Short: "Run one or more of hauler's embedded servers types",
Long: `*** WARNING: Deprecated Command ***
The 'serve' command is deprecated and will be removed in a future release of Hauler.`,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Help()
},
}
cmd.AddCommand(
addServeFiles(),
addServeRegistry(),
)
parent.AddCommand(cmd)
}
func addServeFiles() *cobra.Command {
o := &serve.FilesOpts{}
cmd := &cobra.Command{
Use: "files",
Short: "Start a fileserver",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
return serve.FilesCmd(ctx, o)
},
}
o.AddFlags(cmd)
return cmd
}
func addServeRegistry() *cobra.Command {
o := &serve.RegistryOpts{}
cmd := &cobra.Command{
Use: "registry",
Short: "Start a registry",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
return serve.RegistryCmd(ctx, o)
},
}
o.AddFlags(cmd)
return cmd
}


@@ -1,37 +0,0 @@
package serve
import (
"context"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/internal/server"
)
type FilesOpts struct {
Root string
Port int
}
func (o *FilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Root, "root", "r", ".", "Path to root of the directory to serve")
f.IntVarP(&o.Port, "port", "p", 8080, "Port to listen on")
}
func FilesCmd(ctx context.Context, o *FilesOpts) error {
cfg := server.FileConfig{
Root: o.Root,
Port: o.Port,
}
s, err := server.NewFile(ctx, cfg)
if err != nil {
return err
}
if err := s.ListenAndServe(); err != nil {
return err
}
return nil
}


@@ -1,81 +0,0 @@
package serve
import (
"context"
"fmt"
"net/http"
"os"
"github.com/distribution/distribution/v3/configuration"
dcontext "github.com/distribution/distribution/v3/context"
"github.com/distribution/distribution/v3/version"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/internal/server"
)
type RegistryOpts struct {
Root string
Port int
ConfigFile string
}
func (o *RegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Root, "root", "r", ".", "Path to root of the directory to serve")
f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on")
f.StringVarP(&o.ConfigFile, "config", "c", "", "Path to a config file, will override all other configs")
}
func RegistryCmd(ctx context.Context, o *RegistryOpts) error {
ctx = dcontext.WithVersion(ctx, version.Version)
cfg := o.defaultConfig()
if o.ConfigFile != "" {
ucfg, err := loadConfig(o.ConfigFile)
if err != nil {
return err
}
cfg = ucfg
}
s, err := server.NewRegistry(ctx, cfg)
if err != nil {
return err
}
if err := s.ListenAndServe(); err != nil {
return err
}
return nil
}
func loadConfig(filename string) (*configuration.Configuration, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
return configuration.Parse(f)
}
func (o *RegistryOpts) defaultConfig() *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.Root},
// TODO: Ensure this is toggleable via cli arg if necessary
// "maintenance": configuration.Parameters{"readonly.enabled": false},
},
}
cfg.Log.Level = "info"
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
"Accept": []string{"application/vnd.dsse.envelope.v1+json, application/json"},
}
return cfg
}


@@ -1,48 +1,48 @@
package cli
import (
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"fmt"
"github.com/rancherfederal/hauler/cmd/hauler/cli/store"
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"hauler.dev/go/hauler/cmd/hauler/cli/store"
"hauler.dev/go/hauler/internal/flags"
)
var rootStoreOpts = &store.RootOpts{}
func addStore(parent *cobra.Command, ro *flags.CliRootOpts) {
rso := &flags.StoreRootOpts{}
func addStore(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "store",
Aliases: []string{"s"},
Short: "Interact with hauler's embedded content store",
Short: "Interact with the content store",
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Help()
},
}
rootStoreOpts.AddArgs(cmd)
rso.AddFlags(cmd)
cmd.AddCommand(
addStoreSync(),
addStoreExtract(),
addStoreLoad(),
addStoreSave(),
addStoreServe(),
addStoreInfo(),
addStoreCopy(),
// TODO: Remove this in favor of sync?
addStoreAdd(),
addStoreSync(rso, ro),
addStoreExtract(rso, ro),
addStoreLoad(rso, ro),
addStoreSave(rso, ro),
addStoreServe(rso, ro),
addStoreInfo(rso, ro),
addStoreCopy(rso, ro),
addStoreAdd(rso, ro),
)
parent.AddCommand(cmd)
}
func addStoreExtract() *cobra.Command {
o := &store.ExtractOpts{RootOpts: rootStoreOpts}
func addStoreExtract(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ExtractOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "extract",
Short: "Extract content from the store to disk",
Short: "Extract artifacts from the content store to disk",
Aliases: []string{"x"},
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
@@ -56,17 +56,28 @@ func addStoreExtract() *cobra.Command {
return store.ExtractCmd(ctx, o, s, args[0])
},
}
o.AddArgs(cmd)
o.AddFlags(cmd)
return cmd
}
func addStoreSync() *cobra.Command {
o := &store.SyncOpts{RootOpts: rootStoreOpts}
func addStoreSync(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.SyncOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "sync",
Short: "Sync content to the embedded content store",
Short: "Sync content to the content store",
Args: cobra.ExactArgs(0),
PreRunE: func(cmd *cobra.Command, args []string) error {
// Check if the products flag was passed
if len(o.Products) > 0 {
// Only clear the default if the user did NOT explicitly set --filename
if !cmd.Flags().Changed("filename") {
o.FileName = []string{}
}
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -75,7 +86,7 @@ func addStoreSync() *cobra.Command {
return err
}
return store.SyncCmd(ctx, o, s)
return store.SyncCmd(ctx, o, s, rso, ro)
},
}
o.AddFlags(cmd)
@@ -83,13 +94,13 @@ func addStoreSync() *cobra.Command {
return cmd
}
func addStoreLoad() *cobra.Command {
o := &store.LoadOpts{RootOpts: rootStoreOpts}
func addStoreLoad(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.LoadOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "load",
Short: "Load a content store from a store archive",
Args: cobra.MinimumNArgs(1),
Args: cobra.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -99,7 +110,7 @@ func addStoreLoad() *cobra.Command {
}
_ = s
return store.LoadCmd(ctx, o, args...)
return store.LoadCmd(ctx, o, rso, ro)
},
}
o.AddFlags(cmd)
@@ -107,29 +118,29 @@ func addStoreLoad() *cobra.Command {
return cmd
}
func addStoreServe() *cobra.Command {
func addStoreServe(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "serve",
Short: "Expose the content of a local store through an OCI compliant registry or file server",
Short: "Serve the content store via an OCI Compliant Registry or Fileserver",
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Help()
},
}
cmd.AddCommand(
addStoreServeRegistry(),
addStoreServeFiles(),
addStoreServeRegistry(rso, ro),
addStoreServeFiles(rso, ro),
)
return cmd
}
// RegistryCmd serves the embedded registry
func addStoreServeRegistry() *cobra.Command {
o := &store.ServeRegistryOpts{RootOpts: rootStoreOpts}
func addStoreServeRegistry(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ServeRegistryOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "registry",
Short: "Serve the embedded registry",
RunE: func(cmd *cobra.Command, args []string) error {
Use: "registry",
Short: "Serve the OCI Compliant Registry",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := o.Store(ctx)
@@ -137,22 +148,22 @@ func addStoreServeRegistry() *cobra.Command {
return err
}
return store.ServeRegistryCmd(ctx, o, s)
},
}
return store.ServeRegistryCmd(ctx, o, s, rso, ro)
},
}
o.AddFlags(cmd)
o.AddFlags(cmd)
return cmd
return cmd
}
// FileServerCmd serves the file server
func addStoreServeFiles() *cobra.Command {
o := &store.ServeFilesOpts{RootOpts: rootStoreOpts}
func addStoreServeFiles(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.ServeFilesOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "fileserver",
Short: "Serve the file server",
RunE: func(cmd *cobra.Command, args []string) error {
Use: "fileserver",
Short: "Serve the Fileserver",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := o.Store(ctx)
@@ -160,17 +171,17 @@ func addStoreServeFiles() *cobra.Command {
return err
}
return store.ServeFilesCmd(ctx, o, s)
},
}
return store.ServeFilesCmd(ctx, o, s, ro)
},
}
o.AddFlags(cmd)
o.AddFlags(cmd)
return cmd
return cmd
}
func addStoreSave() *cobra.Command {
o := &store.SaveOpts{RootOpts: rootStoreOpts}
func addStoreSave(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.SaveOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "save",
@@ -185,18 +196,18 @@ func addStoreSave() *cobra.Command {
}
_ = s
return store.SaveCmd(ctx, o, o.FileName)
return store.SaveCmd(ctx, o, rso, ro)
},
}
o.AddArgs(cmd)
o.AddFlags(cmd)
return cmd
}
func addStoreInfo() *cobra.Command {
o := &store.InfoOpts{RootOpts: rootStoreOpts}
func addStoreInfo(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.InfoOpts{StoreRootOpts: rso}
var allowedValues = []string{"image", "chart", "file", "all"}
var allowedValues = []string{"image", "chart", "file", "sigs", "atts", "sbom", "all"}
cmd := &cobra.Command{
Use: "info",
@@ -210,7 +221,7 @@ func addStoreInfo() *cobra.Command {
if err != nil {
return err
}
for _, allowed := range allowedValues {
if o.TypeFilter == allowed {
return store.InfoCmd(ctx, o, s)
@@ -224,12 +235,12 @@ func addStoreInfo() *cobra.Command {
return cmd
}
func addStoreCopy() *cobra.Command {
o := &store.CopyOpts{RootOpts: rootStoreOpts}
func addStoreCopy(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.CopyOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "copy",
Short: "Copy all store contents to another OCI registry",
Short: "Copy all store content to another location",
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -239,7 +250,7 @@ func addStoreCopy() *cobra.Command {
return err
}
return store.CopyCmd(ctx, o, s, args[0])
return store.CopyCmd(ctx, o, s, args[0], ro)
},
}
o.AddFlags(cmd)
@@ -247,31 +258,39 @@ func addStoreCopy() *cobra.Command {
return cmd
}
func addStoreAdd() *cobra.Command {
func addStoreAdd(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
cmd := &cobra.Command{
Use: "add",
Short: "Add content to store",
Short: "Add content to the store",
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Help()
},
}
cmd.AddCommand(
addStoreAddFile(),
addStoreAddImage(),
addStoreAddChart(),
addStoreAddFile(rso, ro),
addStoreAddImage(rso, ro),
addStoreAddChart(rso, ro),
)
return cmd
}
func addStoreAddFile() *cobra.Command {
o := &store.AddFileOpts{RootOpts: rootStoreOpts}
func addStoreAddFile(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddFileOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "file",
Short: "Add a file to the content store",
Args: cobra.ExactArgs(1),
Short: "Add a file to the store",
Example: `# fetch local file
hauler store add file file.txt
# fetch remote file
hauler store add file https://get.rke2.io/install.sh
# fetch remote file and assign new name
hauler store add file https://get.hauler.dev --name hauler-install.sh`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -288,13 +307,28 @@ func addStoreAddFile() *cobra.Command {
return cmd
}
func addStoreAddImage() *cobra.Command {
o := &store.AddImageOpts{RootOpts: rootStoreOpts}
func addStoreAddImage(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddImageOpts{StoreRootOpts: rso}
cmd := &cobra.Command{
Use: "image",
Short: "Add an image to the content store",
Args: cobra.ExactArgs(1),
Short: "Add a image to the store",
Example: `# fetch image
hauler store add image busybox
# fetch image with repository and tag
hauler store add image library/busybox:stable
# fetch image with full image reference and specific platform
hauler store add image ghcr.io/hauler-dev/hauler-debug:v1.2.0 --platform linux/amd64
# fetch image with full image reference via digest
hauler store add image gcr.io/distroless/base@sha256:7fa7445dfbebae4f4b7ab0e6ef99276e96075ae42584af6286ba080750d6dfe5
# fetch image with full image reference, specific platform, and signature verification
curl -sfOL https://raw.githubusercontent.com/rancherfederal/carbide-releases/main/carbide-key.pub
hauler store add image rgcrprod.azurecr.us/rancher/rke2-runtime:v1.31.5-rke2r1 --platform linux/amd64 --key carbide-key.pub`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@@ -303,7 +337,7 @@ func addStoreAddImage() *cobra.Command {
return err
}
return store.AddImageCmd(ctx, o, s, args[0])
return store.AddImageCmd(ctx, o, s, args[0], rso, ro)
},
}
o.AddFlags(cmd)
@@ -311,28 +345,29 @@ func addStoreAddImage() *cobra.Command {
return cmd
}
func addStoreAddChart() *cobra.Command {
o := &store.AddChartOpts{
RootOpts: rootStoreOpts,
ChartOpts: &action.ChartPathOptions{},
}
func addStoreAddChart(rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *cobra.Command {
o := &flags.AddChartOpts{StoreRootOpts: rso, ChartOpts: &action.ChartPathOptions{}}
cmd := &cobra.Command{
Use: "chart",
Short: "Add a local or remote chart to the content store",
Example: `
# add a local chart
hauler store add chart path/to/chart/directory
Short: "Add a helm chart to the store",
Example: `# fetch local helm chart
hauler store add chart path/to/chart/directory --repo .
# add a local compressed chart
hauler store add chart path/to/chart.tar.gz
# fetch local compressed helm chart
hauler store add chart path/to/chart.tar.gz --repo .
# add a remote chart
hauler store add chart longhorn --repo "https://charts.longhorn.io"
# fetch remote oci helm chart
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev
# add a specific version of a chart
hauler store add chart rancher --repo "https://releases.rancher.com/server-charts/latest" --version "2.6.2"
`,
# fetch remote oci helm chart with version
hauler store add chart hauler-helm --repo oci://ghcr.io/hauler-dev --version 1.2.0
# fetch remote helm chart
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/stable
# fetch remote helm chart with specific version
hauler store add chart rancher --repo https://releases.rancher.com/server-charts/latest --version 2.10.1`,
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()


@@ -2,42 +2,34 @@ package store
import (
"context"
"os"
"github.com/google/go-containerregistry/pkg/name"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/chart"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
"hauler.dev/go/hauler/internal/flags"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
"hauler.dev/go/hauler/pkg/artifacts/file"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content/chart"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
type AddFileOpts struct {
*RootOpts
Name string
}
func (o *AddFileOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Name, "name", "n", "", "(Optional) Name to assign to file in store")
}
func AddFileCmd(ctx context.Context, o *AddFileOpts, s *store.Layout, reference string) error {
cfg := v1alpha1.File{
func AddFileCmd(ctx context.Context, o *flags.AddFileOpts, s *store.Layout, reference string) error {
cfg := v1.File{
Path: reference,
}
if len(o.Name) > 0 {
cfg.Name = o.Name
}
return storeFile(ctx, s, cfg)
}
func storeFile(ctx context.Context, s *store.Layout, fi v1alpha1.File) error {
func storeFile(ctx context.Context, s *store.Layout, fi v1.File) error {
l := log.FromContext(ctx)
copts := getter.ClientOptions{
@@ -45,92 +37,90 @@ func storeFile(ctx context.Context, s *store.Layout, fi v1alpha1.File) error {
}
f := file.NewFile(fi.Path, file.WithClient(getter.NewClient(copts)))
ref, err := reference.NewTagged(f.Name(fi.Path), reference.DefaultTag)
ref, err := reference.NewTagged(f.Name(fi.Path), consts.DefaultTag)
if err != nil {
return err
}
desc, err := s.AddOCI(ctx, f, ref.Name())
l.Infof("adding file [%s] to the store as [%s]", fi.Path, ref.Name())
_, err = s.AddOCI(ctx, f, ref.Name())
if err != nil {
return err
}
l.Infof("added 'file' to store at [%s], with digest [%s]", ref.Name(), desc.Digest.String())
l.Infof("successfully added file [%s]", ref.Name())
return nil
}
type AddImageOpts struct {
*RootOpts
Name string
Key string
Platform string
}
func (o *AddImageOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Path to the key for digital signature verification")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specific platform to save. i.e. linux/amd64. Defaults to all if flag is omitted.")
}
func AddImageCmd(ctx context.Context, o *AddImageOpts, s *store.Layout, reference string) error {
func AddImageCmd(ctx context.Context, o *flags.AddImageOpts, s *store.Layout, reference string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
cfg := v1alpha1.Image{
cfg := v1.Image{
Name: reference,
}
// Check if the user provided a key.
if o.Key != "" {
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, o.Key, cfg.Name)
err := cosign.VerifySignature(ctx, s, o.Key, o.Tlog, cfg.Name, rso, ro)
if err != nil {
return err
}
l.Infof("signature verified for image [%s]", cfg.Name)
} else if o.CertIdentityRegexp != "" || o.CertIdentity != "" {
// verify signature using the provided keyless details
l.Infof("verifying keyless signature for [%s]", cfg.Name)
err := cosign.VerifyKeylessSignature(ctx, s, o.CertIdentity, o.CertIdentityRegexp, o.CertOidcIssuer, o.CertOidcIssuerRegexp, o.CertGithubWorkflowRepository, o.Tlog, cfg.Name, rso, ro)
if err != nil {
return err
}
l.Infof("keyless signature verified for image [%s]", cfg.Name)
}
return storeImage(ctx, s, cfg, o.Platform)
return storeImage(ctx, s, cfg, o.Platform, rso, ro)
}
func storeImage(ctx context.Context, s *store.Layout, i v1alpha1.Image, platform string) error {
func storeImage(ctx context.Context, s *store.Layout, i v1.Image, platform string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
l.Infof("adding image [%s] to the store", i.Name)
r, err := name.ParseReference(i.Name)
if err != nil {
return err
if ro.IgnoreErrors {
l.Warnf("unable to parse image [%s]: %v... skipping...", i.Name, err)
return nil
} else {
l.Errorf("unable to parse image [%s]: %v", i.Name, err)
return err
}
}
err = cosign.SaveImage(ctx, s, r.Name(), platform)
err = cosign.SaveImage(ctx, s, r.Name(), platform, rso, ro)
if err != nil {
return err
if ro.IgnoreErrors {
l.Warnf("unable to add image [%s] to store: %v... skipping...", r.Name(), err)
return nil
} else {
l.Errorf("unable to add image [%s] to store: %v", r.Name(), err)
return err
}
}
l.Infof("added 'image' to store at [%s]", r.Name())
l.Infof("successfully added image [%s]", r.Name())
return nil
}
type AddChartOpts struct {
*RootOpts
ChartOpts *action.ChartPathOptions
}
func (o *AddChartOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVar(&o.ChartOpts.RepoURL, "repo", "", "chart repository url where to locate the requested chart")
f.StringVar(&o.ChartOpts.Version, "version", "", "specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used")
f.BoolVar(&o.ChartOpts.Verify, "verify", false, "verify the package before using it")
f.StringVar(&o.ChartOpts.Username, "username", "", "chart repository username where to locate the requested chart")
f.StringVar(&o.ChartOpts.Password, "password", "", "chart repository password where to locate the requested chart")
f.StringVar(&o.ChartOpts.CertFile, "cert-file", "", "identify HTTPS client using this SSL certificate file")
f.StringVar(&o.ChartOpts.KeyFile, "key-file", "", "identify HTTPS client using this SSL key file")
f.BoolVar(&o.ChartOpts.InsecureSkipTLSverify, "insecure-skip-tls-verify", false, "skip tls certificate checks for the chart download")
f.StringVar(&o.ChartOpts.CaFile, "ca-file", "", "verify certificates of HTTPS-enabled servers using this CA bundle")
}
func AddChartCmd(ctx context.Context, o *AddChartOpts, s *store.Layout, chartName string) error {
// TODO: Reduce duplicates between api chart and upstream helm opts
cfg := v1alpha1.Chart{
func AddChartCmd(ctx context.Context, o *flags.AddChartOpts, s *store.Layout, chartName string) error {
cfg := v1.Chart{
Name: chartName,
RepoURL: o.ChartOpts.RepoURL,
Version: o.ChartOpts.Version,
@@ -139,9 +129,11 @@ func AddChartCmd(ctx context.Context, o *AddChartOpts, s *store.Layout, chartNam
return storeChart(ctx, s, cfg, o.ChartOpts)
}
func storeChart(ctx context.Context, s *store.Layout, cfg v1alpha1.Chart, opts *action.ChartPathOptions) error {
func storeChart(ctx context.Context, s *store.Layout, cfg v1.Chart, opts *action.ChartPathOptions) error {
l := log.FromContext(ctx)
l.Infof("adding chart [%s] to the store", cfg.Name)
// TODO: This shouldn't be necessary
opts.RepoURL = cfg.RepoURL
opts.Version = cfg.Version
@@ -160,11 +152,11 @@ func storeChart(ctx context.Context, s *store.Layout, cfg v1alpha1.Chart, opts *
if err != nil {
return err
}
desc, err := s.AddOCI(ctx, chrt, ref.Name())
_, err = s.AddOCI(ctx, chrt, ref.Name())
if err != nil {
return err
}
l.Infof("added 'chart' to store at [%s], with digest [%s]", ref.Name(), desc.Digest.String())
l.Infof("successfully added chart [%s]", ref.Name())
return nil
}


@@ -5,40 +5,21 @@ import (
"fmt"
"strings"
"github.com/spf13/cobra"
"oras.land/oras-go/pkg/content"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
)
type CopyOpts struct {
*RootOpts
Username string
Password string
Insecure bool
PlainHTTP bool
}
func (o *CopyOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Username, "username", "u", "", "Username when copying to an authenticated remote registry")
f.StringVarP(&o.Password, "password", "p", "", "Password when copying to an authenticated remote registry")
f.BoolVar(&o.Insecure, "insecure", false, "Toggle allowing insecure connections when copying to a remote registry")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "Toggle allowing plain http connections when copying to a remote registry")
}
func CopyCmd(ctx context.Context, o *CopyOpts, s *store.Layout, targetRef string) error {
func CopyCmd(ctx context.Context, o *flags.CopyOpts, s *store.Layout, targetRef string, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
components := strings.SplitN(targetRef, "://", 2)
switch components[0] {
case "dir":
l.Debugf("identified directory target reference")
l.Debugf("identified directory target reference of [%s]", components[1])
fs := content.NewFile(components[1])
defer fs.Close()
@@ -48,22 +29,15 @@ func CopyCmd(ctx context.Context, o *CopyOpts, s *store.Layout, targetRef string
}
case "registry":
l.Debugf("identified registry target reference")
l.Debugf("identified registry target reference of [%s]", components[1])
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
Insecure: o.Insecure,
PlainHTTP: o.PlainHTTP,
}
if ropts.Username != "" {
err := cosign.RegistryLogin(ctx, s, components[1], ropts)
if err != nil {
return err
}
}
err := cosign.LoadImages(ctx, s, components[1], ropts)
err := cosign.LoadImages(ctx, s, components[1], o.Only, ropts, ro)
if err != nil {
return err
}


@@ -2,32 +2,20 @@ package store
import (
"context"
"strings"
"encoding/json"
"fmt"
"strings"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/internal/mapper"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/internal/mapper"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
type ExtractOpts struct {
*RootOpts
DestinationDir string
}
func (o *ExtractOpts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.DestinationDir, "output", "o", "", "Directory to save contents to (defaults to current directory)")
}
func ExtractCmd(ctx context.Context, o *ExtractOpts, s *store.Layout, ref string) error {
func ExtractCmd(ctx context.Context, o *flags.ExtractOpts, s *store.Layout, ref string) error {
l := log.FromContext(ctx)
r, err := reference.Parse(ref)
@@ -35,10 +23,12 @@ func ExtractCmd(ctx context.Context, o *ExtractOpts, s *store.Layout, ref string
return err
}
// use the repository from the context and the identifier from the reference
repo := r.Context().RepositoryStr() + ":" + r.Identifier()
found := false
if err := s.Walk(func(reference string, desc ocispec.Descriptor) error {
if !strings.Contains(reference, r.Name()) {
if !strings.Contains(reference, repo) {
return nil
}
found = true


@@ -1,84 +0,0 @@
package store
import (
"context"
"errors"
"os"
"path/filepath"
"github.com/rancherfederal/hauler/pkg/layer"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/log"
)
const (
DefaultStoreName = "store"
DefaultCacheDir = "hauler"
)
type RootOpts struct {
StoreDir string
CacheDir string
}
func (o *RootOpts) AddArgs(cmd *cobra.Command) {
pf := cmd.PersistentFlags()
pf.StringVar(&o.CacheDir, "cache", "", "Location of where to store cache data (defaults to $XDG_CACHE_DIR/hauler)")
pf.StringVarP(&o.StoreDir, "store", "s", DefaultStoreName, "Location to create store at")
}
func (o *RootOpts) Store(ctx context.Context) (*store.Layout, error) {
l := log.FromContext(ctx)
dir := o.StoreDir
abs, err := filepath.Abs(dir)
if err != nil {
return nil, err
}
l.Debugf("using store at %s", abs)
if _, err := os.Stat(abs); errors.Is(err, os.ErrNotExist) {
err := os.Mkdir(abs, os.ModePerm)
if err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
// TODO: Do we want this to be configurable?
c, err := o.Cache(ctx)
if err != nil {
return nil, err
}
s, err := store.NewLayout(abs, store.WithCache(c))
if err != nil {
return nil, err
}
return s, nil
}
func (o *RootOpts) Cache(ctx context.Context) (layer.Cache, error) {
dir := o.CacheDir
if dir == "" {
// Default to $XDG_CACHE_HOME
cachedir, err := os.UserCacheDir()
if err != nil {
return nil, err
}
abs, _ := filepath.Abs(filepath.Join(cachedir, DefaultCacheDir))
if err := os.MkdirAll(abs, os.ModePerm); err != nil {
return nil, err
}
dir = abs
}
c := layer.NewFilesystemCache(dir)
return c, nil
}


@@ -4,38 +4,19 @@ import (
"context"
"encoding/json"
"fmt"
"github.com/olekukonko/tablewriter"
"os"
"sort"
"github.com/olekukonko/tablewriter"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/reference"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
type InfoOpts struct {
*RootOpts
OutputFormat string
TypeFilter string
SizeUnit string
}
func (o *InfoOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.OutputFormat, "output", "o", "table", "Output format (table, json)")
f.StringVarP(&o.TypeFilter, "type", "t", "all", "Filter on type (image, chart, file)")
// TODO: Regex/globbing
}
func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
func InfoCmd(ctx context.Context, o *flags.InfoOpts, s *store.Layout) error {
var items []item
if err := s.Walk(func(ref string, desc ocispec.Descriptor) error {
if _, ok := desc.Annotations[ocispec.AnnotationRefName]; !ok {
@@ -47,7 +28,7 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
}
defer rc.Close()
// handle multi-arch images
// handle multi-arch images
if desc.MediaType == consts.OCIImageIndexSchema || desc.MediaType == consts.DockerManifestListSchema2 {
var idx ocispec.Index
if err := json.NewDecoder(rc).Decode(&idx); err != nil {
@@ -72,13 +53,13 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
items = append(items, i)
}
}
// handle "non" multi-arch images
// handle "non" multi-arch images
} else if desc.MediaType == consts.DockerManifestSchema2 || desc.MediaType == consts.OCIManifestSchema1 {
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return err
}
rc, err := s.FetchManifest(ctx, m)
if err != nil {
return err
@@ -90,7 +71,7 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
if err := json.NewDecoder(rc).Decode(&internalManifest); err != nil {
return err
}
if internalManifest.Architecture != "" {
i := newItem(s, desc, m, fmt.Sprintf("%s/%s", internalManifest.OS, internalManifest.Architecture), o)
var emptyItem item
@@ -104,8 +85,8 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
items = append(items, i)
}
}
// handle the rest
} else {
// handle the rest
} else {
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return err
@@ -123,6 +104,11 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
return err
}
if o.ListRepos {
buildListRepos(items...)
return nil
}
// sort items by ref and arch
sort.Sort(byReferenceAndArch(items))
@@ -137,6 +123,30 @@ func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
return nil
}
func buildListRepos(items ...item) {
// Create map to track unique repository names
repos := make(map[string]bool)
for _, i := range items {
repoName := ""
for j := 0; j < len(i.Reference); j++ {
if i.Reference[j] == '/' {
repoName = i.Reference[:j]
break
}
}
if repoName == "" {
repoName = i.Reference
}
repos[repoName] = true
}
// Collect and print unique repository names
for repoName := range repos {
fmt.Println(repoName)
}
}
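
The loop above treats everything before the first '/' of a reference as the repository name and falls back to the whole reference when no slash is present. A small sketch of the same extraction against a few illustrative references:

package main

import (
	"fmt"
	"strings"
)

func main() {
	refs := []string{
		"index.docker.io/library/nginx:latest", // illustrative references only
		"ghcr.io/example-org/app:v1.0.0",
		"standalone-artifact",
	}
	seen := map[string]bool{}
	for _, ref := range refs {
		name := ref
		if i := strings.Index(ref, "/"); i >= 0 {
			name = ref[:i]
		}
		seen[name] = true
	}
	for name := range seen {
		fmt.Println(name) // index.docker.io, ghcr.io, standalone-artifact (map order varies)
	}
}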
func buildTable(items ...item) {
// Create a table for the results
table := tablewriter.NewWriter(os.Stdout)
@@ -145,6 +155,7 @@ func buildTable(items ...item) {
table.SetRowLine(false)
table.SetAutoMergeCellsByColumnIndex([]int{0})
totalSize := int64(0)
for _, i := range items {
if i.Type != "" {
row := []string{
@@ -152,11 +163,14 @@ func buildTable(items ...item) {
i.Type,
i.Platform,
fmt.Sprintf("%d", i.Layers),
i.Size,
byteCountSI(i.Size),
}
totalSize += i.Size
table.Append(row)
}
}
table.SetFooter([]string{"", "", "", "Total", byteCountSI(totalSize)})
table.Render()
}
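
byteCountSI itself is not part of this hunk; assuming it follows the common SI (base-1000) formatter, it would look roughly like this sketch:

package main

import "fmt"

// byteCountSI renders a byte count with SI units, e.g. 1234567 -> "1.2 MB".
func byteCountSI(b int64) string {
	const unit = 1000
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %cB", float64(b)/float64(div), "kMGTPE"[exp])
}

func main() {
	fmt.Println(byteCountSI(1234567)) // 1.2 MB
}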
@@ -169,11 +183,11 @@ func buildJson(item ...item) string {
}
type item struct {
Reference string
Type string
Platform string
Layers int
Size string
Reference string
Type string
Platform string
Layers int
Size int64
}
type byReferenceAndArch []item
@@ -182,22 +196,24 @@ func (a byReferenceAndArch) Len() int { return len(a) }
func (a byReferenceAndArch) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byReferenceAndArch) Less(i, j int) bool {
if a[i].Reference == a[j].Reference {
return a[i].Platform < a[j].Platform
if a[i].Type == "image" && a[j].Type == "image" {
return a[i].Platform < a[j].Platform
}
if a[i].Type == "image" {
return true
}
if a[j].Type == "image" {
return false
}
return a[i].Type < a[j].Type
}
return a[i].Reference < a[j].Reference
}
func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat string, o *InfoOpts) item {
// skip listing cosign items
if desc.Annotations["kind"] == "dev.cosignproject.cosign/atts" ||
desc.Annotations["kind"] == "dev.cosignproject.cosign/sigs" ||
desc.Annotations["kind"] == "dev.cosignproject.cosign/sboms" {
return item{}
}
func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat string, o *flags.InfoOpts) item {
var size int64 = 0
for _, l := range m.Layers {
size = +l.Size
size += l.Size
}
// Generate a human-readable content type
@@ -213,7 +229,20 @@ func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat
ctype = "image"
}
ref, err := reference.Parse(desc.Annotations[ocispec.AnnotationRefName])
switch desc.Annotations["kind"] {
case "dev.cosignproject.cosign/sigs":
ctype = "sigs"
case "dev.cosignproject.cosign/atts":
ctype = "atts"
case "dev.cosignproject.cosign/sboms":
ctype = "sbom"
}
refName := desc.Annotations["io.containerd.image.name"]
if refName == "" {
refName = desc.Annotations[ocispec.AnnotationRefName]
}
ref, err := reference.Parse(refName)
if err != nil {
return item{}
}
@@ -223,11 +252,11 @@ func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat
}
return item{
Reference: ref.Name(),
Type: ctype,
Platform: plat,
Layers: len(m.Layers),
Size: byteCountSI(size),
Reference: ref.Name(),
Type: ctype,
Platform: plat,
Layers: len(m.Layers),
Size: size,
}
}

View File

@@ -2,33 +2,42 @@ package store
import (
"context"
"io"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/mholt/archiver/v3"
"github.com/rancherfederal/hauler/pkg/content"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/archives"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
)
type LoadOpts struct {
*RootOpts
}
func (o *LoadOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
_ = f
}
// LoadCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func LoadCmd(ctx context.Context, o *LoadOpts, archiveRefs ...string) error {
// extracts the contents of an archived oci layout to an existing oci layout
func LoadCmd(ctx context.Context, o *flags.LoadOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
for _, archiveRef := range archiveRefs {
l.Infof("loading content from [%s] to [%s]", archiveRef, o.StoreDir)
err := unarchiveLayoutTo(ctx, archiveRef, o.StoreDir)
tempOverride := o.TempOverride
if tempOverride == "" {
tempOverride = os.Getenv(consts.HaulerTempDir)
}
tempDir, err := os.MkdirTemp(tempOverride, consts.DefaultHaulerTempDirName)
if err != nil {
return err
}
defer os.RemoveAll(tempDir)
l.Debugf("using temporary directory at [%s]", tempDir)
for _, fileName := range o.FileName {
l.Infof("loading haul [%s] to [%s]", fileName, o.StoreDir)
err := unarchiveLayoutTo(ctx, fileName, o.StoreDir, tempDir)
if err != nil {
return err
}
@@ -37,19 +46,46 @@ func LoadCmd(ctx context.Context, o *LoadOpts, archiveRefs ...string) error {
return nil
}
// unarchiveLayoutTo accepts an archived oci layout and extracts the contents to an existing oci layout, preserving the index
func unarchiveLayoutTo(ctx context.Context, archivePath string, dest string) error {
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
return err
}
defer os.RemoveAll(tmpdir)
// accepts an archived OCI layout, extracts the contents to an existing OCI layout, and preserves the index
func unarchiveLayoutTo(ctx context.Context, haulPath string, dest string, tempDir string) error {
l := log.FromContext(ctx)
if err := archiver.Unarchive(archivePath, tmpdir); err != nil {
if strings.HasPrefix(haulPath, "http://") || strings.HasPrefix(haulPath, "https://") {
l.Debugf("detected remote archive... starting download... [%s]", haulPath)
h := getter.NewHttp()
parsedURL, err := url.Parse(haulPath)
if err != nil {
return err
}
rc, err := h.Open(ctx, parsedURL)
if err != nil {
return err
}
defer rc.Close()
fileName := h.Name(parsedURL)
if fileName == "" {
fileName = filepath.Base(parsedURL.Path)
}
haulPath = filepath.Join(tempDir, fileName)
out, err := os.Create(haulPath)
if err != nil {
return err
}
defer out.Close()
if _, err = io.Copy(out, rc); err != nil {
return err
}
}
if err := archives.Unarchive(ctx, haulPath, tempDir); err != nil {
return err
}
s, err := store.NewLayout(tmpdir)
s, err := store.NewLayout(tempDir)
if err != nil {
return err
}
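
The remote branch above goes through the internal getter package; a stdlib-only sketch of the same idea, streaming an http(s) haul into the temporary directory before unarchiving (the URL and file name are placeholders):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// downloadToTemp streams a remote archive into tempDir and returns the local path.
func downloadToTemp(url, tempDir string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %s for %s", resp.Status, url)
	}
	dst := filepath.Join(tempDir, filepath.Base(url))
	out, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return dst, err
}

func main() {
	dir, _ := os.MkdirTemp("", "hauler-example")
	defer os.RemoveAll(dir)
	_, _ = downloadToTemp("https://example.com/haul.tar.zst", dir) // placeholder URL
}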

View File

@@ -1,37 +1,42 @@
package store
import (
"bytes"
"context"
"encoding/json"
"os"
"path"
"path/filepath"
"slices"
"github.com/mholt/archiver/v3"
"github.com/spf13/cobra"
referencev3 "github.com/distribution/distribution/v3/reference"
"github.com/google/go-containerregistry/pkg/name"
libv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"github.com/google/go-containerregistry/pkg/v1/types"
imagev1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/archives"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
)
type SaveOpts struct {
*RootOpts
FileName string
}
func (o *SaveOpts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.FileName, "filename", "f", "haul.tar.zst", "Name of archive")
}
// SaveCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string) error {
// saves a content store to store archives
func SaveCmd(ctx context.Context, o *flags.SaveOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
// TODO: Support more formats?
a := archiver.NewTarZstd()
a.OverwriteExisting = true
// maps to handle compression and archival types
compressionMap := archives.CompressionMap
archivalMap := archives.ArchivalMap
absOutputfile, err := filepath.Abs(outputFile)
// TODO: Support more formats?
// Select the correct compression and archival type based on user input
compression := compressionMap["zst"]
archival := archivalMap["tar"]
absOutputfile, err := filepath.Abs(o.FileName)
if err != nil {
return err
}
@@ -45,11 +50,194 @@ func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string) error {
return err
}
err = a.Archive([]string{"."}, absOutputfile)
// create the manifest.json file
if err := writeExportsManifest(ctx, ".", o.Platform); err != nil {
return err
}
// create the archive
err = archives.Archive(ctx, ".", absOutputfile, compression, archival)
if err != nil {
return err
}
l.Infof("saved store [%s] -> [%s]", o.StoreDir, absOutputfile)
l.Infof("saving store [%s] to archive [%s]", o.StoreDir, o.FileName)
return nil
}
type exports struct {
digests []string
records map[string]tarball.Descriptor
}
func writeExportsManifest(ctx context.Context, dir string, platformStr string) error {
l := log.FromContext(ctx)
// validate platform format
platform, err := libv1.ParsePlatform(platformStr)
if err != nil {
return err
}
oci, err := layout.FromPath(dir)
if err != nil {
return err
}
idx, err := oci.ImageIndex()
if err != nil {
return err
}
imx, err := idx.IndexManifest()
if err != nil {
return err
}
x := &exports{
digests: []string{},
records: map[string]tarball.Descriptor{},
}
for _, desc := range imx.Manifests {
l.Debugf("descriptor [%s] = [%s]", desc.Digest.String(), desc.MediaType)
if artifactType := types.MediaType(desc.ArtifactType); artifactType != "" && !artifactType.IsImage() && !artifactType.IsIndex() {
l.Debugf("descriptor [%s] <<< SKIPPING ARTIFACT [%q]", desc.Digest.String(), desc.ArtifactType)
continue
}
if desc.Annotations != nil {
// we only care about images that cosign has added to the layout index
if kind, hasKind := desc.Annotations[consts.KindAnnotationName]; hasKind {
if refName, hasRefName := desc.Annotations["io.containerd.image.name"]; hasRefName {
// branch on image (aka image manifest) or image index
switch kind {
case consts.KindAnnotationImage:
if err := x.record(ctx, idx, desc, refName); err != nil {
return err
}
case consts.KindAnnotationIndex:
l.Debugf("index [%s]: digest=[%s]... type=[%s]... size=[%d]", refName, desc.Digest.String(), desc.MediaType, desc.Size)
// when no platform is provided, warn the user of potential mismatch on import
if platform.String() == "" {
l.Warnf("specify an export platform to prevent potential platform mismatch on import of index [%s]", refName)
}
iix, err := idx.ImageIndex(desc.Digest)
if err != nil {
return err
}
ixm, err := iix.IndexManifest()
if err != nil {
return err
}
for _, ixd := range ixm.Manifests {
if ixd.MediaType.IsImage() {
// check if platform is provided, if so, skip anything that doesn't match
if platform.String() != "" {
if ixd.Platform.Architecture != platform.Architecture || ixd.Platform.OS != platform.OS {
l.Debugf("index [%s]: digest=[%s], platform=[%s/%s]: does not match the supplied platform... skipping...", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
continue
}
}
// skip 'unknown' platforms... docker does not handle them well
if ixd.Platform.Architecture == "unknown" && ixd.Platform.OS == "unknown" {
l.Debugf("index [%s]: digest=[%s], platform=[%s/%s]: matches unknown platform... skipping...", refName, desc.Digest.String(), ixd.Platform.OS, ixd.Platform.Architecture)
continue
}
if err := x.record(ctx, iix, ixd, refName); err != nil {
return err
}
}
}
default:
l.Debugf("descriptor [%s] <<< SKIPPING KIND [%q]", desc.Digest.String(), kind)
}
}
}
}
}
buf := bytes.Buffer{}
mnf := x.describe()
err = json.NewEncoder(&buf).Encode(mnf)
if err != nil {
return err
}
return oci.WriteFile(consts.ImageManifestFile, buf.Bytes(), 0666)
}
func (x *exports) describe() tarball.Manifest {
m := make(tarball.Manifest, len(x.digests))
for i, d := range x.digests {
m[i] = x.records[d]
}
return m
}
func (x *exports) record(ctx context.Context, index libv1.ImageIndex, desc libv1.Descriptor, refname string) error {
l := log.FromContext(ctx)
digest := desc.Digest.String()
image, err := index.Image(desc.Digest)
if err != nil {
return err
}
config, err := image.ConfigName()
if err != nil {
return err
}
xd, recorded := x.records[digest]
if !recorded {
// record one export record per digest
x.digests = append(x.digests, digest)
xd = tarball.Descriptor{
Config: path.Join(imagev1.ImageBlobsDir, config.Algorithm, config.Hex),
RepoTags: []string{},
Layers: []string{},
}
layers, err := image.Layers()
if err != nil {
return err
}
for _, layer := range layers {
xl, err := layer.Digest()
if err != nil {
return err
}
xd.Layers = append(xd.Layers[:], path.Join(imagev1.ImageBlobsDir, xl.Algorithm, xl.Hex))
}
}
ref, err := name.ParseReference(refname)
if err != nil {
return err
}
// record tags for the digest, eliminating dupes
switch tag := ref.(type) {
case name.Tag:
named, err := referencev3.ParseNormalizedNamed(refname)
if err != nil {
return err
}
named = referencev3.TagNameOnly(named)
repotag := referencev3.FamiliarString(named)
xd.RepoTags = append(xd.RepoTags[:], repotag)
slices.Sort(xd.RepoTags)
xd.RepoTags = slices.Compact(xd.RepoTags)
ref = tag.Digest(digest)
}
l.Debugf("image [%s]: type=%s, size=%d", ref.Name(), desc.MediaType, desc.Size)
// record export descriptor for the digest
x.records[digest] = xd
return nil
}
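
Each record collected above becomes one entry in the docker-save style manifest.json that writeExportsManifest writes into the layout. A minimal sketch of a single entry, using go-containerregistry's tarball types (the digests and tag are placeholders):

package main

import (
	"encoding/json"
	"os"

	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// one descriptor per image digest: config blob path, familiar repo tags, layer blob paths
	m := tarball.Manifest{
		{
			Config:   "blobs/sha256/<config-digest>",
			RepoTags: []string{"registry.example.com/library/nginx:1.27"},
			Layers:   []string{"blobs/sha256/<layer-digest>"},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(m)
}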

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"os"
"strings"
"github.com/distribution/distribution/v3/configuration"
dcontext "github.com/distribution/distribution/v3/context"
@@ -12,33 +13,43 @@ import (
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
_ "github.com/distribution/distribution/v3/registry/storage/driver/inmemory"
"github.com/distribution/distribution/v3/version"
"github.com/spf13/cobra"
"gopkg.in/yaml.v3"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/internal/server"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/internal/server"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
)
type ServeRegistryOpts struct {
*RootOpts
func DefaultRegistryConfig(o *flags.ServeRegistryOpts, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.RootDir},
"maintenance": configuration.Parameters{
"readonly": map[any]any{"enabled": o.ReadOnly},
},
},
}
Port int
RootDir string
ConfigFile string
if o.TLSCert != "" && o.TLSKey != "" {
cfg.HTTP.TLS.Certificate = o.TLSCert
cfg.HTTP.TLS.Key = o.TLSKey
}
storedir string
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
}
cfg.Log.Level = configuration.Loglevel(ro.LogLevel)
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
return cfg
}
func (o *ServeRegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on.")
f.StringVar(&o.RootDir, "directory", "registry", "Directory to use for backend. Defaults to $PWD/registry")
f.StringVarP(&o.ConfigFile, "config", "c", "", "Path to a config file, will override all other configs")
}
func ServeRegistryCmd(ctx context.Context, o *ServeRegistryOpts, s *store.Layout) error {
func ServeRegistryCmd(ctx context.Context, o *flags.ServeRegistryOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
@@ -47,14 +58,14 @@ func ServeRegistryCmd(ctx context.Context, o *ServeRegistryOpts, s *store.Layout
return err
}
opts := &CopyOpts{}
if err := CopyCmd(ctx, opts, s, "registry://"+tr.Registry()); err != nil {
opts := &flags.CopyOpts{}
if err := CopyCmd(ctx, opts, s, "registry://"+tr.Registry(), ro); err != nil {
return err
}
tr.Close()
cfg := o.defaultRegistryConfig()
cfg := DefaultRegistryConfig(o, rso, ro)
if o.ConfigFile != "" {
ucfg, err := loadConfig(o.ConfigFile)
if err != nil {
@@ -64,11 +75,21 @@ func ServeRegistryCmd(ctx context.Context, o *ServeRegistryOpts, s *store.Layout
}
l.Infof("starting registry on port [%d]", o.Port)
yamlConfig, err := yaml.Marshal(cfg)
if err != nil {
l.Errorf("failed to validate/output registry configuration: %v", err)
} else {
l.Infof("using registry configuration... \n%s", strings.TrimSpace(string(yamlConfig)))
}
l.Debugf("detailed registry configuration: %+v", cfg)
r, err := server.NewRegistry(ctx, cfg)
if err != nil {
return err
}
if err = r.ListenAndServe(); err != nil {
return err
}
@@ -76,44 +97,30 @@ func ServeRegistryCmd(ctx context.Context, o *ServeRegistryOpts, s *store.Layout
return nil
}
type ServeFilesOpts struct {
*RootOpts
Port int
RootDir string
storedir string
}
func (o *ServeFilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 8080, "Port to listen on.")
f.StringVar(&o.RootDir, "directory", "store-files", "Directory to use for backend. Defaults to $PWD/store-files")
}
func ServeFilesCmd(ctx context.Context, o *ServeFilesOpts, s *store.Layout) error {
func ServeFilesCmd(ctx context.Context, o *flags.ServeFilesOpts, s *store.Layout, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
opts := &CopyOpts{}
if err := CopyCmd(ctx, opts, s, "dir://"+o.RootDir); err != nil {
opts := &flags.CopyOpts{}
if err := CopyCmd(ctx, opts, s, "dir://"+o.RootDir, ro); err != nil {
return err
}
cfg := server.FileConfig{
Root: o.RootDir,
Port: o.Port,
}
f, err := server.NewFile(ctx, cfg)
f, err := server.NewFile(ctx, *o)
if err != nil {
return err
}
l.Infof("starting file server on port [%d]", o.Port)
if err := f.ListenAndServe(); err != nil {
return err
if o.TLSCert != "" && o.TLSKey != "" {
l.Infof("starting file server with tls on port [%d]", o.Port)
if err := f.ListenAndServeTLS(o.TLSCert, o.TLSKey); err != nil {
return err
}
} else {
l.Infof("starting file server on port [%d]", o.Port)
if err := f.ListenAndServe(); err != nil {
return err
}
}
return nil
@@ -127,27 +134,3 @@ func loadConfig(filename string) (*configuration.Configuration, error) {
return configuration.Parse(f)
}
func (o *ServeRegistryOpts) defaultRegistryConfig() *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.RootDir},
// TODO: Ensure this is toggleable via cli arg if necessary
// "maintenance": configuration.Parameters{"readonly.enabled": false},
},
}
// Add validation configuration
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
cfg.Log.Level = "info"
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
}
return cfg
}

View File

@@ -5,92 +5,139 @@ import (
"context"
"fmt"
"io"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"github.com/mitchellh/go-homedir"
"helm.sh/helm/v3/pkg/action"
"k8s.io/apimachinery/pkg/util/yaml"
"github.com/mitchellh/go-homedir"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
tchart "github.com/rancherfederal/hauler/pkg/collection/chart"
"github.com/rancherfederal/hauler/pkg/collection/imagetxt"
"github.com/rancherfederal/hauler/pkg/collection/k3s"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/content"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/internal/flags"
convert "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/convert"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
tchart "hauler.dev/go/hauler/pkg/collection/chart"
"hauler.dev/go/hauler/pkg/collection/imagetxt"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content"
"hauler.dev/go/hauler/pkg/cosign"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/store"
)
type SyncOpts struct {
*RootOpts
ContentFiles []string
Key string
Products []string
Platform string
}
func (o *SyncOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringSliceVarP(&o.ContentFiles, "files", "f", []string{}, "Path to content files")
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Path to the key for signature verification")
f.StringSliceVar(&o.Products, "products", []string{}, "Used by RGS Carbide customers to supply a product and version; Hauler will retrieve the images. e.g. '--products rancher=v2.7.6'")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specific platform to save, e.g. linux/amd64. Defaults to all if flag is omitted.")
}
func SyncCmd(ctx context.Context, o *SyncOpts, s *store.Layout) error {
func SyncCmd(ctx context.Context, o *flags.SyncOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
// if passed products, check for a remote manifest to retrieve and use.
for _, product := range o.Products {
l.Infof("processing content file for product: '%s'", product)
parts := strings.Split(product, "=")
tag := strings.ReplaceAll(parts[1], "+", "-")
manifestLoc := fmt.Sprintf("%s/hauler/%s-manifest.yaml:%s", consts.CarbideRegistry, parts[0], tag)
l.Infof("retrieving product manifest from: '%s'", manifestLoc)
img := v1alpha1.Image{
Name: manifestLoc,
}
err := storeImage(ctx, s, img, o.Platform)
if err != nil {
return err
}
err = ExtractCmd(ctx, &ExtractOpts{RootOpts: o.RootOpts}, s, fmt.Sprintf("hauler/%s-manifest.yaml:%s", parts[0],tag))
if err != nil {
return err
}
filename := fmt.Sprintf("%s-manifest.yaml", parts[0])
tempOverride := o.TempOverride
fi, err := os.Open(filename)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s)
if err != nil {
return err
}
if tempOverride == "" {
tempOverride = os.Getenv(consts.HaulerTempDir)
}
// if passed a local manifest, process it
for _, filename := range o.ContentFiles {
l.Debugf("processing content file: '%s'", filename)
fi, err := os.Open(filename)
tempDir, err := os.MkdirTemp(tempOverride, consts.DefaultHaulerTempDirName)
if err != nil {
return err
}
defer os.RemoveAll(tempDir)
l.Debugf("using temporary directory at [%s]", tempDir)
// if passed products, check for a remote manifest to retrieve and use
for _, productName := range o.Products {
l.Infof("processing product manifest for [%s] to store [%s]", productName, o.StoreDir)
parts := strings.Split(productName, "=")
tag := strings.ReplaceAll(parts[1], "+", "-")
ProductRegistry := o.ProductRegistry // cli flag
// if no cli flag use CarbideRegistry.
if o.ProductRegistry == "" {
ProductRegistry = consts.CarbideRegistry
}
manifestLoc := fmt.Sprintf("%s/hauler/%s-manifest.yaml:%s", ProductRegistry, parts[0], tag)
l.Infof("fetching product manifest from [%s]", manifestLoc)
img := v1.Image{
Name: manifestLoc,
}
err := storeImage(ctx, s, img, o.Platform, rso, ro)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s)
err = ExtractCmd(ctx, &flags.ExtractOpts{StoreRootOpts: o.StoreRootOpts}, s, fmt.Sprintf("hauler/%s-manifest.yaml:%s", parts[0], tag))
if err != nil {
return err
}
fileName := fmt.Sprintf("%s-manifest.yaml", parts[0])
fi, err := os.Open(fileName)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s, rso, ro)
if err != nil {
return err
}
l.Infof("processing completed successfully")
}
// If passed a local manifest, process it
for _, fileName := range o.FileName {
l.Infof("processing manifest [%s] to store [%s]", fileName, o.StoreDir)
haulPath := fileName
if strings.HasPrefix(haulPath, "http://") || strings.HasPrefix(haulPath, "https://") {
l.Debugf("detected remote manifest... starting download... [%s]", haulPath)
h := getter.NewHttp()
parsedURL, err := url.Parse(haulPath)
if err != nil {
return err
}
rc, err := h.Open(ctx, parsedURL)
if err != nil {
return err
}
defer rc.Close()
fileName := h.Name(parsedURL)
if fileName == "" {
fileName = filepath.Base(parsedURL.Path)
}
haulPath = filepath.Join(tempDir, fileName)
out, err := os.Create(haulPath)
if err != nil {
return err
}
defer out.Close()
if _, err = io.Copy(out, rc); err != nil {
return err
}
}
fi, err := os.Open(haulPath)
if err != nil {
return err
}
defer fi.Close()
err = processContent(ctx, fi, o, s, rso, ro)
if err != nil {
return err
}
l.Infof("processing completed successfully")
}
return nil
}
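
For context on the products loop above: each product=version pair is rewritten into an image reference for that product's manifest, with '+' replaced by '-' because OCI tags may not contain '+'. A small sketch of the mapping (the registry and product values are illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	product := "rancher=v2.9.3+up0.1.0" // illustrative product=version pair
	parts := strings.SplitN(product, "=", 2)
	tag := strings.ReplaceAll(parts[1], "+", "-") // OCI tags cannot contain '+'
	registry := "registry.example.com"            // stands in for the product registry
	fmt.Printf("%s/hauler/%s-manifest.yaml:%s\n", registry, parts[0], tag)
	// registry.example.com/hauler/rancher-manifest.yaml:v2.9.3-up0.1.0
}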
func processContent(ctx context.Context, fi *os.File, o *SyncOpts, s *store.Layout) error {
func processContent(ctx context.Context, fi *os.File, o *flags.SyncOpts, s *store.Layout, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
reader := yaml.NewYAMLReader(bufio.NewReader(fi))
@@ -104,144 +151,470 @@ func processContent(ctx context.Context, fi *os.File, o *SyncOpts, s *store.Layo
if err != nil {
return err
}
docs = append(docs, raw)
}
for _, doc := range docs {
obj, err := content.Load(doc)
if err != nil {
l.Debugf("skipping sync of unknown content")
l.Warnf("skipping syncing due to %v", err)
continue
}
l.Infof("syncing [%s] to store", obj.GroupVersionKind().String())
gvk := obj.GroupVersionKind()
l.Infof("syncing content [%s] with [kind=%s] to store [%s]", gvk.GroupVersion(), gvk.Kind, o.StoreDir)
// TODO: Should type switch instead...
switch obj.GroupVersionKind().Kind {
case v1alpha1.FilesContentKind:
var cfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
switch gvk.Kind {
for _, f := range cfg.Spec.Files {
err := storeFile(ctx, s, f)
if err != nil {
case consts.FilesContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release !!! DEPRECATION WARNING !!!", gvk.Version)
var alphaCfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
}
case v1alpha1.ImagesContentKind:
var cfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, i := range cfg.Spec.Images {
// Check if the user provided a key.
if o.Key != "" || i.Key != "" {
key := o.Key
if i.Key != "" {
key, err = homedir.Expand(i.Key)
var v1Cfg v1.Files
if err := convert.ConvertFiles(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, f := range v1Cfg.Spec.Files {
if err := storeFile(ctx, s, f); err != nil {
return err
}
l.Debugf("key for image [%s]", key)
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, key, i.Name)
}
case "v1":
var cfg v1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, f := range cfg.Spec.Files {
if err := storeFile(ctx, s, f); err != nil {
return err
}
}
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ImagesContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release !!! DEPRECATION WARNING !!!", gvk.Version)
var alphaCfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.Images
if err := convert.ConvertImages(&alphaCfg, &v1Cfg); err != nil {
return err
}
a := v1Cfg.GetAnnotations()
for _, i := range v1Cfg.Spec.Images {
if a[consts.ImageAnnotationRegistry] != "" || o.Registry != "" {
newRef, _ := reference.Parse(i.Name)
newReg := o.Registry
if o.Registry == "" && a[consts.ImageAnnotationRegistry] != "" {
newReg = a[consts.ImageAnnotationRegistry]
}
if newRef.Context().RegistryStr() == "" {
newRef, err = reference.Relocate(i.Name, newReg)
if err != nil {
return err
}
}
i.Name = newRef.Name()
}
hasAnnotationIdentityOptions := a[consts.ImageAnnotationCertIdentityRegexp] != "" || a[consts.ImageAnnotationCertIdentity] != ""
hasCliIdentityOptions := o.CertIdentityRegexp != "" || o.CertIdentity != ""
hasImageIdentityOptions := i.CertIdentityRegexp != "" || i.CertIdentity != ""
needsKeylessVerificaton := hasAnnotationIdentityOptions || hasCliIdentityOptions || hasImageIdentityOptions
needsPubKeyVerification := a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != ""
if needsPubKeyVerification {
key := o.Key
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
if err != nil {
return err
}
}
if i.Key != "" {
key, err = homedir.Expand(i.Key)
if err != nil {
return err
}
}
l.Debugf("key for image [%s]", key)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifySignature(ctx, s, key, tlog, i.Name, rso, ro); err != nil {
l.Errorf("signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("signature verified for image [%s]", i.Name)
} else if needsKeylessVerificaton { //Keyless signature verification
certIdentityRegexp := o.CertIdentityRegexp
if o.CertIdentityRegexp == "" && a[consts.ImageAnnotationCertIdentityRegexp] != "" {
certIdentityRegexp = a[consts.ImageAnnotationCertIdentityRegexp]
}
if i.CertIdentityRegexp != "" {
certIdentityRegexp = i.CertIdentityRegexp
}
l.Debugf("certIdentityRegexp for image [%s]", certIdentityRegexp)
certIdentity := o.CertIdentity
if o.CertIdentity == "" && a[consts.ImageAnnotationCertIdentity] != "" {
certIdentity = a[consts.ImageAnnotationCertIdentity]
}
if i.CertIdentity != "" {
certIdentity = i.CertIdentity
}
l.Debugf("certIdentity for image [%s]", certIdentity)
certOidcIssuer := o.CertOidcIssuer
if o.CertOidcIssuer == "" && a[consts.ImageAnnotationCertOidcIssuer] != "" {
certOidcIssuer = a[consts.ImageAnnotationCertOidcIssuer]
}
if i.CertOidcIssuer != "" {
certOidcIssuer = i.CertOidcIssuer
}
l.Debugf("certOidcIssuer for image [%s]", certOidcIssuer)
certOidcIssuerRegexp := o.CertOidcIssuerRegexp
if o.CertOidcIssuerRegexp == "" && a[consts.ImageAnnotationCertOidcIssuerRegexp] != "" {
certOidcIssuerRegexp = a[consts.ImageAnnotationCertOidcIssuerRegexp]
}
if i.CertOidcIssuerRegexp != "" {
certOidcIssuerRegexp = i.CertOidcIssuerRegexp
}
l.Debugf("certOidcIssuerRegexp for image [%s]", certOidcIssuerRegexp)
certGithubWorkflowRepository := o.CertGithubWorkflowRepository
if o.CertGithubWorkflowRepository == "" && a[consts.ImageAnnotationCertGithubWorkflowRepository] != "" {
certGithubWorkflowRepository = a[consts.ImageAnnotationCertGithubWorkflowRepository]
}
if i.CertGithubWorkflowRepository != "" {
certGithubWorkflowRepository = i.CertGithubWorkflowRepository
}
l.Debugf("certGithubWorkflowRepository for image [%s]", certGithubWorkflowRepository)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifyKeylessSignature(ctx, s, certIdentity, certIdentityRegexp, certOidcIssuer, certOidcIssuerRegexp, certGithubWorkflowRepository, tlog, i.Name, rso, ro); err != nil {
l.Errorf("keyless signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("keyless signature verified for image [%s]", i.Name)
}
platform := o.Platform
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
if i.Platform != "" {
platform = i.Platform
}
if err := storeImage(ctx, s, i, platform, rso, ro); err != nil {
return err
}
}
s.CopyAll(ctx, s.OCI, nil)
case "v1":
var cfg v1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
a := cfg.GetAnnotations()
for _, i := range cfg.Spec.Images {
if a[consts.ImageAnnotationRegistry] != "" || o.Registry != "" {
newRef, _ := reference.Parse(i.Name)
newReg := o.Registry
if o.Registry == "" && a[consts.ImageAnnotationRegistry] != "" {
newReg = a[consts.ImageAnnotationRegistry]
}
if newRef.Context().RegistryStr() == "" {
newRef, err = reference.Relocate(i.Name, newReg)
if err != nil {
return err
}
}
i.Name = newRef.Name()
}
hasAnnotationIdentityOptions := a[consts.ImageAnnotationCertIdentityRegexp] != "" || a[consts.ImageAnnotationCertIdentity] != ""
hasCliIdentityOptions := o.CertIdentityRegexp != "" || o.CertIdentity != ""
hasImageIdentityOptions := i.CertIdentityRegexp != "" || i.CertIdentity != ""
needsKeylessVerificaton := hasAnnotationIdentityOptions || hasCliIdentityOptions || hasImageIdentityOptions
needsPubKeyVerification := a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != ""
if needsPubKeyVerification {
key := o.Key
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
if err != nil {
return err
}
}
if i.Key != "" {
key, err = homedir.Expand(i.Key)
if err != nil {
return err
}
}
l.Debugf("key for image [%s]", key)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifySignature(ctx, s, key, tlog, i.Name, rso, ro); err != nil {
l.Errorf("signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("signature verified for image [%s]", i.Name)
} else if needsKeylessVerificaton { //Keyless signature verification
certIdentityRegexp := o.CertIdentityRegexp
if o.CertIdentityRegexp == "" && a[consts.ImageAnnotationCertIdentityRegexp] != "" {
certIdentityRegexp = a[consts.ImageAnnotationCertIdentityRegexp]
}
if i.CertIdentityRegexp != "" {
certIdentityRegexp = i.CertIdentityRegexp
}
l.Debugf("certIdentityRegexp for image [%s]", certIdentityRegexp)
certIdentity := o.CertIdentity
if o.CertIdentity == "" && a[consts.ImageAnnotationCertIdentity] != "" {
certIdentity = a[consts.ImageAnnotationCertIdentity]
}
if i.CertIdentity != "" {
certIdentity = i.CertIdentity
}
l.Debugf("certIdentity for image [%s]", certIdentity)
certOidcIssuer := o.CertOidcIssuer
if o.CertOidcIssuer == "" && a[consts.ImageAnnotationCertOidcIssuer] != "" {
certOidcIssuer = a[consts.ImageAnnotationCertOidcIssuer]
}
if i.CertOidcIssuer != "" {
certOidcIssuer = i.CertOidcIssuer
}
l.Debugf("certOidcIssuer for image [%s]", certOidcIssuer)
certOidcIssuerRegexp := o.CertOidcIssuerRegexp
if o.CertOidcIssuerRegexp == "" && a[consts.ImageAnnotationCertOidcIssuerRegexp] != "" {
certOidcIssuerRegexp = a[consts.ImageAnnotationCertOidcIssuerRegexp]
}
if i.CertOidcIssuerRegexp != "" {
certOidcIssuerRegexp = i.CertOidcIssuerRegexp
}
l.Debugf("certOidcIssuerRegexp for image [%s]", certOidcIssuerRegexp)
certGithubWorkflowRepository := o.CertGithubWorkflowRepository
if o.CertGithubWorkflowRepository == "" && a[consts.ImageAnnotationCertGithubWorkflowRepository] != "" {
certGithubWorkflowRepository = a[consts.ImageAnnotationCertGithubWorkflowRepository]
}
if i.CertGithubWorkflowRepository != "" {
certGithubWorkflowRepository = i.CertGithubWorkflowRepository
}
l.Debugf("certGithubWorkflowRepository for image [%s]", certGithubWorkflowRepository)
tlog := o.Tlog
if !o.Tlog && a[consts.ImageAnnotationTlog] == "true" {
tlog = true
}
if i.Tlog {
tlog = i.Tlog
}
l.Debugf("transparency log for verification [%b]", tlog)
if err := cosign.VerifyKeylessSignature(ctx, s, certIdentity, certIdentityRegexp, certOidcIssuer, certOidcIssuerRegexp, certGithubWorkflowRepository, tlog, i.Name, rso, ro); err != nil {
l.Errorf("keyless signature verification failed for image [%s]... skipping...\n%v", i.Name, err)
continue
}
l.Infof("keyless signature verified for image [%s]", i.Name)
}
platform := o.Platform
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
if i.Platform != "" {
platform = i.Platform
}
if err := storeImage(ctx, s, i, platform, rso, ro); err != nil {
return err
}
}
s.CopyAll(ctx, s.OCI, nil)
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ChartsContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release !!! DEPRECATION WARNING !!!", gvk.Version)
var alphaCfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.Charts
if err := convert.ConvertCharts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, ch := range v1Cfg.Spec.Charts {
if err := storeChart(ctx, s, ch, &action.ChartPathOptions{}); err != nil {
return err
}
}
case "v1":
var cfg v1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, ch := range cfg.Spec.Charts {
if err := storeChart(ctx, s, ch, &action.ChartPathOptions{}); err != nil {
return err
}
}
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
case consts.ChartsCollectionKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release !!! DEPRECATION WARNING !!!", gvk.Version)
var alphaCfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
var v1Cfg v1.ThickCharts
if err := convert.ConvertThickCharts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, chObj := range v1Cfg.Spec.Charts {
tc, err := tchart.NewThickChart(chObj, &action.ChartPathOptions{
RepoURL: chObj.RepoURL,
Version: chObj.Version,
})
if err != nil {
l.Errorf("signature verification failed for image [%s]. ** hauler will skip adding this image to the store **:\n%v", i.Name, err)
continue
return err
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
return err
}
l.Infof("signature verified for image [%s]", i.Name)
}
// Check if the user provided a platform.
platform := o.Platform
if i.Platform != "" {
platform = i.Platform
}
err = storeImage(ctx, s, i, platform)
if err != nil {
case "v1":
var cfg v1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
}
// sync with local index
s.CopyAll(ctx, s.OCI, nil)
for _, chObj := range cfg.Spec.Charts {
tc, err := tchart.NewThickChart(chObj, &action.ChartPathOptions{
RepoURL: chObj.RepoURL,
Version: chObj.Version,
})
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
return err
}
}
case v1alpha1.ChartsContentKind:
var cfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
for _, ch := range cfg.Spec.Charts {
// TODO: Provide a way to configure syncs
err := storeChart(ctx, s, ch, &action.ChartPathOptions{})
if err != nil {
case consts.ImageTxtsContentKind:
switch gvk.Version {
case "v1alpha1":
l.Warnf("!!! DEPRECATION WARNING !!! apiVersion [%s] will be removed in a future release !!! DEPRECATION WARNING !!!", gvk.Version)
var alphaCfg v1alpha1.ImageTxts
if err := yaml.Unmarshal(doc, &alphaCfg); err != nil {
return err
}
}
case v1alpha1.K3sCollectionKind:
var cfg v1alpha1.K3s
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
k, err := k3s.NewK3s(cfg.Spec.Version)
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, k); err != nil {
return err
}
case v1alpha1.ChartsCollectionKind:
var cfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfg := range cfg.Spec.Charts {
tc, err := tchart.NewThickChart(cfg, &action.ChartPathOptions{
RepoURL: cfg.RepoURL,
Version: cfg.Version,
})
if err != nil {
var v1Cfg v1.ImageTxts
if err := convert.ConvertImageTxts(&alphaCfg, &v1Cfg); err != nil {
return err
}
for _, cfgIt := range v1Cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", v1Cfg.Name, err)
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", v1Cfg.Name, err)
}
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
case "v1":
var cfg v1.ImageTxts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
}
case v1alpha1.ImageTxtsContentKind:
var cfg v1alpha1.ImageTxts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfgIt := range cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", cfg.Name, err)
for _, cfgIt := range cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", cfg.Name, err)
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", cfg.Name, err)
}
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", cfg.Name, err)
}
default:
return fmt.Errorf("unsupported version [%s] for kind [%s]... valid versions are [v1 and v1alpha1]", gvk.Version, gvk.Kind)
}
default:
return fmt.Errorf("unrecognized content/collection type: %s", obj.GroupVersionKind().String())
return fmt.Errorf("unsupported kind [%s]... valid kinds are [Files, Images, Charts, ThickCharts, ImageTxts]", gvk.Kind)
}
}
return nil
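
Throughout the verification and platform handling above, the effective value of each string-valued option (key path, cert identity, platform, and so on) is resolved with the same precedence: the per-image field wins over the CLI flag, which wins over the manifest-level annotation. A condensed sketch of that rule:

package main

import "fmt"

// resolve mirrors the precedence used in processContent for string-valued options.
func resolve(imageVal, cliVal, annotationVal string) string {
	if imageVal != "" {
		return imageVal
	}
	if cliVal != "" {
		return cliVal
	}
	return annotationVal
}

func main() {
	// no per-image value, so the CLI flag wins over the annotation
	fmt.Println(resolve("", "linux/amd64", "linux/arm64")) // linux/amd64
}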

View File

@@ -5,11 +5,12 @@ import (
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/internal/version"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/internal/version"
)
func addVersion(parent *cobra.Command) {
var json bool
func addVersion(parent *cobra.Command, ro *flags.CliRootOpts) {
o := &flags.VersionOpts{}
cmd := &cobra.Command{
Use: "version",
@@ -22,7 +23,7 @@ func addVersion(parent *cobra.Command) {
v.FontName = "starwars"
cmd.SetOut(cmd.OutOrStdout())
if json {
if o.JSON {
out, err := v.JSONString()
if err != nil {
return fmt.Errorf("unable to generate JSON from version info: %w", err)
@@ -34,7 +35,7 @@ func addVersion(parent *cobra.Command) {
return nil
},
}
cmd.Flags().BoolVar(&json, "json", false, "toggle output in JSON")
o.AddFlags(cmd)
parent.AddCommand(cmd)
}

View File

@@ -3,16 +3,12 @@ package main
import (
"context"
"os"
"embed"
"github.com/rancherfederal/hauler/cmd/hauler/cli"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
"hauler.dev/go/hauler/cmd/hauler/cli"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/log"
)
//go:embed binaries/*
var binaries embed.FS
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -20,12 +16,9 @@ func main() {
logger := log.NewLogger(os.Stdout)
ctx = logger.WithContext(ctx)
// ensure cosign binary is available
if err := cosign.EnsureBinaryExists(ctx, binaries); err != nil {
logger.Errorf("%v", err)
}
if err := cli.New().ExecuteContext(ctx); err != nil {
if err := cli.New(ctx, &flags.CliRootOpts{}).ExecuteContext(ctx); err != nil {
logger.Errorf("%v", err)
cancel()
os.Exit(1)
}
}

go.mod
View File

@@ -1,175 +1,337 @@
module github.com/rancherfederal/hauler
module hauler.dev/go/hauler
go 1.21
go 1.23.4
toolchain go1.24.2
replace github.com/sigstore/cosign/v2 => github.com/hauler-dev/cosign/v2 v2.4.3-0.20250404165522-3a44ef646a65
require (
github.com/common-nighthawk/go-figure v0.0.0-20210622060536-734e95fb86be
github.com/containerd/containerd v1.7.11
github.com/containerd/containerd v1.7.27
github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2
github.com/docker/go-metrics v0.0.1
github.com/google/go-containerregistry v0.16.1
github.com/google/go-containerregistry v0.20.3
github.com/gorilla/handlers v1.5.1
github.com/gorilla/mux v1.8.0
github.com/mholt/archiver/v3 v3.5.1
github.com/gorilla/mux v1.8.1
github.com/mholt/archives v0.1.0
github.com/mitchellh/go-homedir v1.1.0
github.com/olekukonko/tablewriter v0.0.5
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0-rc6
github.com/opencontainers/image-spec v1.1.0
github.com/pkg/errors v0.9.1
github.com/rs/zerolog v1.31.0
github.com/sigstore/cosign/v2 v2.4.1
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.10.0
github.com/spf13/cobra v1.8.0
golang.org/x/sync v0.6.0
helm.sh/helm/v3 v3.14.0
k8s.io/apimachinery v0.29.0
k8s.io/client-go v0.29.0
github.com/spf13/afero v1.11.0
github.com/spf13/cobra v1.8.1
golang.org/x/sync v0.12.0
gopkg.in/yaml.v3 v3.0.1
helm.sh/helm/v3 v3.17.3
k8s.io/apimachinery v0.32.2
k8s.io/client-go v0.32.2
oras.land/oras-go v1.2.5
)
require (
cloud.google.com/go/auth v0.14.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.7 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
cuelabs.dev/go/oci/ociregistry v0.0.0-20241125120445-2c00c104c6e1 // indirect
cuelang.org/go v0.12.0 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/AliyunContainerService/ack-ram-tool/pkg/credentials/provider v0.14.0 // indirect
github.com/Azure/azure-sdk-for-go v68.0.0+incompatible // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.29 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.23 // indirect
github.com/Azure/go-autorest/autorest/azure/auth v0.5.12 // indirect
github.com/Azure/go-autorest/autorest/azure/cli v0.4.6 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/BurntSushi/toml v1.4.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.2.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Masterminds/semver/v3 v3.3.1 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/hcsshim v0.11.4 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/OneOfOne/xxhash v1.2.8 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20230923063757-afb1ddc0824c // indirect
github.com/STARRY-S/zip v0.2.1 // indirect
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d // indirect
github.com/andybalholm/brotli v1.0.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect
github.com/ThalesIgnite/crypto11 v1.2.5 // indirect
github.com/agnivade/levenshtein v1.2.0 // indirect
github.com/alibabacloud-go/alibabacloud-gateway-spi v0.0.4 // indirect
github.com/alibabacloud-go/cr-20160607 v1.0.1 // indirect
github.com/alibabacloud-go/cr-20181201 v1.0.10 // indirect
github.com/alibabacloud-go/darabonba-openapi v0.2.1 // indirect
github.com/alibabacloud-go/debug v1.0.0 // indirect
github.com/alibabacloud-go/endpoint-util v1.1.1 // indirect
github.com/alibabacloud-go/openapi-util v0.1.0 // indirect
github.com/alibabacloud-go/tea v1.2.1 // indirect
github.com/alibabacloud-go/tea-utils v1.4.5 // indirect
github.com/alibabacloud-go/tea-xml v1.1.3 // indirect
github.com/aliyun/credentials-go v1.3.2 // indirect
github.com/andybalholm/brotli v1.1.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aws/aws-sdk-go-v2 v1.34.0 // indirect
github.com/aws/aws-sdk-go-v2/config v1.29.2 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.55 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.25 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.29 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.29 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ecr v1.20.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ecrpublic v1.18.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.10 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.12 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.11 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.33.10 // indirect
github.com/aws/smithy-go v1.22.2 // indirect
github.com/awslabs/amazon-ecr-credential-helper/ecr-login v0.0.0-20231024185945-8841054dbdb8 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect
github.com/bodgit/sevenzip v1.6.0 // indirect
github.com/bodgit/windows v1.0.1 // indirect
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 // indirect
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd // indirect
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b // indirect
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/buildkite/agent/v3 v3.91.0 // indirect
github.com/buildkite/go-pipeline v0.13.3 // indirect
github.com/buildkite/interpolate v0.1.5 // indirect
github.com/buildkite/roko v1.3.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/chrismellard/docker-credential-acr-env v0.0.0-20230304212654-82a0ddb27589 // indirect
github.com/chzyer/readline v1.5.1 // indirect
github.com/clbanning/mxj/v2 v2.7.0 // indirect
github.com/cloudflare/circl v1.3.7 // indirect
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.16.3 // indirect
github.com/coreos/go-oidc/v3 v3.12.0 // indirect
github.com/cyberphone/json-canonicalization v0.0.0-20231011164504-785e29786b46 // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/digitorus/pkcs7 v0.0.0-20230818184609-3a137a874352 // indirect
github.com/digitorus/timestamp v0.0.0-20231217203849-220c5c2851b7 // indirect
github.com/dimchansky/utfbom v1.1.1 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v27.5.0+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker v25.0.1+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/docker v27.5.0+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.2 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/emicklei/proto v1.13.4 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.16.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-chi/chi v4.1.2+incompatible // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-openapi/analysis v0.23.0 // indirect
github.com/go-openapi/errors v0.22.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/loads v0.22.0 // indirect
github.com/go-openapi/runtime v0.28.0 // indirect
github.com/go-openapi/spec v0.21.0 // indirect
github.com/go-openapi/strfmt v0.23.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-openapi/validate v0.24.0 // indirect
github.com/go-piv/piv-go/v2 v2.3.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/gomodule/redigo v1.8.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/certificate-transparency-go v1.3.1 // indirect
github.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-github/v55 v55.0.0 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
github.com/googleapis/gax-go/v2 v2.14.1 // indirect
github.com/gorilla/websocket v1.5.1 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/huandu/xstrings v1.4.0 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/hashicorp/go-retryablehttp v0.7.7 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/hcl v1.0.1-vault-5 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/in-toto/attestation v1.1.0 // indirect
github.com/in-toto/in-toto-golang v0.9.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.3.5 // indirect
github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267 // indirect
github.com/jmespath/go-jmespath v0.4.1-0.20220621161143-b0104c826a24 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.5 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/letsencrypt/boulder v0.0.0-20240620165639-de9c06129bec // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.9 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/manifoldco/promptui v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20231216201459-8508981c8b6c // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/mozillazg/docker-credential-acr-helper v0.4.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/nwaples/rardecode v1.1.0 // indirect
github.com/nozzle/throttler v0.0.0-20180817012639-2ea982251481 // indirect
github.com/nwaples/rardecode/v2 v2.0.0-beta.4.0.20241112120701-034e449c6e78 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/oleiade/reflections v1.1.0 // indirect
github.com/open-policy-agent/opa v1.1.0 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pborman/uuid v1.2.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pierrec/lz4/v4 v4.1.2 // indirect
github.com/prometheus/client_golang v1.16.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/rubenv/sql-migrate v1.5.2 // indirect
github.com/pierrec/lz4/v4 v4.1.21 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/protocolbuffers/txtpbfmt v0.0.0-20241112170944-20d2c9ebc01d // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/rogpeppe/go-internal v1.13.2-0.20241226121412-a5dc8ff20d0a // indirect
github.com/rubenv/sql-migrate v1.7.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/ulikunitz/xz v0.5.9 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/sagikazarmark/locafero v0.4.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/sassoftware/relic v7.2.1+incompatible // indirect
github.com/secure-systems-lab/go-securesystemslib v0.9.0 // indirect
github.com/segmentio/ksuid v1.0.4 // indirect
github.com/shibumi/go-pathspec v1.3.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sigstore/fulcio v1.6.6 // indirect
github.com/sigstore/protobuf-specs v0.4.0 // indirect
github.com/sigstore/rekor v1.3.9 // indirect
github.com/sigstore/sigstore v1.8.12 // indirect
github.com/sigstore/sigstore-go v0.7.0 // indirect
github.com/sigstore/timestamp-authority v1.2.4 // indirect
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 // indirect
github.com/sorairolake/lzip-go v0.3.5 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/spf13/viper v1.19.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d // indirect
github.com/tchap/go-patricia/v2 v2.3.2 // indirect
github.com/thales-e-security/pool v0.0.2 // indirect
github.com/therootcompany/xz v1.0.1 // indirect
github.com/theupdateframework/go-tuf v0.7.0 // indirect
github.com/theupdateframework/go-tuf/v2 v2.0.2 // indirect
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 // indirect
github.com/tjfoc/gmsm v1.4.1 // indirect
github.com/transparency-dev/merkle v0.0.2 // indirect
github.com/ulikunitz/xz v0.5.12 // indirect
github.com/vbatts/tar-split v0.11.6 // indirect
github.com/withfig/autocomplete-tools/integrations/cobra v1.2.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yashtewari/glob-intersection v0.2.0 // indirect
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 // indirect
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 // indirect
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/crypto v0.18.0 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/oauth2 v0.10.0 // indirect
golang.org/x/sys v0.16.0 // indirect
golang.org/x/term v0.16.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect
google.golang.org/grpc v1.58.3 // indirect
google.golang.org/protobuf v1.31.0 // indirect
github.com/zeebo/errs v1.4.0 // indirect
gitlab.com/gitlab-org/api/client-go v0.121.0 // indirect
go.mongodb.org/mongo-driver v1.14.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0 // indirect
go.opentelemetry.io/otel v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/otel/sdk v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
go4.org v0.0.0-20230225012048-214862532bf5 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f // indirect
golang.org/x/mod v0.22.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/oauth2 v0.25.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/time v0.9.0 // indirect
google.golang.org/api v0.219.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250124145028-65684f501c47 // indirect
google.golang.org/grpc v1.70.0 // indirect
google.golang.org/protobuf v1.36.4 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.29.0 // indirect
k8s.io/apiextensions-apiserver v0.29.0 // indirect
k8s.io/apiserver v0.29.0 // indirect
k8s.io/cli-runtime v0.29.0 // indirect
k8s.io/component-base v0.29.0 // indirect
k8s.io/klog/v2 v2.110.1 // indirect
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
k8s.io/kubectl v0.29.0 // indirect
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
k8s.io/api v0.32.2 // indirect
k8s.io/apiextensions-apiserver v0.32.2 // indirect
k8s.io/apiserver v0.32.2 // indirect
k8s.io/cli-runtime v0.32.2 // indirect
k8s.io/component-base v0.32.2 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
k8s.io/kubectl v0.32.2 // indirect
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/kustomize/api v0.18.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.18.1 // indirect
sigs.k8s.io/release-utils v0.11.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)

1291
go.sum

File diff suppressed because it is too large

View File

@@ -2,28 +2,46 @@
# Usage:
# - curl -sfL... | ENV_VAR=... bash
# - ENV_VAR=... bash ./install.sh
# - ./install.sh ENV_VAR=...
# Example:
# - ENV_VAR=... ./install.sh
#
# Install Usage:
# Install Latest Release
# - curl -sfL https://get.hauler.dev | bash
# - ./install.sh
#
# Install Specific Release
# - curl -sfL https://get.hauler.dev | HAULER_VERSION=0.4.2 bash
# - curl -sfL https://get.hauler.dev | HAULER_VERSION=1.0.0 bash
# - HAULER_VERSION=1.0.0 ./install.sh
#
# Set Install Directory
# - curl -sfL https://get.hauler.dev | HAULER_INSTALL_DIR=/usr/local/bin bash
# - HAULER_INSTALL_DIR=/usr/local/bin ./install.sh
#
# Set Hauler Directory
# - curl -sfL https://get.hauler.dev | HAULER_DIR=$HOME/.hauler bash
# - HAULER_DIR=$HOME/.hauler ./install.sh
#
# Debug Usage:
# - curl -sfL https://get.hauler.dev | HAULER_DEBUG=true bash
# - HAULER_DEBUG=true ./install.sh
#
# Uninstall Usage:
# - curl -sfL https://get.hauler.dev | HAULER_UNINSTALL=true bash
# - HAULER_UNINSTALL=true ./install.sh
#
# Documentation:
# - https://hauler.dev
# - https://github.com/rancherfederal/hauler
# set functions for debugging/logging
function info {
echo && echo "[INFO] Hauler: $1"
}
# - https://github.com/hauler-dev/hauler
# set functions for logging
function verbose {
echo "$1"
}
function info {
echo && echo "[INFO] Hauler: $1"
}
function warn {
echo && echo "[WARN] Hauler: $1"
}
@@ -33,124 +51,173 @@ function fatal {
exit 1
}
# check for required dependencies
for cmd in curl sed awk openssl tar rm; do
# debug hauler from argument or environment variable
if [ "${HAULER_DEBUG}" = "true" ]; then
set -x
fi
# start hauler preflight checks
info "Starting Preflight Checks..."
# check for required packages and dependencies
for cmd in echo curl grep sed rm mkdir awk openssl tar install source; do
if ! command -v "$cmd" &> /dev/null; then
fatal "$cmd is not installed"
fatal "$cmd is required to install Hauler"
fi
done
# start hauler installation
info "Starting Installation..."
# set install directory from argument or environment variable
HAULER_INSTALL_DIR=${HAULER_INSTALL_DIR:-/usr/local/bin}
# set version with an environment variable
version=${HAULER_VERSION:-$(curl -s https://api.github.com/repos/rancherfederal/hauler/releases/latest | grep '"tag_name":' | sed 's/.*"v\([^"]*\)".*/\1/')}
# ensure install directory exists and/or create it
if [ ! -d "${HAULER_INSTALL_DIR}" ]; then
mkdir -p "${HAULER_INSTALL_DIR}" || fatal "Failed to Create Install Directory: ${HAULER_INSTALL_DIR}"
fi
# set version with an argument
while [[ $# -gt 0 ]]; do
case "$1" in
HAULER_VERSION=*)
version="${1#*=}"
shift
;;
*)
shift
;;
esac
done
# ensure install directory is writable (by user or root privileges)
if [ ! -w "${HAULER_INSTALL_DIR}" ]; then
if [ "$(id -u)" -ne 0 ]; then
fatal "Root privileges are required to install Hauler to Directory: ${HAULER_INSTALL_DIR}"
fi
fi
# uninstall hauler from argument or environment variable
if [ "${HAULER_UNINSTALL}" = "true" ]; then
# remove the hauler binary
rm -rf "${HAULER_INSTALL_DIR}/hauler" || fatal "Failed to Remove Hauler from ${HAULER_INSTALL_DIR}"
# remove the hauler directory
rm -rf "${HAULER_DIR}" || fatal "Failed to Remove Hauler Directory: ${HAULER_DIR}"
info "Successfully Uninstalled Hauler" && echo
exit 0
fi
# set version environment variable
if [ -z "${HAULER_VERSION}" ]; then
# attempt to retrieve the latest version from GitHub
HAULER_VERSION=$(curl -sI https://github.com/hauler-dev/hauler/releases/latest | grep -i location | sed -e 's#.*tag/v##' -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*$//g')
# exit if the version could not be detected
if [ -z "${HAULER_VERSION}" ]; then
fatal "HAULER_VERSION is unable to be detected and/or retrieved from GitHub. Please set: HAULER_VERSION"
fi
fi
# detect the operating system
platform=$(uname -s | tr '[:upper:]' '[:lower:]')
case $platform in
PLATFORM=$(uname -s | tr '[:upper:]' '[:lower:]')
case $PLATFORM in
linux)
platform="linux"
PLATFORM="linux"
;;
darwin)
platform="darwin"
PLATFORM="darwin"
;;
*)
fatal "Unsupported Platform: $platform"
fatal "Unsupported Platform: ${PLATFORM}"
;;
esac
# detect the architecture
arch=$(uname -m)
case $arch in
ARCH=$(uname -m)
case $ARCH in
x86_64 | x86-32 | x64 | x32 | amd64)
arch="amd64"
ARCH="amd64"
;;
aarch64 | arm64)
arch="arm64"
ARCH="arm64"
;;
*)
fatal "Unsupported Architecture: $arch"
fatal "Unsupported Architecture: ${ARCH}"
;;
esac
# set hauler directory from argument or environment variable
HAULER_DIR=${HAULER_DIR:-$HOME/.hauler}
# start hauler installation
info "Starting Installation..."
# display the version, platform, and architecture
verbose "- Version: v$version"
verbose "- Platform: $platform"
verbose "- Architecture: $arch"
verbose "- Version: v${HAULER_VERSION}"
verbose "- Platform: ${PLATFORM}"
verbose "- Architecture: ${ARCH}"
verbose "- Install Directory: ${HAULER_INSTALL_DIR}"
verbose "- Hauler Directory: ${HAULER_DIR}"
# ensure hauler directory exists and/or create it
if [ ! -d "${HAULER_DIR}" ]; then
mkdir -p "${HAULER_DIR}" || fatal "Failed to Create Hauler Directory: ${HAULER_DIR}"
fi
# ensure hauler directory is writable (by user or root privileges)
chmod -R 777 "${HAULER_DIR}" || fatal "Failed to Update Permissions of Hauler Directory: ${HAULER_DIR}"
# change to hauler directory
cd "${HAULER_DIR}" || fatal "Failed to Change Directory to Hauler Directory: ${HAULER_DIR}"
# start hauler artifacts download
info "Starting Download..."
# download the checksum file
if ! curl -sOL "https://github.com/rancherfederal/hauler/releases/download/v${version}/hauler_${version}_checksums.txt"; then
fatal "Failed to Download: hauler_${version}_checksums.txt"
if ! curl -sfOL "https://github.com/hauler-dev/hauler/releases/download/v${HAULER_VERSION}/hauler_${HAULER_VERSION}_checksums.txt"; then
fatal "Failed to Download: hauler_${HAULER_VERSION}_checksums.txt"
fi
# download the archive file
if ! curl -sOL "https://github.com/rancherfederal/hauler/releases/download/v${version}/hauler_${version}_${platform}_${arch}.tar.gz"; then
fatal "Failed to Download: hauler_${version}_${platform}_${arch}.tar.gz"
if ! curl -sfOL "https://github.com/hauler-dev/hauler/releases/download/v${HAULER_VERSION}/hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"; then
fatal "Failed to Download: hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"
fi
# start hauler checksum verification
info "Starting Checksum Verification..."
# Verify the Hauler checksum
expected_checksum=$(awk -v version="$version" -v platform="$platform" -v arch="$arch" '$2 == "hauler_"version"_"platform"_"arch".tar.gz" {print $1}' "hauler_${version}_checksums.txt")
determined_checksum=$(openssl dgst -sha256 "hauler_${version}_${platform}_${arch}.tar.gz" | awk '{print $2}')
# verify the Hauler checksum
EXPECTED_CHECKSUM=$(awk -v HAULER_VERSION="${HAULER_VERSION}" -v PLATFORM="${PLATFORM}" -v ARCH="${ARCH}" '$2 == "hauler_"HAULER_VERSION"_"PLATFORM"_"ARCH".tar.gz" {print $1}' "hauler_${HAULER_VERSION}_checksums.txt")
DETERMINED_CHECKSUM=$(openssl dgst -sha256 "hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz" | awk '{print $2}')
if [ -z "$expected_checksum" ]; then
fatal "Failed to Locate Checksum: hauler_${version}_${platform}_${arch}.tar.gz"
elif [ "$determined_checksum" = "$expected_checksum" ]; then
verbose "- Expected Checksum: $expected_checksum"
verbose "- Determined Checksum: $determined_checksum"
verbose "- Successfully Verified Checksum: hauler_${version}_${platform}_${arch}.tar.gz"
if [ -z "${EXPECTED_CHECKSUM}" ]; then
fatal "Failed to Locate Checksum: hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"
elif [ "${DETERMINED_CHECKSUM}" = "${EXPECTED_CHECKSUM}" ]; then
verbose "- Expected Checksum: ${EXPECTED_CHECKSUM}"
verbose "- Determined Checksum: ${DETERMINED_CHECKSUM}"
verbose "- Successfully Verified Checksum: hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"
else
verbose "- Expected: $expected_checksum"
verbose "- Determined: $determined_checksum"
fatal "Failed Checksum Verification: hauler_${version}_${platform}_${arch}.tar.gz"
verbose "- Expected: ${EXPECTED_CHECKSUM}"
verbose "- Determined: ${DETERMINED_CHECKSUM}"
fatal "Failed Checksum Verification: hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"
fi
# uncompress the archive
tar -xzf "hauler_${version}_${platform}_${arch}.tar.gz" || fatal "Failed to Extract: hauler_${version}_${platform}_${arch}.tar.gz"
# uncompress the hauler archive
tar -xzf "hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz" || fatal "Failed to Extract: hauler_${HAULER_VERSION}_${PLATFORM}_${ARCH}.tar.gz"
# install the binary
case "$platform" in
linux)
install hauler /usr/local/bin || fatal "Failed to Install Hauler to /usr/local/bin"
;;
darwin)
install hauler /usr/local/bin || fatal "Failed to Install Hauler to /usr/local/bin"
;;
*)
fatal "Unsupported Platform or Architecture: $platform/$arch"
;;
esac
# install the hauler binary
install -m 755 hauler "${HAULER_INSTALL_DIR}" || fatal "Failed to Install Hauler: ${HAULER_INSTALL_DIR}"
# clean up checksum(s)
rm -rf "hauler_${version}_checksums.txt" || warn "Failed to Remove: hauler_${version}_checksums.txt"
# clean up archive file(s)
rm -rf "hauler_${version}_${platform}_${arch}.tar.gz" || warn "Failed to Remove: hauler_${version}_${platform}_${arch}.tar.gz"
# clean up other files
rm -rf LICENSE README.md hauler
# add hauler to the path
if [[ ":$PATH:" != *":${HAULER_INSTALL_DIR}:"* ]]; then
if [ -f "$HOME/.bashrc" ]; then
echo "export PATH=\$PATH:${HAULER_INSTALL_DIR}" >> "$HOME/.bashrc"
source "$HOME/.bashrc"
elif [ -f "$HOME/.bash_profile" ]; then
echo "export PATH=\$PATH:${HAULER_INSTALL_DIR}" >> "$HOME/.bash_profile"
source "$HOME/.bash_profile"
elif [ -f "$HOME/.zshrc" ]; then
echo "export PATH=\$PATH:${HAULER_INSTALL_DIR}" >> "$HOME/.zshrc"
source "$HOME/.zshrc"
elif [ -f "$HOME/.profile" ]; then
echo "export PATH=\$PATH:${HAULER_INSTALL_DIR}" >> "$HOME/.profile"
source "$HOME/.profile"
else
warn "Failed to add ${HAULER_INSTALL_DIR} to PATH: Unsupported Shell"
fi
fi
# display success message
info "Successfully Installed at /usr/local/bin/hauler"
info "Successfully Installed Hauler at ${HAULER_INSTALL_DIR}/hauler"
# display availability message
verbose "- Hauler v${version} is now available for use!"
info "Hauler v${HAULER_VERSION} is now available for use!"
# display hauler docs message
verbose "- Documentation: https://hauler.dev" && echo

61
internal/flags/add.go Normal file
View File

@@ -0,0 +1,61 @@
package flags
import (
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
)
type AddImageOpts struct {
*StoreRootOpts
Name string
Key string
CertOidcIssuer string
CertOidcIssuerRegexp string
CertIdentity string
CertIdentityRegexp string
CertGithubWorkflowRepository string
Tlog bool
Platform string
}
func (o *AddImageOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Location of public key to use for signature verification")
f.StringVar(&o.CertIdentity, "certificate-identity", "", "(Optional) Cosign certificate-identity (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertIdentityRegexp, "certificate-identity-regexp", "", "(Optional) Cosign certificate-identity-regexp (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertOidcIssuer, "certificate-oidc-issuer", "", "(Optional) Cosign option to validate oidc issuer")
f.StringVar(&o.CertOidcIssuerRegexp, "certificate-oidc-issuer-regexp", "", "(Optional) Cosign option to validate oidc issuer with regex")
f.StringVar(&o.CertGithubWorkflowRepository, "certificate-github-workflow-repository", "", "(Optional) Cosign certificate-github-workflow-repository option")
f.BoolVarP(&o.Tlog, "use-tlog-verify", "v", false, "(Optional) Allow transparency log verification. (defaults to false)")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform of the image... i.e. linux/amd64 (defaults to all)")
}
type AddFileOpts struct {
*StoreRootOpts
Name string
}
func (o *AddFileOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Name, "name", "n", "", "(Optional) Rewrite the name of the file")
}
type AddChartOpts struct {
*StoreRootOpts
ChartOpts *action.ChartPathOptions
}
func (o *AddChartOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVar(&o.ChartOpts.RepoURL, "repo", "", "Location of the chart (https:// | http:// | oci://)")
f.StringVar(&o.ChartOpts.Version, "version", "", "(Optional) Specify the version of the chart (v1.0.0 | 2.0.0 | ^2.0.0)")
f.BoolVar(&o.ChartOpts.Verify, "verify", false, "(Optional) Verify the chart before fetching it")
f.StringVar(&o.ChartOpts.Username, "username", "", "(Optional) Username to use for authentication")
f.StringVar(&o.ChartOpts.Password, "password", "", "(Optional) Password to use for authentication")
f.StringVar(&o.ChartOpts.CertFile, "cert-file", "", "(Optional) Location of the TLS Certificate to use for client authentication")
f.StringVar(&o.ChartOpts.KeyFile, "key-file", "", "(Optional) Location of the TLS Key to use for client authentication")
f.BoolVar(&o.ChartOpts.InsecureSkipTLSverify, "insecure-skip-tls-verify", false, "(Optional) Skip TLS certificate verification")
f.StringVar(&o.ChartOpts.CaFile, "ca-file", "", "(Optional) Location of CA Bundle to enable certification verification")
}
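
For orientation, here is a minimal sketch of how an option struct like AddImageOpts is typically wired into a cobra command; the command name, constructor, and run logic below are assumptions, only the flags package API comes from this file.

package cli

import (
    "github.com/spf13/cobra"

    "hauler.dev/go/hauler/internal/flags"
)

// newAddImageCmd is a hypothetical constructor: it registers the key-based
// and keyless verification flags defined by AddImageOpts on a cobra command.
func newAddImageCmd(ro *flags.StoreRootOpts) *cobra.Command {
    o := &flags.AddImageOpts{StoreRootOpts: ro}

    cmd := &cobra.Command{
        Use:   "image IMAGE",
        Short: "Add an image to the content store",
        Args:  cobra.ExactArgs(1),
        RunE: func(cmd *cobra.Command, args []string) error {
            o.Name = args[0]
            // o.Key, the certificate-* options, o.Tlog, and o.Platform would
            // drive signature verification and platform selection here.
            return nil
        },
    }
    o.AddFlags(cmd)
    return cmd
}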

17
internal/flags/cli.go Normal file
View File

@@ -0,0 +1,17 @@
package flags
import "github.com/spf13/cobra"
type CliRootOpts struct {
LogLevel string
HaulerDir string
IgnoreErrors bool
}
func AddRootFlags(cmd *cobra.Command, ro *CliRootOpts) {
pf := cmd.PersistentFlags()
pf.StringVarP(&ro.LogLevel, "log-level", "l", "info", "Set the logging level (i.e. info, debug, warn)")
pf.StringVarP(&ro.HaulerDir, "haulerdir", "d", "", "Set the location of the hauler directory (default $HOME/.hauler)")
pf.BoolVar(&ro.IgnoreErrors, "ignore-errors", false, "Ignore/Bypass errors (i.e. warn on error) (defaults false)")
}

23
internal/flags/copy.go Normal file
View File

@@ -0,0 +1,23 @@
package flags
import "github.com/spf13/cobra"
type CopyOpts struct {
*StoreRootOpts
Username string
Password string
Insecure bool
PlainHTTP bool
Only string
}
func (o *CopyOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Username, "username", "u", "", "(Optional) Username to use for authentication")
f.StringVarP(&o.Password, "password", "p", "", "(Optional) Password to use for authentication")
f.BoolVar(&o.Insecure, "insecure", false, "(Optional) Allow insecure connections")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "(Optional) Allow plain HTTP connections")
f.StringVarP(&o.Only, "only", "o", "", "(Optional) Only copy specific 'image' items; this flag is comma-delimited. ex: --only=sig,att,sbom")
}

14
internal/flags/extract.go Normal file
View File

@@ -0,0 +1,14 @@
package flags
import "github.com/spf13/cobra"
type ExtractOpts struct {
*StoreRootOpts
DestinationDir string
}
func (o *ExtractOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.DestinationDir, "output", "o", "", "(Optional) Set the directory to output (defaults to current directory)")
}

20
internal/flags/info.go Normal file
View File

@@ -0,0 +1,20 @@
package flags
import "github.com/spf13/cobra"
type InfoOpts struct {
*StoreRootOpts
OutputFormat string
TypeFilter string
SizeUnit string
ListRepos bool
}
func (o *InfoOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.OutputFormat, "output", "o", "table", "(Optional) Specify the output format (table | json)")
f.StringVarP(&o.TypeFilter, "type", "t", "all", "(Optional) Filter on content type (image | chart | file | sigs | atts | sbom)")
f.BoolVar(&o.ListRepos, "list-repos", false, "(Optional) List all repository names")
}

21
internal/flags/load.go Normal file
View File

@@ -0,0 +1,21 @@
package flags
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type LoadOpts struct {
*StoreRootOpts
FileName []string
TempOverride string
}
func (o *LoadOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
// On Unix systems, the default is $TMPDIR if non-empty, else /tmp
// On Windows, the default is GetTempPath, returning the first value from %TMP%, %TEMP%, %USERPROFILE%, or Windows directory
f.StringSliceVarP(&o.FileName, "filename", "f", []string{consts.DefaultHaulerArchiveName}, "(Optional) Specify the name of inputted haul(s)")
f.StringVarP(&o.TempOverride, "tempdir", "t", "", "(Optional) Override the default temporary directory determined by the OS")
}

19
internal/flags/save.go Normal file
View File

@@ -0,0 +1,19 @@
package flags
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type SaveOpts struct {
*StoreRootOpts
FileName string
Platform string
}
func (o *SaveOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.FileName, "filename", "f", consts.DefaultHaulerArchiveName, "(Optional) Specify the name of outputted haul")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform for runtime imports... i.e. linux/amd64 (unspecified implies all)")
}

56
internal/flags/serve.go Normal file
View File

@@ -0,0 +1,56 @@
package flags
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type ServeRegistryOpts struct {
*StoreRootOpts
Port int
RootDir string
ConfigFile string
ReadOnly bool
TLSCert string
TLSKey string
}
func (o *ServeRegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", consts.DefaultRegistryPort, "(Optional) Set the port to use for incoming connections")
f.StringVar(&o.RootDir, "directory", consts.DefaultRegistryRootDir, "(Optional) Directory to use for backend. Defaults to $PWD/registry")
f.StringVarP(&o.ConfigFile, "config", "c", "", "(Optional) Location of config file (overrides all flags)")
f.BoolVar(&o.ReadOnly, "readonly", true, "(Optional) Run the registry as readonly")
f.StringVar(&o.TLSCert, "tls-cert", "", "(Optional) Location of the TLS Certificate to use for server authentication")
f.StringVar(&o.TLSKey, "tls-key", "", "(Optional) Location of the TLS Key to use for server authentication")
cmd.MarkFlagsRequiredTogether("tls-cert", "tls-key")
}
type ServeFilesOpts struct {
*StoreRootOpts
Port int
Timeout int
RootDir string
TLSCert string
TLSKey string
}
func (o *ServeFilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", consts.DefaultFileserverPort, "(Optional) Set the port to use for incoming connections")
f.IntVarP(&o.Timeout, "timeout", "t", consts.DefaultFileserverTimeout, "(Optional) Timeout duration for HTTP Requests in seconds for both reads/writes")
f.StringVar(&o.RootDir, "directory", consts.DefaultFileserverRootDir, "(Optional) Directory to use for backend. Defaults to $PWD/fileserver")
f.StringVar(&o.TLSCert, "tls-cert", "", "(Optional) Location of the TLS Certificate to use for server authentication")
f.StringVar(&o.TLSKey, "tls-key", "", "(Optional) Location of the TLS Key to use for server authentication")
cmd.MarkFlagsRequiredTogether("tls-cert", "tls-key")
}

61
internal/flags/store.go Normal file
View File

@@ -0,0 +1,61 @@
package flags
import (
"context"
"errors"
"os"
"path/filepath"
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
)
type StoreRootOpts struct {
StoreDir string
Retries int
}
func (o *StoreRootOpts) AddFlags(cmd *cobra.Command) {
pf := cmd.PersistentFlags()
pf.StringVarP(&o.StoreDir, "store", "s", "", "Set the directory to use for the content store")
pf.IntVarP(&o.Retries, "retries", "r", consts.DefaultRetries, "Set the number of retries for operations")
}
func (o *StoreRootOpts) Store(ctx context.Context) (*store.Layout, error) {
l := log.FromContext(ctx)
storeDir := o.StoreDir
if storeDir == "" {
storeDir = os.Getenv(consts.HaulerStoreDir)
}
if storeDir == "" {
storeDir = consts.DefaultStoreName
}
abs, err := filepath.Abs(storeDir)
if err != nil {
return nil, err
}
o.StoreDir = abs
l.Debugf("using store at [%s]", abs)
if _, err := os.Stat(abs); errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(abs, os.ModePerm); err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
s, err := store.NewLayout(abs)
if err != nil {
return nil, err
}
return s, nil
}
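
A short sketch of the intended call pattern from a command's run function follows; the helper name is hypothetical, but the lookup order in the comment mirrors the Store method above.

package cli

import (
    "context"

    "hauler.dev/go/hauler/internal/flags"
    "hauler.dev/go/hauler/pkg/store"
)

// openStore resolves the content store using the precedence implemented by
// StoreRootOpts.Store: the --store flag value, then the directory named by
// the consts.HaulerStoreDir environment variable, then the default store name.
func openStore(ctx context.Context, o *flags.StoreRootOpts) (*store.Layout, error) {
    return o.Store(ctx)
}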

41
internal/flags/sync.go Normal file
View File

@@ -0,0 +1,41 @@
package flags
import (
"github.com/spf13/cobra"
"hauler.dev/go/hauler/pkg/consts"
)
type SyncOpts struct {
*StoreRootOpts
FileName []string
Key string
CertOidcIssuer string
CertOidcIssuerRegexp string
CertIdentity string
CertIdentityRegexp string
CertGithubWorkflowRepository string
Products []string
Platform string
Registry string
ProductRegistry string
TempOverride string
Tlog bool
}
func (o *SyncOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringSliceVarP(&o.FileName, "filename", "f", []string{consts.DefaultHaulerManifestName}, "Specify the name of manifest(s) to sync")
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Location of public key to use for signature verification")
f.StringVar(&o.CertIdentity, "certificate-identity", "", "(Optional) Cosign certificate-identity (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertIdentityRegexp, "certificate-identity-regexp", "", "(Optional) Cosign certificate-identity-regexp (either --certificate-identity or --certificate-identity-regexp required for keyless verification)")
f.StringVar(&o.CertOidcIssuer, "certificate-oidc-issuer", "", "(Optional) Cosign option to validate oidc issuer")
f.StringVar(&o.CertOidcIssuerRegexp, "certificate-oidc-issuer-regexp", "", "(Optional) Cosign option to validate oidc issuer with regex")
f.StringVar(&o.CertGithubWorkflowRepository, "certificate-github-workflow-repository", "", "(Optional) Cosign certificate-github-workflow-repository option")
f.StringSliceVar(&o.Products, "products", []string{}, "(Optional) Specify the product name to fetch collections from the product registry i.e. rancher=v2.10.1,rke2=v1.31.5+rke2r1")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform of the image... i.e. linux/amd64 (defaults to all)")
f.StringVarP(&o.Registry, "registry", "g", "", "(Optional) Specify the registry of the image for images that do not already define one")
f.StringVarP(&o.ProductRegistry, "product-registry", "c", "", "(Optional) Specify the product registry. Defaults to RGS Carbide Registry (rgcrprod.azurecr.us)")
f.StringVarP(&o.TempOverride, "tempdir", "t", "", "(Optional) Override the default temporary directory determined by the OS")
f.BoolVarP(&o.Tlog, "use-tlog-verify", "v", false, "(Optional) Allow transparency log verification. (defaults to false)")
}

12
internal/flags/version.go Normal file
View File

@@ -0,0 +1,12 @@
package flags
import "github.com/spf13/cobra"
type VersionOpts struct {
JSON bool
}
func (o *VersionOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.BoolVar(&o.JSON, "json", false, "Set the output format to JSON")
}

View File

@@ -2,7 +2,7 @@ package mapper
import (
"context"
"io/ioutil"
"io"
"os"
"path/filepath"
"strings"
@@ -15,7 +15,8 @@ import (
)
// NewMapperFileStore creates a new file store that uses mapper functions for each detected descriptor.
// This extends content.File, and differs in that it allows much more functionality into how each descriptor is written.
//
// This extends content.File, and differs in that it allows much more functionality into how each descriptor is written.
func NewMapperFileStore(root string, mapper map[string]Fn) *store {
fs := content.NewFile(root)
return &store{
@@ -58,7 +59,7 @@ func (s *pusher) Push(ctx context.Context, desc ocispec.Descriptor) (ccontent.Wr
// If no custom mapper found, fall back to content.File mapper
if _, ok := s.mapper[desc.MediaType]; !ok {
return content.NewIoContentWriter(ioutil.Discard, content.WithOutputHash(desc.Digest)), nil
return content.NewIoContentWriter(io.Discard, content.WithOutputHash(desc.Digest)), nil
}
filename, err := s.mapper[desc.MediaType](desc)

View File

@@ -6,7 +6,7 @@ import (
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"oras.land/oras-go/pkg/target"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/consts"
)
type Fn func(desc ocispec.Descriptor) (string, error)
@@ -36,7 +36,7 @@ func Images() map[string]Fn {
m := make(map[string]Fn)
manifestMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "manifest.json", nil
return consts.ImageManifestFile, nil
})
for _, l := range []string{consts.DockerManifestSchema2, consts.DockerManifestListSchema2, consts.OCIManifestSchema1} {
@@ -52,7 +52,7 @@ func Images() map[string]Fn {
}
configMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "config.json", nil
return consts.ImageConfigFile, nil
})
for _, l := range []string{consts.DockerConfigJSON} {

View File

@@ -9,32 +9,32 @@ import (
"github.com/gorilla/handlers"
"github.com/gorilla/mux"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/consts"
)
type FileConfig struct {
Root string
Host string
Port int
}
// NewFile returns a fileserver
// TODO: Better configs
func NewFile(ctx context.Context, cfg FileConfig) (Server, error) {
func NewFile(ctx context.Context, cfg flags.ServeFilesOpts) (Server, error) {
r := mux.NewRouter()
r.PathPrefix("/").Handler(handlers.LoggingHandler(os.Stdout, http.StripPrefix("/", http.FileServer(http.Dir(cfg.Root)))))
if cfg.Root == "" {
cfg.Root = "."
r.PathPrefix("/").Handler(handlers.LoggingHandler(os.Stdout, http.StripPrefix("/", http.FileServer(http.Dir(cfg.RootDir)))))
if cfg.RootDir == "" {
cfg.RootDir = "."
}
if cfg.Port == 0 {
cfg.Port = 8080
cfg.Port = consts.DefaultFileserverPort
}
if cfg.Timeout == 0 {
cfg.Timeout = consts.DefaultFileserverTimeout
}
srv := &http.Server{
Handler: r,
Addr: fmt.Sprintf(":%d", cfg.Port),
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
WriteTimeout: time.Duration(cfg.Timeout) * time.Second,
ReadTimeout: time.Duration(cfg.Timeout) * time.Second,
}
return srv, nil

View File

@@ -11,7 +11,6 @@ import (
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/registry"
"github.com/distribution/distribution/v3/registry/handlers"
"github.com/docker/go-metrics"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -22,14 +21,6 @@ func NewRegistry(ctx context.Context, cfg *configuration.Configuration) (*regist
return nil, err
}
if cfg.HTTP.Debug.Prometheus.Enabled {
path := cfg.HTTP.Debug.Prometheus.Path
if path == "" {
path = "/metrics"
}
http.Handle(path, metrics.Handler())
}
return r, nil
}
@@ -45,9 +36,9 @@ func NewTempRegistry(ctx context.Context, root string) *tmpRegistryServer {
"filesystem": configuration.Parameters{"rootdirectory": root},
},
}
// Add validation configuration
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
cfg.Log.Level = "error"
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},

View File

@@ -2,4 +2,5 @@ package server
type Server interface {
ListenAndServe() error
ListenAndServeTLS(string, string) error
}

View File

@@ -28,10 +28,9 @@ import (
"time"
"github.com/common-nighthawk/go-figure"
"hauler.dev/go/hauler/pkg/consts"
)
const unknown = "unknown"
// Base version information.
//
// This is the fallback data used when version information from git is not
@@ -41,19 +40,19 @@ var (
// branch should be tagged using the correct versioning strategy.
gitVersion = "devel"
// SHA1 from git, output of $(git rev-parse HEAD)
gitCommit = unknown
gitCommit = consts.Unknown
// State of git tree, either "clean" or "dirty"
gitTreeState = unknown
gitTreeState = consts.Unknown
// Build date in ISO8601 format, output of $(date -u +'%Y-%m-%dT%H:%M:%SZ')
buildDate = unknown
buildDate = consts.Unknown
// flag to print the ascii name banner
asciiName = "true"
// goVersion is the used golang version.
goVersion = unknown
goVersion = consts.Unknown
// compiler is the used golang compiler.
compiler = unknown
compiler = consts.Unknown
// platform is the used os/arch identifier.
platform = unknown
platform = consts.Unknown
once sync.Once
info = Info{}
@@ -84,7 +83,7 @@ func getBuildInfo() *debug.BuildInfo {
func getGitVersion(bi *debug.BuildInfo) string {
if bi == nil {
return unknown
return consts.Unknown
}
// TODO: remove this when the issue https://github.com/golang/go/issues/29228 is fixed
@@ -107,28 +106,28 @@ func getDirty(bi *debug.BuildInfo) string {
if modified == "false" {
return "clean"
}
return unknown
return consts.Unknown
}
func getBuildDate(bi *debug.BuildInfo) string {
buildTime := getKey(bi, "vcs.time")
t, err := time.Parse("2006-01-02T15:04:05Z", buildTime)
if err != nil {
return unknown
return consts.Unknown
}
return t.Format("2006-01-02T15:04:05")
}
func getKey(bi *debug.BuildInfo, key string) string {
if bi == nil {
return unknown
return consts.Unknown
}
for _, iter := range bi.Settings {
if iter.Key == key {
return iter.Value
}
}
return unknown
return consts.Unknown
}
// GetVersionInfo represents known information on how this binary was built.
@@ -136,27 +135,27 @@ func GetVersionInfo() Info {
once.Do(func() {
buildInfo := getBuildInfo()
gitVersion = getGitVersion(buildInfo)
if gitCommit == unknown {
if gitCommit == consts.Unknown {
gitCommit = getCommit(buildInfo)
}
if gitTreeState == unknown {
if gitTreeState == consts.Unknown {
gitTreeState = getDirty(buildInfo)
}
if buildDate == unknown {
if buildDate == consts.Unknown {
buildDate = getBuildDate(buildInfo)
}
if goVersion == unknown {
if goVersion == consts.Unknown {
goVersion = runtime.Version()
}
if compiler == unknown {
if compiler == consts.Unknown {
compiler = runtime.Compiler
}
if platform == unknown {
if platform == consts.Unknown {
platform = fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)
}
@@ -226,4 +225,4 @@ func (i *Info) CheckFontName(fontName string) bool {
fmt.Fprintln(os.Stderr, "font not valid, using default")
return false
}
}

View File

@@ -0,0 +1,121 @@
package v1alpha1
import (
"fmt"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)
// converts v1alpha1.Files -> v1.Files
func ConvertFiles(in *v1alpha1.Files, out *v1.Files) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Files = make([]v1.File, len(in.Spec.Files))
for i := range in.Spec.Files {
out.Spec.Files[i].Name = in.Spec.Files[i].Name
out.Spec.Files[i].Path = in.Spec.Files[i].Path
}
return nil
}
// converts v1alpha1.Images -> v1.Images
func ConvertImages(in *v1alpha1.Images, out *v1.Images) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Images = make([]v1.Image, len(in.Spec.Images))
for i := range in.Spec.Images {
out.Spec.Images[i].Name = in.Spec.Images[i].Name
out.Spec.Images[i].Platform = in.Spec.Images[i].Platform
out.Spec.Images[i].Key = in.Spec.Images[i].Key
}
return nil
}
// converts v1alpha1.Charts -> v1.Charts
func ConvertCharts(in *v1alpha1.Charts, out *v1.Charts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Charts = make([]v1.Chart, len(in.Spec.Charts))
for i := range in.Spec.Charts {
out.Spec.Charts[i].Name = in.Spec.Charts[i].Name
out.Spec.Charts[i].RepoURL = in.Spec.Charts[i].RepoURL
out.Spec.Charts[i].Version = in.Spec.Charts[i].Version
}
return nil
}
// converts v1alpha1.ThickCharts -> v1.ThickCharts
func ConvertThickCharts(in *v1alpha1.ThickCharts, out *v1.ThickCharts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.Charts = make([]v1.ThickChart, len(in.Spec.Charts))
for i := range in.Spec.Charts {
out.Spec.Charts[i].Chart.Name = in.Spec.Charts[i].Chart.Name
out.Spec.Charts[i].Chart.RepoURL = in.Spec.Charts[i].Chart.RepoURL
out.Spec.Charts[i].Chart.Version = in.Spec.Charts[i].Chart.Version
}
return nil
}
// converts v1alpha1.ImageTxts -> v1.ImageTxts
func ConvertImageTxts(in *v1alpha1.ImageTxts, out *v1.ImageTxts) error {
out.TypeMeta = in.TypeMeta
out.ObjectMeta = in.ObjectMeta
out.Spec.ImageTxts = make([]v1.ImageTxt, len(in.Spec.ImageTxts))
for i := range in.Spec.ImageTxts {
out.Spec.ImageTxts[i].Ref = in.Spec.ImageTxts[i].Ref
out.Spec.ImageTxts[i].Sources.Include = append(
out.Spec.ImageTxts[i].Sources.Include,
in.Spec.ImageTxts[i].Sources.Include...,
)
out.Spec.ImageTxts[i].Sources.Exclude = append(
out.Spec.ImageTxts[i].Sources.Exclude,
in.Spec.ImageTxts[i].Sources.Exclude...,
)
}
return nil
}
// convert v1alpha1 object to v1 object
func ConvertObject(in interface{}) (interface{}, error) {
switch src := in.(type) {
case *v1alpha1.Files:
dst := &v1.Files{}
if err := ConvertFiles(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.Images:
dst := &v1.Images{}
if err := ConvertImages(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.Charts:
dst := &v1.Charts{}
if err := ConvertCharts(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.ThickCharts:
dst := &v1.ThickCharts{}
if err := ConvertThickCharts(src, dst); err != nil {
return nil, err
}
return dst, nil
case *v1alpha1.ImageTxts:
dst := &v1.ImageTxts{}
if err := ConvertImageTxts(src, dst); err != nil {
return nil, err
}
return dst, nil
}
return nil, fmt.Errorf("unsupported object type [%T]", in)
}
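
A hedged usage sketch: callers convert an object and then type-assert the result. The converter's import path below is an assumption (the package above is declared as v1alpha1, but its directory is not shown in this diff).

package content

import (
    "fmt"

    v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
    v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
    convert "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1/convert" // assumed location of the converter package above
)

// toV1Images illustrates the call pattern for ConvertObject: convert, then
// assert the concrete v1 type expected back.
func toV1Images(in *v1alpha1.Images) (*v1.Images, error) {
    out, err := convert.ConvertObject(in)
    if err != nil {
        return nil, err
    }
    images, ok := out.(*v1.Images)
    if !ok {
        return nil, fmt.Errorf("unexpected converted type [%T]", out)
    }
    return images, nil
}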

View File

@@ -0,0 +1,42 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Charts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ChartSpec `json:"spec,omitempty"`
}
type ChartSpec struct {
Charts []Chart `json:"charts,omitempty"`
}
type Chart struct {
Name string `json:"name,omitempty"`
RepoURL string `json:"repoURL,omitempty"`
Version string `json:"version,omitempty"`
}
type ThickCharts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ThickChartSpec `json:"spec,omitempty"`
}
type ThickChartSpec struct {
Charts []ThickChart `json:"charts,omitempty"`
}
type ThickChart struct {
Chart `json:",inline,omitempty"`
ExtraImages []ChartImage `json:"extraImages,omitempty"`
}
type ChartImage struct {
Reference string `json:"ref"`
}

View File

@@ -0,0 +1,17 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Driver struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec DriverSpec `json:"spec"`
}
type DriverSpec struct {
Type string `json:"type"`
Version string `json:"version"`
}

View File

@@ -0,0 +1,25 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Files struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec FileSpec `json:"spec,omitempty"`
}
type FileSpec struct {
Files []File `json:"files,omitempty"`
}
type File struct {
// Path is the path to the file contents, can be a local or remote path
Path string `json:"path"`
// Name is an optional field specifying the name of the file when specified,
// it will override any dynamic name discovery from Path
Name string `json:"name,omitempty"`
}

View File

@@ -0,0 +1,12 @@
package v1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"hauler.dev/go/hauler/pkg/consts"
)
var (
ContentGroupVersion = schema.GroupVersion{Group: consts.ContentGroup, Version: "v1"}
CollectionGroupVersion = schema.GroupVersion{Group: consts.CollectionGroup, Version: "v1"}
)

View File

@@ -0,0 +1,40 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Images struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ImageSpec `json:"spec,omitempty"`
}
type ImageSpec struct {
Images []Image `json:"images,omitempty"`
}
type Image struct {
// Name is the full location for the image, can be referenced by tags or digests
Name string `json:"name"`
// Path is the path to the cosign public key used for verifying image signatures
//Key string `json:"key,omitempty"`
Key string `json:"key"`
// Tlog enables verification against the public transparency log (disabled by default)
//Tlog string `json:"use-tlog-verify,omitempty"`
Tlog bool `json:"use-tlog-verify"`
// cosign keyless validation options
CertIdentity string `json:"certificate-identity"`
CertIdentityRegexp string `json:"certificate-identity-regexp"`
CertOidcIssuer string `json:"certificate-oidc-issuer"`
CertOidcIssuerRegexp string `json:"certificate-oidc-issuer-regexp"`
CertGithubWorkflowRepository string `json:"certificate-github-workflow-repository"`
// Platform of the image to be pulled. If not specified, all platforms will be pulled.
//Platform string `json:"key,omitempty"`
Platform string `json:"platform"`
}
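
To make the field mapping above concrete, a small sketch populating an Image entry with placeholder values (the reference, identity, and issuer below are illustrative only):

package main

import (
    "fmt"

    v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
)

func main() {
    // Placeholder values for a keyless-verified, single-platform entry.
    img := v1.Image{
        Name:           "ghcr.io/example/app:v1.2.3",
        CertIdentity:   "https://github.com/example/app/.github/workflows/release.yaml@refs/tags/v1.2.3",
        CertOidcIssuer: "https://token.actions.githubusercontent.com",
        Tlog:           false, // transparency log verification stays opt-in
        Platform:       "linux/amd64",
    }
    fmt.Println(img.Name)
}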

View File

@@ -0,0 +1,26 @@
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type ImageTxts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ImageTxtsSpec `json:"spec,omitempty"`
}
type ImageTxtsSpec struct {
ImageTxts []ImageTxt `json:"imageTxts,omitempty"`
}
type ImageTxt struct {
Ref string `json:"ref,omitempty"`
Sources ImageTxtSources `json:"sources,omitempty"`
}
type ImageTxtSources struct {
Include []string `json:"include,omitempty"`
Exclude []string `json:"exclude,omitempty"`
}

View File

@@ -4,11 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ChartsContentKind = "Charts"
ChartsCollectionKind = "ThickCharts"
)
type Charts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -4,10 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
DriverContentKind = "Driver"
)
type Driver struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -4,8 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const FilesContentKind = "Files"
type Files struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -2,17 +2,11 @@ package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
Version = "v1alpha1"
ContentGroup = "content.hauler.cattle.io"
CollectionGroup = "collection.hauler.cattle.io"
"hauler.dev/go/hauler/pkg/consts"
)
var (
ContentGroupVersion = schema.GroupVersion{Group: ContentGroup, Version: Version}
// SchemeBuilder = &scheme.Builder{GroupVersion: ContentGroupVersion}
CollectionGroupVersion = schema.GroupVersion{Group: CollectionGroup, Version: Version}
ContentGroupVersion = schema.GroupVersion{Group: consts.ContentGroup, Version: "v1alpha1"}
CollectionGroupVersion = schema.GroupVersion{Group: consts.CollectionGroup, Version: "v1alpha1"}
)

View File

@@ -4,8 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const ImagesContentKind = "Images"
type Images struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -20,11 +18,22 @@ type ImageSpec struct {
type Image struct {
// Name is the full location for the image, can be referenced by tags or digests
Name string `json:"name"`
// Path is the path to the cosign public key used for verifying image signatures
//Key string `json:"key,omitempty"`
Key string `json:"key"`
// Tlog enables verification against the public transparency log (disabled by default)
//Tlog string `json:"use-tlog-verify,omitempty"`
Tlog bool `json:"use-tlog-verify"`
// cosign keyless validation options
CertIdentity string `json:"certificate-identity"`
CertIdentityRegexp string `json:"certificate-identity-regexp"`
CertOidcIssuer string `json:"certificate-oidc-issuer"`
CertOidcIssuerRegexp string `json:"certificate-oidc-issuer-regexp"`
CertGithubWorkflowRepository string `json:"certificate-github-workflow-repository"`
// Platform of the image to be pulled. If not specified, all platforms will be pulled.
//Platform string `json:"key,omitempty"`
Platform string `json:"platform"`

View File

@@ -4,10 +4,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ImageTxtsContentKind = "ImageTxts"
)
type ImageTxts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

View File

@@ -1,17 +0,0 @@
package v1alpha1
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
const K3sCollectionKind = "K3s"
type K3s struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec K3sSpec `json:"spec,omitempty"`
}
type K3sSpec struct {
Version string `json:"version"`
Arch string `json:"arch"`
}

104
pkg/archives/archiver.go Normal file
View File

@@ -0,0 +1,104 @@
package archives
import (
"context"
"fmt"
"os"
"path/filepath"
"github.com/mholt/archives"
"hauler.dev/go/hauler/pkg/log"
)
// maps to handle compression types
var CompressionMap = map[string]archives.Compression{
"gz": archives.Gz{},
"bz2": archives.Bz2{},
"xz": archives.Xz{},
"zst": archives.Zstd{},
"lz4": archives.Lz4{},
"br": archives.Brotli{},
}
// maps to handle archival types
var ArchivalMap = map[string]archives.Archival{
"tar": archives.Tar{},
"zip": archives.Zip{},
}
// check if a path exists
func isExist(path string) bool {
_, statErr := os.Stat(path)
return !os.IsNotExist(statErr)
}
// archives the files in a directory
// dir: the directory to Archive
// outfile: the output file
// compression: the compression to use (gzip, bzip2, etc.)
// archival: the archival to use (tar, zip, etc.)
func Archive(ctx context.Context, dir, outfile string, compression archives.Compression, archival archives.Archival) error {
l := log.FromContext(ctx)
l.Debugf("starting the archival process for [%s]", dir)
// remove outfile
l.Debugf("removing existing output file: [%s]", outfile)
if err := os.RemoveAll(outfile); err != nil {
errMsg := fmt.Errorf("failed to remove existing output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
if !isExist(dir) {
errMsg := fmt.Errorf("directory [%s] does not exist, cannot proceed with archival", dir)
l.Debugf(errMsg.Error())
return errMsg
}
// map files on disk to their paths in the archive
l.Debugf("mapping files in directory [%s]", dir)
archiveDirName := filepath.Base(filepath.Clean(dir))
if dir == "." {
archiveDirName = ""
}
files, err := archives.FilesFromDisk(context.Background(), nil, map[string]string{
dir: archiveDirName,
})
if err != nil {
errMsg := fmt.Errorf("error mapping files from directory [%s]: %w", dir, err)
l.Debugf(errMsg.Error())
return errMsg
}
l.Debugf("successfully mapped files for directory [%s]", dir)
// create the output file we'll write to
l.Debugf("creating output file [%s]", outfile)
outf, err := os.Create(outfile)
if err != nil {
errMsg := fmt.Errorf("error creating output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
defer func() {
l.Debugf("closing output file [%s]", outfile)
outf.Close()
}()
// define the archive format
l.Debugf("defining the archive format: [%T]/[%T]", archival, compression)
format := archives.CompressedArchive{
Compression: compression,
Archival: archival,
}
// create the archive
l.Debugf("starting archive for [%s]", outfile)
err = format.Archive(context.Background(), outf, files)
if err != nil {
errMsg := fmt.Errorf("error during archive creation for output file [%s]: %w", outfile, err)
l.Debugf(errMsg.Error())
return errMsg
}
l.Debugf("archive created successfully [%s]", outfile)
return nil
}
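
As a usage sketch (the file names and the package's import path under the module are assumptions), the lookup maps and Archive compose like this to produce, for example, a zstd-compressed tarball:

package main

import (
    "context"
    "log"

    "hauler.dev/go/hauler/pkg/archives"
)

func main() {
    ctx := context.Background()

    // Pick the handlers by their short names from the lookup maps above.
    archival, ok := archives.ArchivalMap["tar"]
    if !ok {
        log.Fatal("unknown archival format")
    }
    compression, ok := archives.CompressionMap["zst"]
    if !ok {
        log.Fatal("unknown compression format")
    }

    // Archive ./store into haul.tar.zst (both paths are placeholders).
    if err := archives.Archive(ctx, "./store", "haul.tar.zst", compression, archival); err != nil {
        log.Fatal(err)
    }
}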

158
pkg/archives/unarchiver.go Normal file
View File

@@ -0,0 +1,158 @@
package archives
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/mholt/archives"
"hauler.dev/go/hauler/pkg/log"
)
const (
dirPermissions = 0o700 // default directory permissions
filePermissions = 0o600 // default file permissions
)
// ensures the path is safely relative to the target directory
func securePath(basePath, relativePath string) (string, error) {
relativePath = filepath.Clean("/" + relativePath)
relativePath = strings.TrimPrefix(relativePath, string(os.PathSeparator))
dstPath := filepath.Join(basePath, relativePath)
if !strings.HasPrefix(filepath.Clean(dstPath)+string(os.PathSeparator), filepath.Clean(basePath)+string(os.PathSeparator)) {
return "", fmt.Errorf("illegal file path: %s", dstPath)
}
return dstPath, nil
}
// creates a directory with specified permissions
func createDirWithPermissions(ctx context.Context, path string, mode os.FileMode) error {
l := log.FromContext(ctx)
l.Debugf("creating directory [%s]", path)
if err := os.MkdirAll(path, mode); err != nil {
return fmt.Errorf("failed to mkdir: %w", err)
}
return nil
}
// sets permissions to a file or directory
func setPermissions(path string, mode os.FileMode) error {
if err := os.Chmod(path, mode); err != nil {
return fmt.Errorf("failed to chmod: %w", err)
}
return nil
}
// handles the extraction of a file from the archive.
func handleFile(ctx context.Context, f archives.FileInfo, dst string) error {
l := log.FromContext(ctx)
l.Debugf("handling file [%s]", f.NameInArchive)
// validate and construct the destination path
dstPath, pathErr := securePath(dst, f.NameInArchive)
if pathErr != nil {
return pathErr
}
// ensure the parent directory exists
parentDir := filepath.Dir(dstPath)
if dirErr := createDirWithPermissions(ctx, parentDir, dirPermissions); dirErr != nil {
return dirErr
}
// handle directories
if f.IsDir() {
// create the directory with permissions from the archive
if dirErr := createDirWithPermissions(ctx, dstPath, f.Mode()); dirErr != nil {
return fmt.Errorf("failed to create directory: %w", dirErr)
}
l.Debugf("successfully created directory [%s]", dstPath)
return nil
}
// ignore symlinks (or hardlinks)
if f.LinkTarget != "" {
l.Debugf("skipping symlink [%s] to [%s]", dstPath, f.LinkTarget)
return nil
}
// check and handle parent directory permissions
originalMode, statErr := os.Stat(parentDir)
if statErr != nil {
return fmt.Errorf("failed to stat parent directory: %w", statErr)
}
// if parent directory is read only, temporarily make it writable
if originalMode.Mode().Perm()&0o200 == 0 {
l.Debugf("parent directory is read only... temporarily making it writable [%s]", parentDir)
if chmodErr := os.Chmod(parentDir, originalMode.Mode()|0o200); chmodErr != nil {
return fmt.Errorf("failed to chmod parent directory: %w", chmodErr)
}
defer func() {
// restore the original permissions after writing
if chmodErr := os.Chmod(parentDir, originalMode.Mode()); chmodErr != nil {
l.Debugf("failed to restore original permissions for [%s]: %v", parentDir, chmodErr)
}
}()
}
// handle regular files
reader, openErr := f.Open()
if openErr != nil {
return fmt.Errorf("failed to open file: %w", openErr)
}
defer reader.Close()
dstFile, createErr := os.OpenFile(dstPath, os.O_CREATE|os.O_WRONLY, f.Mode())
if createErr != nil {
return fmt.Errorf("failed to create file: %w", createErr)
}
defer dstFile.Close()
if _, copyErr := io.Copy(dstFile, reader); copyErr != nil {
return fmt.Errorf("failed to copy: %w", copyErr)
}
l.Debugf("successfully extracted file [%s]", dstPath)
return nil
}
// unarchives a tarball to a directory, symlinks, and hardlinks are ignored
func Unarchive(ctx context.Context, tarball, dst string) error {
l := log.FromContext(ctx)
l.Debugf("unarchiving temporary archive [%s] to temporary store [%s]", tarball, dst)
archiveFile, openErr := os.Open(tarball)
if openErr != nil {
return fmt.Errorf("failed to open tarball %s: %w", tarball, openErr)
}
defer archiveFile.Close()
format, input, identifyErr := archives.Identify(context.Background(), tarball, archiveFile)
if identifyErr != nil {
return fmt.Errorf("failed to identify format: %w", identifyErr)
}
extractor, ok := format.(archives.Extractor)
if !ok {
return fmt.Errorf("unsupported format for extraction")
}
if dirErr := createDirWithPermissions(ctx, dst, dirPermissions); dirErr != nil {
return fmt.Errorf("failed to create destination directory: %w", dirErr)
}
handler := func(ctx context.Context, f archives.FileInfo) error {
return handleFile(ctx, f, dst)
}
if extractErr := extractor.Extract(context.Background(), input, handler); extractErr != nil {
return fmt.Errorf("failed to extract: %w", extractErr)
}
l.Infof("unarchiving completed successfully")
return nil
}

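As a usage illustration (not part of this changeset): a minimal sketch of driving Unarchive from a caller. The import path is assumed from the module path used elsewhere in this diff, and the archive name simply mirrors DefaultHaulerArchiveName from the consts changes further down.

package main

import (
	"context"
	"fmt"
	"os"

	"hauler.dev/go/hauler/pkg/archive" // assumed import path for the Unarchive helper above
)

func main() {
	if err := run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

func run() error {
	ctx := context.Background()

	// scratch directory to extract into (illustrative)
	tmp, err := os.MkdirTemp("", "hauler-unarchive")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tmp)

	// "haul.tar.zst" mirrors DefaultHaulerArchiveName introduced in this changeset
	return archive.Unarchive(ctx, "haul.tar.zst", tmp)
}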
View File

@@ -8,7 +8,7 @@ import (
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/consts"
)
var _ partial.Describable = (*marshallableConfig)(nil)

View File

@@ -7,9 +7,9 @@ import (
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/getter"
)
// interface guard

View File

@@ -13,9 +13,9 @@ import (
"github.com/spf13/afero"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts/file"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/getter"
)
var (

View File

@@ -1,8 +1,8 @@
package file
import (
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/getter"
)
type Option func(*File)

View File

@@ -1,12 +1,13 @@
package image
import (
"fmt"
"github.com/google/go-containerregistry/pkg/authn"
gname "github.com/google/go-containerregistry/pkg/name"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/rancherfederal/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts"
)
var _ artifacts.OCI = (*Image)(nil)
@@ -51,3 +52,29 @@ func NewImage(name string, opts ...remote.Option) (*Image, error) {
Image: img,
}, nil
}
func IsMultiArchImage(name string, opts ...remote.Option) (bool, error) {
ref, err := gname.ParseReference(name)
if err != nil {
return false, fmt.Errorf("parsing reference %q: %v", name, err)
}
defaultOpts := []remote.Option{
remote.WithAuthFromKeychain(authn.DefaultKeychain),
}
opts = append(opts, defaultOpts...)
desc, err := remote.Get(ref, opts...)
if err != nil {
return false, fmt.Errorf("getting image %q: %v", name, err)
}
_, err = desc.ImageIndex()
if err != nil {
// If the descriptor could not be converted to an image index, it's not a multi-arch image
return false, nil
}
// If the descriptor could be converted to an image index, it's a multi-arch image
return true, nil
}

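A usage sketch (not from this changeset) of how a caller might branch on IsMultiArchImage before applying a platform filter; the image reference is a placeholder, and the import path is the one already used elsewhere in this diff.

package main

import (
	"fmt"
	"os"

	"hauler.dev/go/hauler/pkg/artifacts/image"
)

func main() {
	ref := "ghcr.io/example/app:1.0.0" // placeholder reference

	multi, err := image.IsMultiArchImage(ref)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if multi {
		fmt.Println("image index detected; a platform filter could be applied")
	} else {
		fmt.Println("single-arch image; no platform filtering needed")
	}
}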
View File

@@ -6,8 +6,8 @@ import (
"github.com/google/go-containerregistry/pkg/v1/static"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
)
var _ artifacts.OCI = (*Memory)(nil)

View File

@@ -7,7 +7,7 @@ import (
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/opencontainers/go-digest"
"github.com/rancherfederal/hauler/pkg/artifacts/memory"
"hauler.dev/go/hauler/pkg/artifacts/memory"
)
func TestMemory_Layers(t *testing.T) {

View File

@@ -1,6 +1,6 @@
package memory
import "github.com/rancherfederal/hauler/pkg/artifacts"
import "hauler.dev/go/hauler/pkg/artifacts"
type Option func(*Memory)

View File

@@ -3,8 +3,9 @@ package artifacts
import "github.com/google/go-containerregistry/pkg/v1"
// OCI is the bare minimum we need to represent an artifact in an oci layout
// At a high level, it is not constrained by an Image's config, manifests, and layer ordinality
// This specific implementation fully encapsulates v1.Layer's within a more generic form
//
// At a high level, it is not constrained by an Image's config, manifests, and layer ordinality
// This specific implementation fully encapsulates v1.Layer's within a more generic form
type OCI interface {
MediaType() string

View File

@@ -1,13 +1,13 @@
package chart
import (
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/chart"
"github.com/rancherfederal/hauler/pkg/reference"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/content/chart"
"hauler.dev/go/hauler/pkg/reference"
)
var _ artifacts.OCICollection = (*tchart)(nil)
@@ -15,13 +15,13 @@ var _ artifacts.OCICollection = (*tchart)(nil)
// tchart is a thick chart that includes all the dependent images as well as the chart itself
type tchart struct {
chart *chart.Chart
config v1alpha1.ThickChart
config v1.ThickChart
computed bool
contents map[string]artifacts.OCI
}
func NewThickChart(cfg v1alpha1.ThickChart, opts *action.ChartPathOptions) (artifacts.OCICollection, error) {
func NewThickChart(cfg v1.ThickChart, opts *action.ChartPathOptions) (artifacts.OCICollection, error) {
o, err := chart.NewChart(cfg.Chart.Name, opts)
if err != nil {
return nil, err

View File

@@ -15,7 +15,7 @@ import (
"k8s.io/apimachinery/pkg/util/yaml"
"k8s.io/client-go/util/jsonpath"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
)
var defaultKnownImagePaths = []string{
@@ -29,13 +29,13 @@ var defaultKnownImagePaths = []string{
}
// ImagesInChart will render a chart and identify all dependent images from it
func ImagesInChart(c *helmchart.Chart) (v1alpha1.Images, error) {
func ImagesInChart(c *helmchart.Chart) (v1.Images, error) {
docs, err := template(c)
if err != nil {
return v1alpha1.Images{}, err
return v1.Images{}, err
}
var images []v1alpha1.Image
var images []v1.Image
reader := yaml.NewYAMLReader(bufio.NewReader(strings.NewReader(docs)))
for {
raw, err := reader.Read()
@@ -43,17 +43,17 @@ func ImagesInChart(c *helmchart.Chart) (v1alpha1.Images, error) {
break
}
if err != nil {
return v1alpha1.Images{}, err
return v1.Images{}, err
}
found := find(raw, defaultKnownImagePaths...)
for _, f := range found {
images = append(images, v1alpha1.Image{Name: f})
images = append(images, v1.Image{Name: f})
}
}
ims := v1alpha1.Images{
Spec: v1alpha1.ImageSpec{
ims := v1.Images{
Spec: v1.ImageSpec{
Images: images,
},
}

View File

@@ -9,12 +9,12 @@ import (
"strings"
"sync"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/google/go-containerregistry/pkg/name"
artifact "github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
artifact "hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/getter"
"hauler.dev/go/hauler/pkg/log"
)
type ImageTxt struct {

View File

@@ -8,8 +8,8 @@ import (
"os"
"testing"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts/image"
)
var (

View File

@@ -1,178 +0,0 @@
package k3s
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/reference"
)
var _ artifacts.OCICollection = (*k3s)(nil)
const (
releaseUrl = "https://github.com/k3s-io/k3s/releases/download"
channelUrl = "https://update.k3s.io/v1-release/channels"
bootstrapUrl = "https://get.k3s.io"
)
var (
ErrImagesNotFound = errors.New("k3s dependent images not found")
ErrFetchingImages = errors.New("failed to fetch k3s dependent images")
ErrExecutableNotfound = errors.New("k3s executable not found")
ErrChannelNotFound = errors.New("desired k3s channel not found")
)
type k3s struct {
version string
arch string
computed bool
contents map[string]artifacts.OCI
channels map[string]string
client *getter.Client
}
func NewK3s(version string) (artifacts.OCICollection, error) {
return &k3s{
version: version,
contents: make(map[string]artifacts.OCI),
}, nil
}
func (k *k3s) Contents() (map[string]artifacts.OCI, error) {
if err := k.compute(); err != nil {
return nil, err
}
return k.contents, nil
}
func (k *k3s) compute() error {
if k.computed {
return nil
}
if err := k.fetchChannels(); err == nil {
if version, ok := k.channels[k.version]; ok {
k.version = version
}
}
if err := k.images(); err != nil {
return err
}
if err := k.executable(); err != nil {
return err
}
if err := k.bootstrap(); err != nil {
return err
}
k.computed = true
return nil
}
func (k *k3s) executable() error {
n := "k3s"
if k.arch != "" && k.arch != "amd64" {
n = fmt.Sprintf("name-%s", k.arch)
}
fref := k.releaseUrl(n)
resp, err := http.Head(fref)
if resp.StatusCode != http.StatusOK || err != nil {
return ErrExecutableNotfound
}
f := file.NewFile(fref)
ref := fmt.Sprintf("%s/k3s:%s", reference.DefaultNamespace, k.dnsCompliantVersion())
k.contents[ref] = f
return nil
}
func (k *k3s) bootstrap() error {
c := getter.NewClient(getter.ClientOptions{NameOverride: "k3s-init.sh"})
f := file.NewFile(bootstrapUrl, file.WithClient(c))
ref := fmt.Sprintf("%s/k3s-init.sh:%s", reference.DefaultNamespace, reference.DefaultTag)
k.contents[ref] = f
return nil
}
func (k *k3s) images() error {
resp, err := http.Get(k.releaseUrl("k3s-images.txt"))
if resp.StatusCode != http.StatusOK {
return ErrFetchingImages
} else if err != nil {
return ErrImagesNotFound
}
defer resp.Body.Close()
scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
reference := scanner.Text()
o, err := image.NewImage(reference)
if err != nil {
return err
}
k.contents[reference] = o
}
return nil
}
func (k *k3s) releaseUrl(artifact string) string {
u, _ := url.Parse(releaseUrl)
complete := []string{u.Path}
u.Path = path.Join(append(complete, []string{k.version, artifact}...)...)
return u.String()
}
func (k *k3s) dnsCompliantVersion() string {
return strings.ReplaceAll(k.version, "+", "-")
}
func (k *k3s) fetchChannels() error {
resp, err := http.Get(channelUrl)
if err != nil {
return err
}
var c channel
if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
return err
}
channels := make(map[string]string)
for _, ch := range c.Data {
channels[ch.Name] = ch.Latest
}
k.channels = channels
return nil
}
type channel struct {
Data []channelData `json:"data"`
}
type channelData struct {
ID string `json:"id"`
Name string `json:"name"`
Latest string `json:"latest"`
}

View File

@@ -1,54 +1,99 @@
package consts
const (
// container media types
OCIManifestSchema1 = "application/vnd.oci.image.manifest.v1+json"
DockerManifestSchema2 = "application/vnd.docker.distribution.manifest.v2+json"
DockerManifestListSchema2 = "application/vnd.docker.distribution.manifest.list.v2+json"
OCIImageIndexSchema = "application/vnd.oci.image.index.v1+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
DockerLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
DockerForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
DockerUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
OCILayer = "application/vnd.oci.image.layer.v1.tar+gzip"
OCIArtifact = "application/vnd.oci.empty.v1+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
DockerLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
DockerForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
DockerUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
OCILayer = "application/vnd.oci.image.layer.v1.tar+gzip"
OCIArtifact = "application/vnd.oci.empty.v1+json"
// ChartConfigMediaType is the reserved media type for the Helm chart manifest config
// helm chart media types
ChartConfigMediaType = "application/vnd.cncf.helm.config.v1+json"
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// ChartLayerMediaType is the reserved media type for Helm chart package content
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
// ProvLayerMediaType is the reserved media type for Helm chart provenance files
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// FileLayerMediaType is the reserved media type for File content layers
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
// FileLocalConfigMediaType is the reserved media type for File config
// file media types
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
FileLocalConfigMediaType = "application/vnd.content.hauler.file.local.config.v1+json"
FileDirectoryConfigMediaType = "application/vnd.content.hauler.file.directory.config.v1+json"
FileHttpConfigMediaType = "application/vnd.content.hauler.file.http.config.v1+json"
// MemoryConfigMediaType is the reserved media type for Memory config for a generic set of bytes stored in memory
// memory media types
MemoryConfigMediaType = "application/vnd.content.hauler.memory.config.v1+json"
// WasmArtifactLayerMediaType is the reserved media type for WASM artifact layers
// wasm media types
WasmArtifactLayerMediaType = "application/vnd.wasm.content.layer.v1+wasm"
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
// WasmConfigMediaType is the reserved media type for WASM configs
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
// unknown media types
UnknownManifest = "application/vnd.hauler.cattle.io.unknown.v1+json"
UnknownLayer = "application/vnd.content.hauler.unknown.layer"
Unknown = "unknown"
// vendor prefixes
OCIVendorPrefix = "vnd.oci"
DockerVendorPrefix = "vnd.docker"
HaulerVendorPrefix = "vnd.hauler"
OCIImageIndexFile = "index.json"
KindAnnotationName = "kind"
KindAnnotation = "dev.cosignproject.cosign/image"
// annotation keys
KindAnnotationName = "kind"
KindAnnotationImage = "dev.cosignproject.cosign/image"
KindAnnotationIndex = "dev.cosignproject.cosign/imageIndex"
ImageAnnotationKey = "hauler.dev/key"
ImageAnnotationPlatform = "hauler.dev/platform"
ImageAnnotationRegistry = "hauler.dev/registry"
ImageAnnotationTlog = "hauler.dev/use-tlog-verify"
CarbideRegistry = "rgcrprod.azurecr.us"
// cosign keyless validation options
ImageAnnotationCertIdentity = "hauler.dev/certificate-identity"
ImageAnnotationCertIdentityRegexp = "hauler.dev/certificate-identity-regexp"
ImageAnnotationCertOidcIssuer = "hauler.dev/certificate-oidc-issuer"
ImageAnnotationCertOidcIssuerRegexp = "hauler.dev/certificate-oidc-issuer-regexp"
ImageAnnotationCertGithubWorkflowRepository = "hauler.dev/certificate-github-workflow-repository"
// content kinds
ImagesContentKind = "Images"
ChartsContentKind = "Charts"
FilesContentKind = "Files"
DriverContentKind = "Driver"
ImageTxtsContentKind = "ImageTxts"
ChartsCollectionKind = "ThickCharts"
// content groups
ContentGroup = "content.hauler.cattle.io"
CollectionGroup = "collection.hauler.cattle.io"
// environment variables
HaulerDir = "HAULER_DIR"
HaulerTempDir = "HAULER_TEMP_DIR"
HaulerStoreDir = "HAULER_STORE_DIR"
HaulerIgnoreErrors = "HAULER_IGNORE_ERRORS"
// container files and directories
ImageManifestFile = "manifest.json"
ImageConfigFile = "config.json"
// other constants
CarbideRegistry = "rgcrprod.azurecr.us"
DefaultNamespace = "hauler"
DefaultTag = "latest"
DefaultStoreName = "store"
DefaultHaulerDirName = ".hauler"
DefaultHaulerTempDirName = "hauler"
DefaultRegistryRootDir = "registry"
DefaultRegistryPort = 5000
DefaultFileserverRootDir = "fileserver"
DefaultFileserverPort = 8080
DefaultFileserverTimeout = 60
DefaultHaulerArchiveName = "haul.tar.zst"
DefaultHaulerManifestName = "hauler-manifest.yaml"
DefaultRetries = 3
RetriesInterval = 5
CustomTimeFormat = "2006-01-02 15:04:05"
)

View File

@@ -5,8 +5,10 @@ import (
"bytes"
"compress/gzip"
"encoding/json"
"fmt"
"io"
"io/fs"
"net/url"
"os"
"path/filepath"
@@ -14,18 +16,22 @@ import (
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/log"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chart/loader"
"helm.sh/helm/v3/pkg/cli"
"helm.sh/helm/v3/pkg/registry"
"github.com/rancherfederal/hauler/pkg/layer"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/layer"
)
var _ artifacts.OCI = (*Chart)(nil)
var (
_ artifacts.OCI = (*Chart)(nil)
settings = cli.New()
)
// Chart implements the OCI interface for Chart API objects. API spec values are
// stored into the Repo, Name, and Version fields.
@@ -34,24 +40,33 @@ type Chart struct {
annotations map[string]string
}
// NewChart is a helper method that returns NewLocalChart or NewRemoteChart depending on v1alpha1.Chart contents
// NewChart is a helper method that returns NewLocalChart or NewRemoteChart depending on chart contents
func NewChart(name string, opts *action.ChartPathOptions) (*Chart, error) {
cpo := action.ChartPathOptions{
RepoURL: opts.RepoURL,
Version: opts.Version,
CaFile: opts.CaFile,
CertFile: opts.CertFile,
KeyFile: opts.KeyFile,
InsecureSkipTLSverify: opts.InsecureSkipTLSverify,
Keyring: opts.Keyring,
Password: opts.Password,
PassCredentialsAll: opts.PassCredentialsAll,
Username: opts.Username,
Verify: opts.Verify,
chartRef := name
actionConfig := new(action.Configuration)
if err := actionConfig.Init(settings.RESTClientGetter(), settings.Namespace(), os.Getenv("HELM_DRIVER"), log.NewLogger(os.Stdout).Debugf); err != nil {
return nil, err
}
chartPath, err := cpo.LocateChart(name, cli.New())
client := action.NewInstall(actionConfig)
client.ChartPathOptions.Version = opts.Version
registryClient, err := newRegistryClient(client.CertFile, client.KeyFile, client.CaFile,
client.InsecureSkipTLSverify, client.PlainHTTP)
if err != nil {
return nil, fmt.Errorf("missing registry client: %w", err)
}
client.SetRegistryClient(registryClient)
if registry.IsOCI(opts.RepoURL) {
chartRef = opts.RepoURL + "/" + name
} else if isUrl(opts.RepoURL) { // OCI Protocol registers as a valid URL
client.ChartPathOptions.RepoURL = opts.RepoURL
} else { // Handles cases like grafana/loki
chartRef = opts.RepoURL + "/" + name
}
chartPath, err := client.ChartPathOptions.LocateChart(chartRef, settings)
if err != nil {
return nil, err
}
@@ -217,3 +232,52 @@ func (h *Chart) chartData() (gv1.Layer, error) {
return chartDataLayer, err
}
func isUrl(name string) bool {
_, err := url.ParseRequestURI(name)
return err == nil
}
func newRegistryClient(certFile, keyFile, caFile string, insecureSkipTLSverify, plainHTTP bool) (*registry.Client, error) {
if certFile != "" && keyFile != "" || caFile != "" || insecureSkipTLSverify {
registryClient, err := newRegistryClientWithTLS(certFile, keyFile, caFile, insecureSkipTLSverify)
if err != nil {
return nil, err
}
return registryClient, nil
}
registryClient, err := newDefaultRegistryClient(plainHTTP)
if err != nil {
return nil, err
}
return registryClient, nil
}
func newDefaultRegistryClient(plainHTTP bool) (*registry.Client, error) {
opts := []registry.ClientOption{
registry.ClientOptDebug(settings.Debug),
registry.ClientOptEnableCache(true),
registry.ClientOptWriter(os.Stderr),
registry.ClientOptCredentialsFile(settings.RegistryConfig),
}
if plainHTTP {
opts = append(opts, registry.ClientOptPlainHTTP())
}
// Create a new registry client
registryClient, err := registry.NewClient(opts...)
if err != nil {
return nil, err
}
return registryClient, nil
}
func newRegistryClientWithTLS(certFile, keyFile, caFile string, insecureSkipTLSverify bool) (*registry.Client, error) {
// Create a new registry client
registryClient, err := registry.NewRegistryClientWithTLS(os.Stderr, certFile, keyFile, caFile, insecureSkipTLSverify,
settings.RegistryConfig, settings.Debug,
)
if err != nil {
return nil, err
}
return registryClient, nil
}

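A usage sketch (not part of the diff) of resolving a remote chart through the reworked NewChart flow; the repository URL and version reuse the values from the updated test case below, and the import path matches the ones shown in this diff.

package main

import (
	"fmt"
	"os"

	"helm.sh/helm/v3/pkg/action"

	"hauler.dev/go/hauler/pkg/content/chart"
)

func main() {
	// Mirrors the remote-chart test case in this changeset (cert-manager 1.15.3 from jetstack).
	opts := &action.ChartPathOptions{
		RepoURL: "https://charts.jetstack.io",
		Version: "1.15.3",
	}

	c, err := chart.NewChart("cert-manager", opts)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// MediaType comes from the artifacts.OCI interface implemented by Chart.
	fmt.Println("chart media type:", c.MediaType())
}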
View File

@@ -6,29 +6,19 @@ import (
"testing"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/mholt/archiver/v3"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/content/chart"
)
var (
chartpath = "../../../testdata/podinfo-6.0.3.tgz"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/content/chart"
)
func TestNewChart(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "hauler")
tempDir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpdir)
if err := archiver.Unarchive(chartpath, tmpdir); err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tempDir)
type args struct {
name string
@@ -43,18 +33,18 @@ func TestNewChart(t *testing.T) {
{
name: "should create from a chart archive",
args: args{
name: chartpath,
opts: &action.ChartPathOptions{},
name: "rancher-cluster-templates-0.5.2.tgz",
opts: &action.ChartPathOptions{RepoURL: "../../../testdata"},
},
want: v1.Descriptor{
MediaType: consts.ChartLayerMediaType,
Size: 13524,
Size: 14970,
Digest: v1.Hash{
Algorithm: "sha256",
Hex: "e30b95a08787de69ffdad3c232d65cfb131b5b50c6fd44295f48a078fceaa44e",
Hex: "0905de044a6e57cf3cd27bfc8482753049920050b10347ae2315599bd982a0e3",
},
Annotations: map[string]string{
ocispec.AnnotationTitle: "podinfo-6.0.3.tgz",
ocispec.AnnotationTitle: "rancher-cluster-templates-0.5.2.tgz",
},
},
wantErr: false,
@@ -63,7 +53,7 @@ func TestNewChart(t *testing.T) {
// {
// name: "should create from a chart directory",
// args: args{
// path: filepath.Join(tmpdir, "podinfo"),
// path: filepath.Join(tempDir, "podinfo"),
// },
// want: want,
// wantErr: false,
@@ -72,18 +62,18 @@ func TestNewChart(t *testing.T) {
// TODO: Use a mock helm server
name: "should fetch a remote chart",
args: args{
name: "ingress-nginx",
opts: &action.ChartPathOptions{RepoURL: "https://kubernetes.github.io/ingress-nginx", Version: "4.0.16"},
name: "cert-manager",
opts: &action.ChartPathOptions{RepoURL: "https://charts.jetstack.io", Version: "1.15.3"},
},
want: v1.Descriptor{
MediaType: consts.ChartLayerMediaType,
Size: 38591,
Size: 94751,
Digest: v1.Hash{
Algorithm: "sha256",
Hex: "b0ea91f7febc6708ad9971871d2de6e8feb2072110c3add6dd7082d90753caa2",
Hex: "016e68d9f7083d2c4fd302f951ee6490dbf4cb1ef44cfc06914c39cbfb01d858",
},
Annotations: map[string]string{
ocispec.AnnotationTitle: "ingress-nginx-4.0.16.tgz",
ocispec.AnnotationTitle: "cert-manager-v1.15.3.tgz",
},
},
wantErr: false,

View File

@@ -7,17 +7,31 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/yaml"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
v1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1"
v1alpha1 "hauler.dev/go/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)
func Load(data []byte) (schema.ObjectKind, error) {
var tm metav1.TypeMeta
if err := yaml.Unmarshal(data, &tm); err != nil {
return nil, err
return nil, fmt.Errorf("failed to parse manifest: %w", err)
}
if tm.GroupVersionKind().GroupVersion() != v1alpha1.ContentGroupVersion && tm.GroupVersionKind().GroupVersion() != v1alpha1.CollectionGroupVersion {
return nil, fmt.Errorf("unrecognized content/collection type: %s", tm.GroupVersionKind().String())
if tm.APIVersion == "" {
return nil, fmt.Errorf("missing required manifest field [apiVersion]")
}
if tm.Kind == "" {
return nil, fmt.Errorf("missing required manifest field [kind]")
}
gv := tm.GroupVersionKind().GroupVersion()
// allow v1 and v1alpha1 content/collection
if gv != v1.ContentGroupVersion &&
gv != v1.CollectionGroupVersion &&
gv != v1alpha1.ContentGroupVersion &&
gv != v1alpha1.CollectionGroupVersion {
return nil, fmt.Errorf("unrecognized content/collection [%s] with [kind=%s]", tm.APIVersion, tm.Kind)
}
return &tm, nil

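For reference, a minimal sketch (test-style, assumed to sit in the same package as Load with fmt imported) of a manifest header that satisfies the validation above; the apiVersion string is an assumption consistent with the v1 ContentGroupVersion referenced here.

// ExampleLoad is an illustrative sketch, not part of this changeset.
func ExampleLoad() {
	data := []byte(`
apiVersion: content.hauler.cattle.io/v1
kind: Images
`)
	tm, err := Load(data)
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println(tm.GroupVersionKind().Kind)
	// Output: Images
}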
View File

@@ -5,13 +5,14 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"sync"
"github.com/google/go-containerregistry/pkg/name"
ccontent "github.com/containerd/containerd/content"
"github.com/containerd/containerd/remotes"
"github.com/opencontainers/image-spec/specs-go"
@@ -19,7 +20,8 @@ import (
"oras.land/oras-go/pkg/content"
"oras.land/oras-go/pkg/target"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/reference"
)
var _ target.Target = (*OCI)(nil)
@@ -45,14 +47,26 @@ func (o *OCI) AddIndex(desc ocispec.Descriptor) error {
if _, ok := desc.Annotations[ocispec.AnnotationRefName]; !ok {
return fmt.Errorf("descriptor must contain a reference from the annotation: %s", ocispec.AnnotationRefName)
}
key := fmt.Sprintf("%s-%s-%s", desc.Digest.String(), desc.Annotations[ocispec.AnnotationRefName], desc.Annotations[consts.KindAnnotationName])
o.nameMap.Store(key, desc)
key, err := reference.Parse(desc.Annotations[ocispec.AnnotationRefName])
if err != nil {
return err
}
if strings.TrimSpace(key.String()) != "--" {
switch key.(type) {
case name.Digest:
o.nameMap.Store(fmt.Sprintf("%s-%s", key.Context().String(), desc.Annotations[consts.KindAnnotationName]), desc)
case name.Tag:
o.nameMap.Store(fmt.Sprintf("%s-%s", key.String(), desc.Annotations[consts.KindAnnotationName]), desc)
}
}
return o.SaveIndex()
}
// LoadIndex will load the index from disk
func (o *OCI) LoadIndex() error {
path := o.path(consts.OCIImageIndexFile)
path := o.path(ocispec.ImageIndexFile)
idx, err := os.Open(path)
if err != nil {
if !os.IsNotExist(err) {
@@ -72,11 +86,21 @@ func (o *OCI) LoadIndex() error {
}
for _, desc := range o.index.Manifests {
key := fmt.Sprintf("%s-%s-%s", desc.Digest.String(), desc.Annotations[ocispec.AnnotationRefName], desc.Annotations[consts.KindAnnotationName])
if strings.TrimSpace(key) != "--" {
o.nameMap.Store(key, desc)
key, err := reference.Parse(desc.Annotations[ocispec.AnnotationRefName])
if err != nil {
return err
}
if strings.TrimSpace(key.String()) != "--" {
switch key.(type) {
case name.Digest:
o.nameMap.Store(fmt.Sprintf("%s-%s", key.Context().String(), desc.Annotations[consts.KindAnnotationName]), desc)
case name.Tag:
o.nameMap.Store(fmt.Sprintf("%s-%s", key.String(), desc.Annotations[consts.KindAnnotationName]), desc)
}
}
}
return nil
}
@@ -101,9 +125,9 @@ func (o *OCI) SaveIndex() error {
kindJ := descs[j].Annotations["kind"]
// Objects with the prefix of "dev.cosignproject.cosign/image" should be at the top.
if strings.HasPrefix(kindI, consts.KindAnnotation) && !strings.HasPrefix(kindJ, consts.KindAnnotation) {
if strings.HasPrefix(kindI, consts.KindAnnotationImage) && !strings.HasPrefix(kindJ, consts.KindAnnotationImage) {
return true
} else if !strings.HasPrefix(kindI, consts.KindAnnotation) && strings.HasPrefix(kindJ, consts.KindAnnotation) {
} else if !strings.HasPrefix(kindI, consts.KindAnnotationImage) && strings.HasPrefix(kindJ, consts.KindAnnotationImage) {
return false
}
return false // Default: maintain the order.
@@ -114,7 +138,7 @@ func (o *OCI) SaveIndex() error {
if err != nil {
return err
}
return os.WriteFile(o.path(consts.OCIImageIndexFile), data, 0644)
return os.WriteFile(o.path(ocispec.ImageIndexFile), data, 0644)
}
// Resolve attempts to resolve the reference into a name and descriptor.
@@ -180,9 +204,11 @@ func (o *OCI) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
var baseRef, hash string
parts := strings.SplitN(ref, "@", 2)
baseRef = parts[0]
if len(parts) > 1 {
hash = parts[1]
}
return &ociPusher{
oci: o,
ref: baseRef,
@@ -233,7 +259,7 @@ func (o *OCI) blobWriterAt(desc ocispec.Descriptor) (*os.File, error) {
}
func (o *OCI) ensureBlob(alg string, hex string) (string, error) {
dir := o.path("blobs", alg)
dir := o.path(ocispec.ImageBlobsDir, alg)
if err := os.MkdirAll(dir, os.ModePerm); err != nil && !os.IsExist(err) {
return "", err
}
@@ -275,7 +301,7 @@ func (p *ociPusher) Push(ctx context.Context, d ocispec.Descriptor) (ccontent.Wr
if _, err := os.Stat(blobPath); err == nil {
// file already exists, discard (but validate digest)
return content.NewIoContentWriter(ioutil.Discard, content.WithOutputHash(d.Digest)), nil
return content.NewIoContentWriter(io.Discard, content.WithOutputHash(d.Digest)), nil
}
f, err := os.Create(blobPath)

View File

@@ -1,134 +1,161 @@
package cosign
import (
"context"
"fmt"
"os"
"os/exec"
"os/user"
"path/filepath"
"runtime"
"context"
"time"
"bufio"
"embed"
"strings"
"time"
"github.com/sigstore/cosign/v2/cmd/cosign/cli"
"github.com/sigstore/cosign/v2/cmd/cosign/cli/options"
"github.com/sigstore/cosign/v2/cmd/cosign/cli/verify"
"hauler.dev/go/hauler/internal/flags"
"hauler.dev/go/hauler/pkg/artifacts/image"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/log"
"hauler.dev/go/hauler/pkg/store"
"oras.land/oras-go/pkg/content"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/log"
)
const maxRetries = 3
const retryDelay = time.Second * 5
// VerifyFileSignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifySignature(ctx context.Context, s *store.Layout, keyPath string, ref string) error {
// VerifySignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifySignature(ctx context.Context, s *store.Layout, keyPath string, useTlog bool, ref string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
operation := func() error {
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
v := &verify.VerifyCommand{
KeyRef: keyPath,
IgnoreTlog: true, // Ignore transparency log by default.
}
cmd := exec.Command(cosignBinaryPath, "verify", "--insecure-ignore-tlog", "--key", keyPath, ref)
output, err := cmd.CombinedOutput()
// if the user wants to use the transparency log, set IgnoreTlog to false
if useTlog {
v.IgnoreTlog = false
}
err := log.CaptureOutput(l, true, func() error {
return v.Exec(ctx, []string{ref})
})
if err != nil {
return fmt.Errorf("error verifying signature: %v, output: %s", err, output)
return err
}
return nil
}
return RetryOperation(ctx, operation)
return RetryOperation(ctx, rso, ro, operation)
}
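An illustrative call-site fragment (not part of the changeset), assuming s is an already-opened *store.Layout, ctx is the command context, and rso/ro are the parsed flag structs; the reference and key path are placeholders.

// Hypothetical call site inside a command handler; s, rso, ro, and ctx are assumed to exist.
ref := "ghcr.io/example/app:1.0.0" // placeholder image reference
keyPath := "/path/to/cosign.pub"   // placeholder public key
useTlog := false                   // airgap-friendly default per this changeset

if err := VerifySignature(ctx, s, keyPath, useTlog, ref, rso, ro); err != nil {
	return fmt.Errorf("signature verification unsuccessful for [%s]: %w", ref, err)
}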
// VerifyKeylessSignature verifies the digital signature of a file using Sigstore/Cosign.
func VerifyKeylessSignature(ctx context.Context, s *store.Layout, identity string, identityRegexp string, oidcIssuer string, oidcIssuerRegexp string, ghWorkflowRepository string, useTlog bool, ref string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
operation := func() error {
certVerifyOptions := options.CertVerifyOptions{
CertOidcIssuer: oidcIssuer,
CertOidcIssuerRegexp: oidcIssuerRegexp,
CertIdentity: identity,
CertIdentityRegexp: identityRegexp,
CertGithubWorkflowRepository: ghWorkflowRepository,
}
v := &verify.VerifyCommand{
CertVerifyOptions: certVerifyOptions,
IgnoreTlog: false, // IgnoreTlog defaults to false for keyless signature verification, so the transparency log is checked
CertGithubWorkflowRepository: ghWorkflowRepository,
}
// if the user wants to use the transparency log, keep IgnoreTlog set to false
if useTlog {
v.IgnoreTlog = false
}
err := log.CaptureOutput(l, true, func() error {
return v.Exec(ctx, []string{ref})
})
if err != nil {
return err
}
return nil
}
return RetryOperation(ctx, rso, ro, operation)
}
// SaveImage saves image and any signatures/attestations to the store.
func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string) error {
func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string, rso *flags.StoreRootOpts, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
operation := func() error {
cosignBinaryPath, err := getCosignPath(ctx)
o := &options.SaveOptions{
Directory: s.Root,
}
// check to see if the image is multi-arch
isMultiArch, err := image.IsMultiArchImage(ref)
if err != nil {
return err
}
l.Debugf("multi-arch image [%v]", isMultiArch)
// Conditionally add platform.
if platform != "" && isMultiArch {
l.Debugf("platform for image [%s]", platform)
o.Platform = platform
}
err = cli.SaveCmd(ctx, *o, ref)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "save", ref, "--dir", s.Root)
// Conditionally add platform.
if platform != "" {
cmd.Args = append(cmd.Args, "--platform", platform)
}
output, err := cmd.CombinedOutput()
if err != nil {
if strings.Contains(string(output), "specified reference is not a multiarch image") {
// Rerun the command without the platform flag
cmd = exec.Command(cosignBinaryPath, "save", ref, "--dir", s.Root)
output, err = cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error adding image to store: %v, output: %s", err, output)
}
} else {
return fmt.Errorf("error adding image to store: %v, output: %s", err, output)
}
}
return nil
}
return RetryOperation(ctx, operation)
return RetryOperation(ctx, rso, ro, operation)
}
// LoadImages loads the store contents to a remote registry.
func LoadImages(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
func LoadImages(ctx context.Context, s *store.Layout, registry string, only string, ropts content.RegistryOptions, ro *flags.CliRootOpts) error {
l := log.FromContext(ctx)
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
o := &options.LoadOptions{
Directory: s.Root,
Registry: options.RegistryOptions{
Name: registry,
},
}
cmd := exec.Command(cosignBinaryPath, "load", "--registry", registry, "--dir", s.Root)
// Conditionally add extra flags.
if len(only) > 0 {
o.LoadOnly = only
}
// Conditionally add extra registry flags.
if ropts.Insecure {
cmd.Args = append(cmd.Args, "--allow-insecure-registry=true")
o.Registry.AllowInsecure = true
}
if ropts.PlainHTTP {
cmd.Args = append(cmd.Args, "--allow-http-registry=true")
o.Registry.AllowHTTPRegistry = true
}
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
stderr, err := cmd.StderrPipe()
if err != nil {
return err
}
// start the command after having set up the pipe
if err := cmd.Start(); err != nil {
return err
}
// read command's stdout line by line
output := bufio.NewScanner(stdout)
for output.Scan() {
l.Infof(output.Text()) // write each line to your log, or anything you need
}
if err := output.Err(); err != nil {
cmd.Wait()
return err
if ropts.Username != "" {
o.Registry.AuthConfig.Username = ropts.Username
o.Registry.AuthConfig.Password = ropts.Password
}
// read command's stderr line by line
errors := bufio.NewScanner(stderr)
for errors.Scan() {
l.Errorf(errors.Text()) // write each line to your log, or anything you need
}
if err := errors.Err(); err != nil {
cmd.Wait()
return err
}
// Wait for the command to finish
err = cmd.Wait()
// execute the cosign load and capture the output in our logger
err := log.CaptureOutput(l, false, func() error {
return cli.LoadCmd(ctx, *o, "")
})
if err != nil {
return err
}
@@ -136,110 +163,49 @@ func LoadImages(ctx context.Context, s *store.Layout, registry string, ropts con
return nil
}
// RegistryLogin - performs cosign login
func RegistryLogin(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "login", registry, "-u", ropts.Username, "-p", ropts.Password)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error logging into registry: %v, output: %s", err, output)
}
return nil
}
func RetryOperation(ctx context.Context, operation func() error) error {
func RetryOperation(ctx context.Context, rso *flags.StoreRootOpts, ro *flags.CliRootOpts, operation func() error) error {
l := log.FromContext(ctx)
for attempt := 1; attempt <= maxRetries; attempt++ {
if !ro.IgnoreErrors {
envVar := os.Getenv(consts.HaulerIgnoreErrors)
if envVar == "true" {
ro.IgnoreErrors = true
}
}
// Validate retries and fall back to a default
retries := rso.Retries
if retries <= 0 {
retries = consts.DefaultRetries
}
for attempt := 1; attempt <= rso.Retries; attempt++ {
err := operation()
if err == nil {
// If the operation succeeds, return nil (no error).
// If the operation succeeds, return nil (no error)
return nil
}
// Log the error for the current attempt.
l.Errorf("Error (attempt %d/%d): %v", attempt, maxRetries, err)
if ro.IgnoreErrors {
if strings.HasPrefix(err.Error(), "function execution failed: no matching signatures: rekor client not provided for online verification") {
l.Warnf("warning (attempt %d/%d)... failed tlog verification", attempt, rso.Retries)
} else {
l.Warnf("warning (attempt %d/%d)... %v", attempt, rso.Retries, err)
}
} else {
if strings.HasPrefix(err.Error(), "function execution failed: no matching signatures: rekor client not provided for online verification") {
l.Errorf("error (attempt %d/%d)... failed tlog verification", attempt, rso.Retries)
} else {
l.Errorf("error (attempt %d/%d)... %v", attempt, rso.Retries, err)
}
}
// If this is not the last attempt, wait before retrying.
if attempt < maxRetries {
time.Sleep(retryDelay)
// If this is not the last attempt, wait before retrying
if attempt < rso.Retries {
time.Sleep(time.Second * consts.RetriesInterval)
}
}
// If all attempts fail, return an error.
return fmt.Errorf("operation failed after %d attempts", maxRetries)
// If all attempts fail, return an error
return fmt.Errorf("operation unsuccessful after %d attempts", rso.Retries)
}
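A behavioral sketch (assumed to sit in the same cosign package, not part of the diff): RetryOperation retries the closure up to rso.Retries times with a consts.RetriesInterval pause between attempts; only the fields it reads are populated here.

// retrySketch is an illustrative sketch of the retry behavior above.
func retrySketch(ctx context.Context) error {
	rso := &flags.StoreRootOpts{Retries: 3}       // only the field RetryOperation reads
	ro := &flags.CliRootOpts{IgnoreErrors: false} // errors are logged as errors, not warnings

	attempts := 0
	op := func() error {
		attempts++
		if attempts < 2 {
			return fmt.Errorf("transient failure") // fails once, then succeeds
		}
		return nil
	}
	return RetryOperation(ctx, rso, ro, op)
}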
func EnsureBinaryExists(ctx context.Context, bin embed.FS) (error) {
// Set up a path for the binary to be copied.
binaryPath, err := getCosignPath(ctx)
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
// Determine the architecture so that we pull the correct embedded binary.
arch := runtime.GOARCH
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = fmt.Sprintf("cosign-%s-%s.exe", rOS, arch)
} else {
binaryName = fmt.Sprintf("cosign-%s-%s", rOS, arch)
}
// retrieve the embedded binary
f, err := bin.ReadFile(fmt.Sprintf("binaries/%s", binaryName))
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
// write the binary to the filesystem
err = os.WriteFile(binaryPath, f, 0755)
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
return nil
}
// getCosignPath returns the binary path
func getCosignPath(ctx context.Context) (string, error) {
// Get the current user's information
currentUser, err := user.Current()
if err != nil {
return "", fmt.Errorf("Error: %v\n", err)
}
// Get the user's home directory
homeDir := currentUser.HomeDir
// Construct the path to the .hauler directory
haulerDir := filepath.Join(homeDir, ".hauler")
// Create the .hauler directory if it doesn't exist
if _, err := os.Stat(haulerDir); os.IsNotExist(err) {
// .hauler directory does not exist, create it
if err := os.MkdirAll(haulerDir, 0755); err != nil {
return "", fmt.Errorf("Error creating .hauler directory: %v\n", err)
}
}
// Determine the binary name.
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = "cosign.exe"
}
// construct path to binary
binaryPath := filepath.Join(haulerDir, binaryName)
return binaryPath, nil
}

View File

@@ -13,8 +13,8 @@ import (
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
)
type directory struct {

View File

@@ -7,8 +7,8 @@ import (
"os"
"path/filepath"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
)
type File struct{}

View File

@@ -11,9 +11,9 @@ import (
"github.com/pkg/errors"
"oras.land/oras-go/pkg/content"
content2 "github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/layer"
content2 "hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/layer"
)
type Client struct {

View File

@@ -6,7 +6,7 @@ import (
"path/filepath"
"testing"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"hauler.dev/go/hauler/pkg/getter"
)
func TestClient_Detect(t *testing.T) {

View File

@@ -9,8 +9,8 @@ import (
"path/filepath"
"strings"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/consts"
)
type Http struct{}
@@ -25,6 +25,11 @@ func (h Http) Name(u *url.URL) string {
return ""
}
name, _ := url.PathUnescape(u.String())
if err != nil {
return ""
}
contentType := resp.Header.Get("Content-Type")
for _, v := range strings.Split(contentType, ",") {
t, _, err := mime.ParseMediaType(v)
@@ -36,7 +41,7 @@ func (h Http) Name(u *url.URL) string {
}
// TODO: Not this
return filepath.Base(u.String())
return filepath.Base(name)
}
func (h Http) Open(ctx context.Context, u *url.URL) (io.ReadCloser, error) {

View File

@@ -7,7 +7,7 @@ import (
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifacts"
"hauler.dev/go/hauler/pkg/artifacts"
)
/*

View File

@@ -6,7 +6,7 @@ import (
v1 "github.com/google/go-containerregistry/pkg/v1"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/consts"
"hauler.dev/go/hauler/pkg/consts"
)
type Opener func() (io.ReadCloser, error)

View File

@@ -7,6 +7,8 @@ import (
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"hauler.dev/go/hauler/pkg/consts"
)
// Logger provides an interface for all used logger features regardless of logging backend
@@ -30,7 +32,8 @@ type Fields map[string]string
// NewLogger returns a new Logger
func NewLogger(out io.Writer) Logger {
output := zerolog.ConsoleWriter{Out: os.Stdout}
zerolog.TimeFieldFormat = consts.CustomTimeFormat
output := zerolog.ConsoleWriter{Out: os.Stdout, TimeFormat: consts.CustomTimeFormat}
l := log.Output(output)
return &logger{
zl: l.With().Timestamp().Logger(),

pkg/log/logcapture.go (new file, 99 lines)
View File

@@ -0,0 +1,99 @@
package log
import (
"bufio"
"fmt"
"io"
"os"
"strings"
"sync"
)
// CustomWriter forwards log messages to the application's logger
type CustomWriter struct {
logger Logger
level string
}
func (cw *CustomWriter) Write(p []byte) (n int, err error) {
message := strings.TrimSpace(string(p))
if message != "" {
if cw.level == "error" {
cw.logger.Errorf("%s", message)
} else if cw.level == "info" {
cw.logger.Infof("%s", message)
} else {
cw.logger.Debugf("%s", message)
}
}
return len(p), nil
}
// logStream reads lines from a reader and writes them to the provided writer
func logStream(reader io.Reader, customWriter *CustomWriter, wg *sync.WaitGroup) {
defer wg.Done()
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
customWriter.Write([]byte(scanner.Text()))
}
if err := scanner.Err(); err != nil && err != io.EOF {
customWriter.logger.Errorf("error reading log stream: %v", err)
}
}
// CaptureOutput redirects stdout and stderr to custom loggers and executes the provided function
func CaptureOutput(logger Logger, debug bool, fn func() error) error {
// Create pipes for capturing stdout and stderr
stdoutReader, stdoutWriter, err := os.Pipe()
if err != nil {
return fmt.Errorf("failed to create stdout pipe: %w", err)
}
stderrReader, stderrWriter, err := os.Pipe()
if err != nil {
return fmt.Errorf("failed to create stderr pipe: %w", err)
}
// Save original stdout and stderr
origStdout := os.Stdout
origStderr := os.Stderr
// Redirect stdout and stderr
os.Stdout = stdoutWriter
os.Stderr = stderrWriter
// Use WaitGroup to wait for logging goroutines to finish
var wg sync.WaitGroup
wg.Add(2)
// Start logging goroutines
if !debug {
go logStream(stdoutReader, &CustomWriter{logger: logger, level: "info"}, &wg)
go logStream(stderrReader, &CustomWriter{logger: logger, level: "error"}, &wg)
} else {
go logStream(stdoutReader, &CustomWriter{logger: logger, level: "debug"}, &wg)
go logStream(stderrReader, &CustomWriter{logger: logger, level: "debug"}, &wg)
}
// Run the provided function in a separate goroutine
fnErr := make(chan error, 1)
go func() {
fnErr <- fn()
stdoutWriter.Close() // Close writers to signal EOF to readers
stderrWriter.Close()
}()
// Wait for logging goroutines to finish
wg.Wait()
// Restore original stdout and stderr
os.Stdout = origStdout
os.Stderr = origStderr
// Check for errors from the function
if err := <-fnErr; err != nil {
return fmt.Errorf("function execution failed: %w", err)
}
return nil
}
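A usage illustration (not part of the new file): wrapping a noisy call so its raw stdout/stderr output is re-emitted through the hauler logger; the import path matches the log package used elsewhere in this diff.

package main

import (
	"fmt"
	"os"

	"hauler.dev/go/hauler/pkg/log"
)

func main() {
	l := log.NewLogger(os.Stdout)

	// Anything the wrapped function prints to stdout/stderr is re-emitted
	// through the logger (info/error levels, or debug when the second argument is true).
	err := log.CaptureOutput(l, false, func() error {
		fmt.Println("raw stdout from a third-party library")
		fmt.Fprintln(os.Stderr, "raw stderr from a third-party library")
		return nil
	})
	if err != nil {
		l.Errorf("wrapped call failed: %v", err)
	}
}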

Some files were not shown because too many files have changed in this diff.