Compare commits

...

167 Commits

Author SHA1 Message Date
Adam Martin
3cf4afe6d1 Merge pull request #173 from amartin120/sync-annotations
image spec manifest annotations - key/platform/registry
2024-02-12 13:10:13 -05:00
Adam Martin
0c55d00d49 apply the registry override first in an image sync
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-11 10:58:31 -05:00
Adam Martin
6c2b97042e switch the 'not a multi-arch image' log message to be debug
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-11 10:37:40 -05:00
Adam Martin
be22e56f27 fix whitespace issue
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 23:32:42 -05:00
Adam Martin
c8ea279c0d add better logging for save
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 23:30:34 -05:00
Adam Martin
59ff02b52b add annotations for registry
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 22:38:11 -05:00
Adam Martin
8b3398018a add annotations for key and platform
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-02-10 21:07:29 -05:00
Adam Martin
ae80b482e4 Merge pull request #168 from amartin120/dep-updates
dependency bumps for security vuln fixes
2024-01-30 16:51:48 -05:00
Adam Martin
1ae496fb8b dep bumps for security vuln fixes
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-30 14:41:33 -05:00
Adam Martin
7919dccffc Merge pull request #167 from amartin120/prerelease-flag
release process checks tag to determine pre-release
2024-01-30 11:11:13 -05:00
Adam Martin
fc7a19c755 check tag to determine pre-release
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-30 10:57:40 -05:00
Adam Martin
ade0feccf0 Merge pull request #166 from clemenko/main
Update install.sh for file cleaning
2024-01-30 09:04:41 -05:00
Andy Clemenko
f78fdf5e3d Update install.sh
adding the old hauler binary to the cleanup
2024-01-30 08:55:57 -05:00
Andy Clemenko
85d6bc0233 Update install.sh for file cleaning
removing LICENSE and README.md files.
2024-01-30 08:41:07 -05:00
Adam Martin
d1499b7738 Merge pull request #164 from amartin120/cosign-updates
Add `--platform` flag to image processes and RGS flavored cosign setup improvement.
2024-01-29 14:46:18 -05:00
Adam Martin
27acb239e4 clean up makefile 2024-01-29 13:41:53 -05:00
Adam Martin
e8d084847d remove extra debug statement
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-28 21:15:27 -05:00
Adam Martin
e70379870f another fix for the unit test gh action
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-28 19:51:22 -05:00
Adam Martin
a05d21c052 add platform flag for image add and sync
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-28 19:48:16 -05:00
Adam Martin
8256aa55ce adjust unit test gh action for latest updates 2024-01-28 19:46:55 -05:00
Adam Martin
0e6c3690b1 bump cosign version to v2.2.2+carbide.2 2024-01-28 19:45:05 -05:00
Adam Martin
a977cec50c improve cosign setup
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2024-01-28 12:08:31 -05:00
Adam Martin
5edc96d152 Merge pull request #162 from zackbradys/main
updated archive default name
2024-01-24 09:19:48 -05:00
Zack Hodgson Brady
fbafa60da5 updated archive default name 2024-01-23 22:49:20 -05:00
Adam Martin
cc917af0f2 Merge pull request #159 from amartin120/store-fileserver
Store fileserver
2024-01-22 15:12:45 -05:00
Adam Martin
f76160d8be Merge pull request #160 from amartin120/add-license
add license file
2024-01-22 15:12:05 -05:00
Adam Martin
b24b25d557 add license file 2024-01-22 15:06:09 -05:00
Adam Martin
d9e298b725 adjust to make registry and fileserver subcommands 2024-01-22 13:40:58 -05:00
Adam Martin
e14453f730 add fileserver option for store serve 2024-01-22 11:31:46 -05:00
Zack Brady
990ade9cd0 merge pull request #152 from zackbradys/main
updated readme and hauler `install.sh`
2023-12-20 19:56:57 -05:00
Zack Hodgson Brady
aecd37d192 added homebrew install instructions 2023-12-20 19:46:55 -05:00
Zack Brady
02f4946ead Merge branch 'rancherfederal:main' into main 2023-12-20 00:31:44 -05:00
Zack Hodgson Brady
978dc659f8 updated hauler version and automated default version 2023-12-19 21:24:04 -05:00
Adam Martin
f982f51d57 Merge pull request #150 from amartin120/info-type-filter
add simple type filter to store info
2023-12-19 13:07:46 -05:00
Adam Martin
2174e96f0e add simple type filter to store info
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-19 09:59:06 -05:00
Adam Martin
8cfe4432fc Merge pull request #149 from amartin120/registry-serve-fix
fix for validating foreign blobs
2023-12-18 15:51:38 -05:00
Adam Martin
f129484224 Merge pull request #148 from amartin120/fix-chart-tags
fix for charts with a + in the version
2023-12-18 15:51:20 -05:00
Adam Martin
4dbff83459 fix for validating foreign blobs
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-18 15:27:32 -05:00
Adam Martin
e229c2a1da fix for chart tags with a +
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-15 16:17:34 -05:00
Zack Brady
2a93e74b62 merge pull request #147 from zackbradys/main
updated/fixed install.sh
2023-12-14 23:36:53 -05:00
Zack Hodgson Brady
4d5d9eda7b updated readme for hauler install 2023-12-14 23:05:01 -05:00
Zack Hodgson Brady
a7cbfcb042 updated/fixed hauler install.sh 2023-12-14 23:04:36 -05:00
Adam Martin
7751b12e5e Merge pull request #146 from amartin120/more-updates-0.4.1
Improved logging for store copy / Updated store info to handle multi-arch images
2023-12-14 15:05:24 -05:00
Adam Martin
6e3d3fc7b8 updated store info to handle multi arch images
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-14 11:15:37 -05:00
Adam Martin
0f7f363d6c improved logging for hauler store copy
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-11 18:15:34 -05:00
Adam Martin
ab975a1dc7 Merge pull request #144 from amartin120/add-autocompletion
add autocompletion
2023-12-05 12:19:01 -05:00
Adam Martin
2d92d41245 Merge pull request #142 from amartin120/performance-fix
performance fix / version display improvement
2023-12-05 12:18:34 -05:00
Adam Martin
e2176d211a keep consistent with other subcommands
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-05 11:29:01 -05:00
Adam Martin
93ae968580 add autocompletion
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-05 10:37:29 -05:00
Adam Martin
b0a37d21af performance fix for images
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-04 11:19:57 -05:00
Adam Martin
aa16575c6f cleaned up version command more
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-12-04 11:19:43 -05:00
Adam Martin
2959cfc346 Merge pull request #141 from amartin120/goreleaser-versioning-fix
fix hauler version display
2023-11-30 14:01:14 -05:00
Adam Martin
c04211a55e Merge pull request #140 from amartin120/retry-logic
Retry logic / Auth Flag Fix / Sync Cleanup
2023-11-30 14:00:31 -05:00
Adam Martin
c497f53972 fix hauler version display
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-30 13:39:23 -05:00
Adam Martin
f1fbd7e9c2 don't flush store on each sync
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-30 10:02:04 -05:00
Adam Martin
f348fb8d4d registry auth fix for copy
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-28 22:29:00 -05:00
Adam Martin
fe60b1fd1a add retry logic
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-28 10:02:21 -05:00
Zack Brady
756c0171c3 merge pull request #139 from zackbradys/main
added new installation method (`install.sh`)
2023-11-16 14:01:06 -05:00
Zack Hodgson Brady
c394965f88 more improvements to script 2023-11-12 17:18:41 -05:00
Zack Hodgson Brady
43e2dc56ec upgraded install script functionality 2023-11-12 03:50:32 -05:00
Zack Hodgson Brady
795a88218f updated readme for new install script 2023-11-12 02:48:28 -05:00
Zack Hodgson Brady
ec2ada9dcb cleaned up install script variables 2023-11-12 00:26:28 -05:00
Zack Hodgson Brady
45cea89752 added initial install script 2023-11-12 00:06:49 -05:00
Adam Martin
6062c20e02 Merge pull request #138 from rancherfederal/fix-github-path
fix carbide cosign repo path and perms
2023-11-06 09:08:41 -05:00
Adam Martin
be486df762 fix carbide cosign repo path and perms
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-06 09:07:13 -05:00
Adam Martin
4d950f7b0a Add OCI hauler manifests. (#136)
* pull carbide flavored hauler manifests from reg
* remove temp constant
* remove temp hardcoding
* add comments for new sync flags
* fixes for version and registry serve
* band-aid for store info... needs love
* add sbom to info logic
* adjust a few text descriptions
* adjust tag names with +
* removed testing file

Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-03 12:44:05 -07:00
Adam Martin
f8c16a1a24 Merge pull request #135 from rancherfederal/cosign-verify
Add cosign verify functionality.
2023-11-03 15:27:48 -04:00
Adam Martin
6e8c7db81f Merge branch 'main' of github.com:rancherfederal/hauler into cosign-verify 2023-11-03 13:56:21 -04:00
Adam Martin
4772657548 Add cosign for handling image functionality. (#134)
* pull back in ocil
* updates to OCIL funcs to handle cosign changes
* add cosign logic
* adjust Makefile to be a little more generic
* cli updates to accommodate the cosign additions
* add cosign drop-in funcs
* impl for cosign functions for images & store copy
* fixes and logging for cosign verify <image>
* fix cosign verify logging
* update go.mod

Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-11-03 10:43:32 -07:00
Zack Brady
337494cefd merge pull request #132 from rancherfederal/zackbradys-readme-updates
readme and docs updates
2023-10-26 00:43:53 -04:00
Zack Brady
865afb4a2d updated readme for extra info 2023-10-26 00:42:58 -04:00
Zack Brady
d8b0193a92 merge pull request #133 from rancherfederal/zackbradys-github-updates
updated github templates
2023-10-25 18:01:34 -04:00
Zack Brady
b616f54085 updated readme for deprecated commands
Co-authored-by: Jacob Blain Christen <dweomer5@gmail.com>
2023-10-25 17:03:35 -04:00
Zack Brady
870f2ebda8 last typo fixes 2023-10-21 02:37:42 -04:00
Zack Brady
b7a8fc0a60 fixed typos 2023-10-20 12:32:31 -04:00
Zack Brady
04c97b8a97 fixed typos 2023-10-20 12:22:10 -04:00
Zack Brady
d46ccd03a5 updated github templates 2023-10-20 04:59:51 -04:00
Zack Brady
99288f9b9d removed old docs 2023-10-20 03:56:01 -04:00
Zack Brady
2cc5e902ad updated readme 2023-10-20 03:49:43 -04:00
Adam Martin
f2b0c44af3 polish up cosign verify for hauler store sync
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-12 12:05:35 -04:00
Adam Martin
356c46fe28 update go.mod
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-12 10:34:40 -04:00
Adam Martin
323b93ae20 fix cosign verify logging
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-11 13:44:21 -04:00
Adam Martin
bb9a088a84 fixes and logging for cosign verify <image>
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-11 13:44:21 -04:00
Adam Martin
96d92e3248 impl for cosign functions for images & store copy
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-11 13:44:21 -04:00
Adam Martin
220eeedb2c add cosign drop-in funcs
Signed-off-by: Adam Martin <adam.martin@rancherfederal.com>
2023-10-11 13:44:21 -04:00
Adam Martin
3049846a46 cli updates to accommodate the cosign additions 2023-10-11 13:44:21 -04:00
Adam Martin
ece463bc1c adjust Makefile to be a little more generic 2023-10-11 13:44:21 -04:00
Adam Martin
58c55d7aeb add cosign logic 2023-10-11 13:44:21 -04:00
Adam Martin
214ed48829 updates to OCIL funcs to handle cosign changes 2023-10-11 13:43:19 -04:00
Adam Martin
7d6bbbc6fc pull back in ocil 2023-10-11 13:40:42 -04:00
Jacob Blain Christen
995477db22 Merge pull request #131 from rancherfederal/dep-updates
dependency updates
2023-10-11 09:36:25 -07:00
Adam Martin
9862e61f23 update github action deps as well 2023-10-06 15:06:27 -04:00
Adam Martin
fe7122da8a update dependencies 2023-10-06 14:53:17 -04:00
Jacob Blain Christen
2999b90e30 Merge pull request #130 from rancherfederal/deprecate-non-store-stuff
deprecation notices for `dl` and the non-store version of `serve`
2023-09-28 11:51:33 -07:00
Adam Martin
4beb4d4200 deprecation notices for dl and non-store serve 2023-09-27 09:07:33 -04:00
Brandon
4ed1b0a1a4 Update walkthrough.md 2022-08-27 10:40:15 -04:00
Brandon
925ce53aeb Merge pull request #127 from neoakris/content_doc_example
Adding example of imperative generation of declarative config file to doc
2022-04-25 15:52:36 -04:00
Chris McGrath
3888e23907 reworded code comment to be more accurate 2022-04-25 15:26:06 -04:00
Chris McGrath
88f482f4af fixed syntax issue 2022-04-25 15:22:27 -04:00
Chris McGrath
425c92e8a6 added missing 'cat contents.yaml' to example 2022-04-25 15:08:08 -04:00
Chris McGrath
011a4d8725 adding imperative generation of declarative config example to doc 2022-04-25 15:03:39 -04:00
Brandon
c60ccc8085 Merge pull request #116 from noslzzp/main
Update README.md
2022-02-03 18:48:54 -05:00
NoSLZZP
6ebcd5088d Update README.md 2022-02-03 17:23:41 -05:00
Josh Wolf
d8bbb16e6e Merge pull request #110 from joshrwolf/override-files
pre 0.3 general bug fixes
2022-01-25 11:05:56 -07:00
Josh Wolf
105fb3a119 ensure thick charts follow proper reference naming convention 2022-01-25 11:00:26 -07:00
Josh Wolf
c341929a57 add optional args to file name generation and discovery 2022-01-25 08:07:43 -07:00
Josh Wolf
dff591d08b ensure k3s collection contents have default repo specified (#109)
ensure k3s collection contents have default repo specified
2022-01-24 17:03:07 -07:00
Josh Wolf
50b5f87c86 Merge pull request #108 from joshrwolf/helm
update helm dependency to 3.8.0, add support for helm authentication when storing charts
2022-01-24 16:40:35 -07:00
Josh Wolf
320a4af36a add support for helm authentication when storing charts 2022-01-24 16:31:03 -07:00
Josh Wolf
a1be863812 update helm dependency to 3.8.0 2022-01-24 16:29:57 -07:00
Josh Wolf
513175399b add basic configuration for fileserver 2022-01-24 08:24:42 -07:00
Matt Nikkel
c3a0a09216 Merge pull request #92 from nikkelma/image-txt-collection
Add ImageTxt collection
2022-01-20 10:31:12 -05:00
Matt Nikkel
94268e38ba Fix panic on empty target sources map 2022-01-13 13:57:28 -05:00
Matt Nikkel
ac52ad8260 Add ImageTxt tests 2022-01-13 13:57:27 -05:00
Matt Nikkel
597a5aa06d Handle ImageTxts objects in sync subcommand 2022-01-13 13:56:20 -05:00
Matt Nikkel
6d9270106b Add ImageTxt collection + storing logic 2022-01-13 13:43:22 -05:00
Matt Nikkel
cee4bddbc0 Add ImageTxts collection API definition 2022-01-13 13:20:24 -05:00
Josh Wolf
917e686da6 Merge pull request #106 from joshrwolf/ocil
factor out core oci logic into independent library (rancherfederal/ocil)
2022-01-12 11:37:30 -07:00
Josh Wolf
39dc1aac23 ensure charts are always given a version tag 2022-01-12 11:32:26 -07:00
Josh Wolf
8edc4927a8 move store/cache flags from global to store scoped 2022-01-12 10:30:05 -07:00
Josh Wolf
8b372d8a20 factor out core oci logic into independent library (rancherfederal/ocil) 2022-01-12 09:47:09 -07:00
Josh Wolf
96d231efdf Merge pull request #102 from joshrwolf/content-location-tagging
standardize content naming for unnamed content
2021-12-13 15:32:40 -07:00
Josh Wolf
1030ed92a8 add some standardization to referencing unreferenced content 2021-12-13 13:23:08 -07:00
Josh Wolf
313c40bba8 standardize content naming for unnamed content 2021-12-13 12:00:41 -07:00
Josh Wolf
e6596549a3 Merge pull request #100 from joshrwolf/charts
add support for local charts from directory or archives
2021-12-13 11:57:53 -07:00
Josh Wolf
d31a17f411 ensure sync doesn't panic when given invalid or empty yaml content 2021-12-10 18:58:51 -07:00
Josh Wolf
d2d3183ef1 add support for local charts from directory or archives 2021-12-10 10:50:04 -07:00
Josh Wolf
e9bd38ca75 Merge pull request #98 from joshrwolf/oci
improve `store` implementation
2021-12-09 11:31:10 -07:00
Josh Wolf
697a9fe034 ensure each copy test is independent 2021-12-09 11:26:48 -07:00
Josh Wolf
98322f7b28 rename redundant Store.Store to Store.Content 2021-12-09 11:12:37 -07:00
Josh Wolf
7eabbdc0aa restructure cli copy messages to print descriptor information 2021-12-09 11:09:50 -07:00
Josh Wolf
cd93d7aaea make our implementation of oci content store public, remove redundant wrapper Store methods in favor of OCI implementation, add tests for store.Copy*() 2021-12-09 11:09:09 -07:00
Matt Nikkel
4d676c632f Add docs for public content fields 2021-12-08 14:52:09 -05:00
Josh Wolf
352c0141a9 Merge pull request #96 from nikkelma/public-content-types
Make content types public, expose configuration fields
2021-12-08 12:46:38 -07:00
Matt Nikkel
40fb078106 Add chart name, repo, version fields 2021-12-08 14:35:30 -05:00
Matt Nikkel
49f9e96576 Add image ref field 2021-12-08 14:35:14 -05:00
Matt Nikkel
fd22f93348 Make file ref field public 2021-12-08 14:34:54 -05:00
Matt Nikkel
822a24d79d Expose image OCI implementor publicly 2021-12-08 14:33:43 -05:00
Matt Nikkel
4e14688a9d Expose file OCI implementor publicly 2021-12-08 14:32:23 -05:00
Josh Wolf
61cbc6f614 Merge pull request #95 from joshrwolf/info
enhance `store info` command to actually show useful information
2021-12-08 11:25:13 -07:00
Josh Wolf
6c1640f694 ensure filetests share a setup/teardown 2021-12-08 11:21:36 -07:00
Josh Wolf
8e4d3bee01 refactor cli command to properly output with more informative info 2021-12-08 11:01:43 -07:00
Josh Wolf
1d7ea22bb0 ensure content type for files is properly detected by getter, add test verifying this 2021-12-08 11:01:08 -07:00
Josh Wolf
85ae4205cd remove store.List in favor of store.Walk, restructure store.Walk to walk index descriptors instead of manifests 2021-12-08 11:00:32 -07:00
Josh Wolf
e6e7ff6317 Merge pull request #87 from joshrwolf/oci-layout
refactor store/transport to use oci-layouts
2021-12-08 09:36:44 -07:00
Josh Wolf
395547ff90 better default support for registries requiring auth, and configurable for non-keychain uses 2021-12-08 09:33:21 -07:00
Josh Wolf
bb83d5ce5b allow file content to be passed a custom config 2021-12-08 09:25:45 -07:00
Josh Wolf
49f7b5ea0e add more public methods for building config files from any marshallable source 2021-12-08 09:25:27 -07:00
Josh Wolf
97341fd9b1 change default mappers behavior to failsafe (to filestore or nil) 2021-12-08 09:25:01 -07:00
Josh Wolf
a6831454e5 use internal oci store for store content backing 2021-12-08 09:24:16 -07:00
Josh Wolf
e812c2107c embrace the thick chart 2021-12-03 23:21:20 -07:00
Josh Wolf
a8e9d853db update dependencies to play nicely with controller-manager 2021-12-03 23:10:55 -07:00
Josh Wolf
9d5fae4c1d fix download/extract to use MapperStore 2021-12-03 20:19:55 -07:00
Josh Wolf
bdbac0a460 Merge branch 'main' into oci-layout 2021-12-03 14:20:03 -07:00
Josh Wolf
d55e7572e6 remove custom file store in favor of less hacky IoContentWriter extended on top of existing file store 2021-12-03 14:01:06 -07:00
Josh Wolf
c7ae551e6f move types to constants 2021-12-03 14:00:20 -07:00
Josh Wolf
f324078efc Merge pull request #85 from rancherfederal/fix-list-paging
Fix list request to registry to properly page
2021-12-02 09:48:05 -07:00
Josh Wolf
f0abcf162a move servers to internal, we're not blowing any minds here 2021-12-02 08:12:26 -07:00
Josh Wolf
8e692eecb4 add codecov 2021-12-01 23:01:14 -07:00
Josh Wolf
34836dacb0 add getter, store, and file tests 2021-12-01 22:49:16 -07:00
Josh Wolf
5855f79156 allow reference string to be passed to AddArtifact instead of name.ParseReference for ease of use, move reference validation within AddArtifact 2021-12-01 22:49:15 -07:00
Josh Wolf
d27ad7c7e8 add basic store tests 2021-12-01 22:49:15 -07:00
Josh Wolf
3c6ced89a9 Merge branch 'main' into oci-layout 2021-12-01 14:57:46 -07:00
Josh Wolf
d87d8a2041 primary: refactor store and transport to use oci-layouts and add fileserver feature
minors:
* add optional 'extraImages' to ThickCharts
* refactor File content into generic getter interfaces
* refactor artifact.Config into an actual usable interface (by File content)
* refactor 'copy' cli command to use oras mappers
* refactor 'serve' cli command to server registry and/or fileserver
2021-12-01 14:53:06 -07:00
Matt Nikkel
dc02554118 Fix list request to registry to properly page 2021-11-29 19:04:18 -05:00
Josh Wolf
de366c7b9b Merge pull request #74 from rancherfederal/cache-dir-fix
Update wording to conform to XDG cache dir spec
2021-11-19 12:36:58 -07:00
Matt Nikkel
07213d0da6 Update wording to conform to XDG cache dir spec 2021-11-17 12:31:06 -05:00
101 changed files with 5866 additions and 3787 deletions

View File

@@ -1,31 +1,33 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
name: Bug Report
about: Create a report to help us improve!
title: '[BUG]'
labels: 'kind/bug'
assignees: ''
---
<!-- Thanks for helping us to improve Hauler! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
<!-- Thank you for helping us to improve Hauler! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
**Environmental Info:**
**Environmental Info:**
*
**Hauler Version:**
*
**System CPU architecture, OS, and Version:**
<!-- Provide the output from "uname -a" on the system where Hauler is installed -->
* <!-- Provide the output from "uname -a" on the system where Hauler is installed -->
**Describe the bug:**
<!-- A clear and concise description of what the bug is. -->
* <!-- A clear and concise description of the bug. -->
**Steps To Reproduce:**
* <!-- A clear and concise way to reproduce the bug. -->
**Expected behavior:**
<!-- A clear and concise description of what you expected to happen. -->
* <!-- A clear and concise description of what you expected to happen, without the bug. -->
**Actual behavior:**
<!-- A clear and concise description of what actually happened. -->
* <!-- A clear and concise description of what actually happened. -->
**Additional context / logs:**
<!-- Add any other context and/or logs about the problem here. -->
* <!-- Add any other context and/or logs about the problem here. -->

View File

@@ -0,0 +1,21 @@
---
name: Feature Request
about: Create a report to help us improve!
title: '[RFE]'
labels: 'kind/rfe'
assignees: ''
---
<!-- Thanks for helping us to improve Hauler! We welcome all requests for enhancements (RFEs). Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
**Is your feature request related to a problem? Please describe.**
* <!-- A clear and concise description of the problem. -->
**Describe the solution you'd like**
* <!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
* <!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
* <!-- Add any other context or screenshots about the feature request here. -->

View File

@@ -1,23 +1,20 @@
* **Please check if the PR fulfills these requirements**
- [ ] The commit message follows our guidelines
- [ ] Tests for the changes have been added (for bug fixes / features)
- [ ] Docs have been added / updated (for bug fixes / features)
**Please check below, if the PR fulfills these requirements:**
- [ ] The commit message follows the guidelines.
- [ ] Tests for the changes have been added (for bug fixes / features).
- [ ] Docs have been added / updated (for bug fixes / features).
* **What kind of change does this PR introduce?** (Bug fix, feature, docs update, ...)
**What kind of change does this PR introduce?**
* <!-- Bug fix, feature, docs update, ... -->
**What is the current behavior?**
* <!-- You can also link to an open issue here -->
**What is the new behavior (if this is a feature change)?**
* <!-- What changes did this PR introduce? -->
* **What is the current behavior?** (You can also link to an open issue here)
**Does this PR introduce a breaking change?**
* <!-- What changes might users need to make in their application due to this PR? -->
* **What is the new behavior (if this is a feature change)?**
* **Does this PR introduce a breaking change?** (What changes might users need to make in their application due to this PR?)
* **Other information**:
**Other information**:
* <!-- Any additional information -->

View File

@@ -9,24 +9,22 @@ on:
jobs:
goreleaser:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
-
name: Checkout
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
-
name: Set up Go
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17.x
-
name: Run GoReleaser
go-version: 1.21.x
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v2
with:
distribution: goreleaser
version: latest
args: release --rm-dist
args: release --rm-dist -p 1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
HOMEBREW_TAP_GITHUB_TOKEN: ${{ secrets.HOMEBREW_TAP_GITHUB_TOKEN }}

.github/workflows/unittest.yaml (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
name: Unit Test
on:
push:
paths-ignore:
- "**.md"
- ".github/**"
- "!.github/workflows/unittest.yaml"
pull_request:
paths-ignore:
- "**.md"
- ".github/**"
- "!.github/workflows/unitcoverage.yaml"
workflow_dispatch: {}
jobs:
test:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.21.x
- name: Run Unit Tests
run: |
mkdir -p cmd/hauler/binaries
touch cmd/hauler/binaries/dummy.txt
go test -race -covermode=atomic -coverprofile=coverage.out ./pkg/... ./internal/... ./cmd/...
- name: On Failure, Launch Debug Session
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3
timeout-minutes: 5
- name: Upload Results To Codecov
uses: codecov/codecov-action@v1
with:
files: ./coverage.out
verbose: true # optional (default = false)
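
The `mkdir`/`touch` steps above appear to exist because the cosign binaries normally fetched into `cmd/hauler/binaries` by the Makefile and GoReleaser hooks are embedded into the build, so the directory must exist (and be non-empty) for the packages to compile; the following is a minimal sketch of running the same unit tests locally under that assumption:

```bash
# Stub out the embedded-binaries directory the way CI does, then run the
# unit tests with the race detector and coverage profile enabled.
mkdir -p cmd/hauler/binaries
touch cmd/hauler/binaries/dummy.txt
go test -race -covermode=atomic -coverprofile=coverage.out ./pkg/... ./internal/... ./cmd/...
```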

.gitignore (7 lines changed)
View File

@@ -20,11 +20,12 @@ airgap-scp.sh
# test artifacts
*.tar*
*.out
# generated
dist/
./bundle/
tmp/
bin/
pkg.yaml
haul/
/store/
/registry/
cmd/hauler/binaries

View File

@@ -3,9 +3,14 @@ before:
hooks:
- go mod tidy
- go mod download
- rm -rf cmd/hauler/binaries
release:
prerelease: auto
env:
- vpkg=github.com/rancherfederal/hauler/pkg/version
- vpkg=github.com/rancherfederal/hauler/internal/version
- cosign_version=v2.2.2+carbide.2
builds:
- main: cmd/hauler/main.go
@@ -17,12 +22,18 @@ builds:
- amd64
- arm64
ldflags:
- -s -w -X {{ .Env.vpkg }}.GitVersion={{ .Version }} -X {{ .Env.vpkg }}.commit={{ .ShortCommit }} -X {{ .Env.vpkg }}.buildDate={{ .Date }}
- -s -w -X {{ .Env.vpkg }}.gitVersion={{ .Version }} -X {{ .Env.vpkg }}.gitCommit={{ .ShortCommit }} -X {{ .Env.vpkg }}.gitTreeState={{if .IsGitDirty}}dirty{{else}}clean{{end}} -X {{ .Env.vpkg }}.buildDate={{ .Date }}
hooks:
pre:
- mkdir -p cmd/hauler/binaries
- wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/{{ .Env.cosign_version }}/cosign-{{ .Os }}-{{ .Arch }}{{ if eq .Os "windows" }}.exe{{ end }}
post:
- rm -rf cmd/hauler/binaries
env:
- CGO_ENABLED=0
universal_binaries:
- replace: true
- replace: false
changelog:
skip: false

LICENSE (new file, 177 lines)
View File

@@ -0,0 +1,177 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

View File

@@ -1,23 +1,27 @@
SHELL:=/bin/bash
GO_BUILD_ENV=GOOS=linux GOARCH=amd64
GO_FILES=$(shell go list ./... | grep -v /vendor/)
BUILD_VERSION=$(shell cat VERSION)
BUILD_TAG=$(BUILD_VERSION)
COSIGN_VERSION=v2.2.2+carbide.2
.SILENT:
all: fmt vet install test
build:
rm -rf cmd/hauler/binaries;\
mkdir -p cmd/hauler/binaries;\
wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/$(COSIGN_VERSION)/cosign-$(shell go env GOOS)-$(shell go env GOARCH);\
mkdir bin;\
$(GO_BUILD_ENV) go build -o bin ./cmd/...;\
CGO_ENABLED=0 go build -o bin ./cmd/...;\
build-all: fmt vet
goreleaser build --rm-dist --snapshot
install:
$(GO_BUILD_ENV) go install
rm -rf cmd/hauler/binaries;\
mkdir -p cmd/hauler/binaries;\
wget -P cmd/hauler/binaries/ https://github.com/rancher-government-carbide/cosign/releases/download/$(COSIGN_VERSION)/cosign-$(shell go env GOOS)-$(shell go env GOARCH);\
CGO_ENABLED=0 go install ./cmd/...;\
vet:
go vet $(GO_FILES)

View File

@@ -1,28 +1,43 @@
# Hauler: Airgap Assistant
# Rancher Government Hauler
> ⚠️ This project is still in active development and _not_ GA. While a lot of the core features are ready, we're still adding a _ton_, and we may make breaking api and feature changes version to version.
## Airgap Swiss Army Knife
`hauler` simplifies the airgap experience without forcing you to adopt a specific workflow for your infrastructure or application.
> ⚠️ This project is still in active development and *not* Generally Available (GA). Most of the core functionality and features are ready, but may have breaking changes. Please review the [Release Notes](https://github.com/rancherfederal/hauler/releases) for more information!
To accomplish this, it focuses strictly on two of the biggest airgap pain points:
`Rancher Government Hauler` simplifies the airgap experience without requiring users to adopt a specific workflow. **Hauler** streamlines the airgapping process by representing assets (images, charts, files, etc.) as content and collections, allowing users to easily fetch, store, package, and distribute these assets with declarative manifests or through the command line.
* content collection
* content distribution
`Hauler` does this by storing contents and collections as OCI Artifacts and allows users to serve contents and collections with an embedded registry and fileserver. Additionally, `Hauler` has the ability to store and inspect various non-image OCI Artifacts.
As OCI registries have become ubiquitous nowadays for storing and distributing containers. Their success and widespread adoption has led many projects to expand beyond containers.
For more information, please review the **[Hauler Documentation](https://rancherfederal.github.io/hauler-docs)!**
`hauler` capitalizes on this, and leverages the [`oci`](https://github.com/opencontainers) spec to be a simple, zero dependency tool to collect, transport, and distribute your artifacts.
## Installation
## Getting started
### Linux/Darwin
```bash
# installs latest release
curl -sfL https://get.hauler.dev | bash
```
See the [quickstart](docs/walkthrough.md#Quickstart) for a quick way to get started with some of `haulers` capabilities.
### Homebrew
```bash
# installs latest release
brew tap rancherfederal/homebrew-tap
brew install hauler
```
For a guided example of all of `haulers` capabilities, check out the [guided example](docs/walkthrough.md#guided-examples).
### Windows
```bash
# coming soon
```
## Acknowledgements
`hauler` wouldn't be possible without the open source community, but there are a few dependent projects that stand out:
`Hauler` wouldn't be possible without the open-source community, but there are a few projects that stand out:
* [go-containerregistry](https://github.com/google/go-containerregistry)
* [oras](https://github.com/oras-project/oras)
* [cosign](https://github.com/sigstore/cosign)
* [oras cli](https://github.com/oras-project/oras)
* [cosign](https://github.com/sigstore/cosign)
## Notices
**WARNING - Upcoming Deprecated Command(s):**
`hauler download` (alternatively, `dl`) and `hauler serve` (_not_ `hauler store serve`) commands are deprecated and will be removed in a future release.
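
As a rough usage sketch to complement the notes above (the `store add image` subcommand, the placement of the `--platform` flag, and the serve subcommand names are taken from the commit messages and diffs in this compare and may differ between releases):

```bash
# Add an image to the local store; --platform was introduced in this
# changeset for image add/sync (flag placement is an assumption).
hauler store add image rancher/k3s:v1.22.2-k3s2 --platform linux/amd64

# Inspect the store's contents (store info gained a simple type filter here).
hauler store info

# Serve the store through the embedded OCI registry or fileserver,
# the subcommands split out in this changeset.
hauler store serve registry
hauler store serve fileserver
```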

View File

@@ -1,32 +1,21 @@
package cli
import (
"context"
"errors"
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/cache"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
type rootOpts struct {
logLevel string
cacheDir string
storeDir string
}
const defaultStoreLocation = "haul"
var ro = &rootOpts{}
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "hauler",
Short: "",
Short: "Airgap Swiss Army Knife",
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
l := log.FromContext(cmd.Context())
l.SetLevel(ro.logLevel)
@@ -40,74 +29,13 @@ func New() *cobra.Command {
pf := cmd.PersistentFlags()
pf.StringVarP(&ro.logLevel, "log-level", "l", "info", "")
pf.StringVar(&ro.cacheDir, "cache", "", "Location of where to store cache data (defaults to $XDG_CACHE_DIR/hauler)")
pf.StringVarP(&ro.storeDir, "store", "s", "", "Location to create store at (defaults to $PWD/store)")
// Add subcommands
addDownload(cmd)
addStore(cmd)
addServe(cmd)
addVersion(cmd)
addCompletion(cmd)
return cmd
}
func (o *rootOpts) getStore(ctx context.Context) (*store.Store, error) {
l := log.FromContext(ctx)
dir := o.storeDir
if dir == "" {
l.Debugf("no store path specified, defaulting to $PWD/store")
pwd, err := os.Getwd()
if err != nil {
return nil, err
}
dir = filepath.Join(pwd, defaultStoreLocation)
}
abs, err := filepath.Abs(dir)
if err != nil {
return nil, err
}
l.Debugf("using store at %s", abs)
if _, err := os.Stat(abs); errors.Is(err, os.ErrNotExist) {
err := os.Mkdir(abs, os.ModePerm)
if err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
// TODO: Do we want this to be configurable?
c, err := o.getCache(ctx)
if err != nil {
return nil, err
}
s := store.NewStore(ctx, abs, store.WithCache(c))
return s, nil
}
func (o *rootOpts) getCache(ctx context.Context) (cache.Cache, error) {
dir := o.cacheDir
if dir == "" {
// Default to $XDG_CACHE_DIR
cachedir, err := os.UserCacheDir()
if err != nil {
return nil, err
}
abs, _ := filepath.Abs(filepath.Join(cachedir, "hauler"))
if err := os.MkdirAll(abs, os.ModePerm); err != nil {
return nil, err
}
dir = abs
}
c := cache.NewFilesystem(dir)
return c, nil
}

View File

@@ -0,0 +1,123 @@
package cli
import (
"fmt"
"os"
"github.com/spf13/cobra"
)
func addCompletion(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "completion",
Short: "Generates completion scripts for various shells",
Long: `The completion sub-command generates completion scripts for various shells.`,
}
cmd.AddCommand(
addCompletionZsh(),
addCompletionBash(),
addCompletionFish(),
addCompletionPowershell(),
)
parent.AddCommand(cmd)
}
func completionError(err error) ([]string, cobra.ShellCompDirective) {
cobra.CompError(err.Error())
return nil, cobra.ShellCompDirectiveError
}
func addCompletionZsh() *cobra.Command {
cmd := &cobra.Command{
Use: "zsh",
Short: "Generates zsh completion scripts",
Long: `The completion sub-command generates completion scripts for zsh.`,
Example: `To load completion run
. <(hauler completion zsh)
To configure your zsh shell to load completions for each session add to your zshrc
# ~/.zshrc or ~/.profile
command -v hauler >/dev/null && . <(hauler completion zsh)
or write a cached file in one of the completion directories in your ${fpath}:
echo "${fpath// /\n}" | grep -i completion
hauler completion zsh > _hauler
mv _hauler ~/.oh-my-zsh/completions # oh-my-zsh
mv _hauler ~/.zprezto/modules/completion/external/src/ # zprezto`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenZshCompletion(os.Stdout)
// Cobra doesn't source zsh completion file, explicitly doing it here
fmt.Println("compdef _hauler hauler")
},
}
return cmd
}
func addCompletionBash() *cobra.Command {
cmd := &cobra.Command{
Use: "bash",
Short: "Generates bash completion scripts",
Long: `The completion sub-command generates completion scripts for bash.`,
Example: `To load completion run
. <(hauler completion bash)
To configure your bash shell to load completions for each session add to your bashrc
# ~/.bashrc or ~/.profile
command -v hauler >/dev/null && . <(hauler completion bash)`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenBashCompletion(os.Stdout)
},
}
return cmd
}
func addCompletionFish() *cobra.Command {
cmd := &cobra.Command{
Use: "fish",
Short: "Generates fish completion scripts",
Long: `The completion sub-command generates completion scripts for fish.`,
Example: `To configure your fish shell to load completions for each session write this script to your completions dir:
hauler completion fish > ~/.config/fish/completions/hauler.fish
See http://fishshell.com/docs/current/index.html#completion-own for more details`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenFishCompletion(os.Stdout, true)
},
}
return cmd
}
func addCompletionPowershell() *cobra.Command {
cmd := &cobra.Command{
Use: "powershell",
Short: "Generates powershell completion scripts",
Long: `The completion sub-command generates completion scripts for powershell.`,
Example: `To load completion run
. <(hauler completion powershell)
To configure your powershell shell to load completions for each session add to your powershell profile
Windows:
cd "$env:USERPROFILE\Documents\WindowsPowerShell\Modules"
hauler completion powershell >> hauler-completion.ps1
Linux:
cd "${XDG_CONFIG_HOME:-"$HOME/.config/"}/powershell/modules"
hauler completion powershell >> hauler-completions.ps1`,
Run: func(cmd *cobra.Command, args []string) {
cmd.GenPowerShellCompletion(os.Stdout)
},
}
return cmd
}

View File

@@ -12,7 +12,10 @@ func addDownload(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "download",
Short: "Download OCI content from a registry and populate it on disk",
Long: `Locate OCI content based on it's reference in a compatible registry and download the contents to disk.
Long: `*** WARNING: Deprecated Command ***
The 'download (dl)' command is deprecated and will be removed in a future release of Hauler.
Locate OCI content based on it's reference in a compatible registry and download the contents to disk.
Note that the content type determines it's format on disk. Hauler's built in content types act as follows:
@@ -21,13 +24,13 @@ Note that the content type determines it's format on disk. Hauler's built in co
- Chart: as a .tar.gz named after the chart (ex: loki:2.0.2 --> loki-2.0.2.tar.gz)`,
Example: `
# Download a file
hauler dl my-file.yaml:latest
hauler dl localhost:5000/my-file.yaml:latest
# Download an image
hauler dl rancher/k3s:v1.22.2-k3s2
hauler dl localhost:5000/rancher/k3s:v1.22.2-k3s2
# Download a chart
hauler dl longhorn:1.2.0`,
hauler dl localhost:5000/hauler/longhorn:1.2.0`,
Aliases: []string{"dl"},
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, arg []string) error {

View File

@@ -3,47 +3,60 @@ package download
import (
"context"
"encoding/json"
"fmt"
"path"
"github.com/containerd/containerd/remotes/docker"
"github.com/google/go-containerregistry/pkg/authn"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/google/go-containerregistry/pkg/v1/tarball"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"oras.land/oras-go/pkg/content"
"oras.land/oras-go/pkg/oras"
"github.com/rancherfederal/hauler/pkg/artifact/types"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/internal/mapper"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
)
type Opts struct {
DestinationDir string
OutputFile string
Username string
Password string
Insecure bool
PlainHTTP bool
}
func (o *Opts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVar(&o.DestinationDir, "dir", "", "Directory to save contents to (defaults to current directory)")
f.StringVarP(&o.OutputFile, "output", "o", "", "(Optional) Override name of file to save.")
f.StringVarP(&o.DestinationDir, "output", "o", "", "Directory to save contents to (defaults to current directory)")
f.StringVarP(&o.Username, "username", "u", "", "Username when copying to an authenticated remote registry")
f.StringVarP(&o.Password, "password", "p", "", "Password when copying to an authenticated remote registry")
f.BoolVar(&o.Insecure, "insecure", false, "Toggle allowing insecure connections when copying to a remote registry")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "Toggle allowing plain http connections when copying to a remote registry")
}
func Cmd(ctx context.Context, o *Opts, reference string) error {
func Cmd(ctx context.Context, o *Opts, ref string) error {
l := log.FromContext(ctx)
cs := content.NewFileStore(o.DestinationDir)
defer cs.Close()
ref, err := name.ParseReference(reference)
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
Insecure: o.Insecure,
PlainHTTP: o.PlainHTTP,
}
rs, err := content.NewRegistry(ropts)
if err != nil {
return err
}
desc, err := remote.Get(ref)
r, err := reference.Parse(ref)
if err != nil {
return err
}
desc, err := remote.Get(r, remote.WithAuthFromKeychain(authn.DefaultKeychain), remote.WithContext(ctx))
if err != nil {
return err
}
@@ -58,66 +71,17 @@ func Cmd(ctx context.Context, o *Opts, reference string) error {
return err
}
// TODO: These need to be factored out into each of the contents own logic
switch manifest.Config.MediaType {
case types.DockerConfigJSON, types.OCIManifestSchema1:
l.Debugf("identified [image] (%s) content", manifest.Config.MediaType)
img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
if err != nil {
return err
}
outputFile := o.OutputFile
if outputFile == "" {
outputFile = fmt.Sprintf("%s:%s.tar", path.Base(ref.Context().RepositoryStr()), ref.Identifier())
}
if err := tarball.WriteToFile(outputFile, ref, img); err != nil {
return err
}
l.Infof("downloaded image [%s] to [%s]", ref.Name(), outputFile)
case types.FileConfigMediaType:
l.Debugf("identified [file] (%s) content", manifest.Config.MediaType)
fs := content.NewFileStore(o.DestinationDir)
resolver := docker.NewResolver(docker.ResolverOptions{})
_, descs, err := oras.Pull(ctx, resolver, ref.Name(), fs)
if err != nil {
return err
}
ldescs := len(descs)
for i, desc := range descs {
// NOTE: This is safe without a map key check b/c we're not allowing unnamed content from oras.Pull
l.Infof("downloaded (%d/%d) files to [%s]", i+1, ldescs, desc.Annotations[ocispec.AnnotationTitle])
}
case types.ChartLayerMediaType, types.ChartConfigMediaType:
l.Debugf("identified [chart] (%s) content", manifest.Config.MediaType)
fs := content.NewFileStore(o.DestinationDir)
resolver := docker.NewResolver(docker.ResolverOptions{})
_, descs, err := oras.Pull(ctx, resolver, ref.Name(), fs)
if err != nil {
return err
}
cn := path.Base(ref.Name())
for _, d := range descs {
if n, ok := d.Annotations[ocispec.AnnotationTitle]; ok {
cn = n
}
}
l.Infof("downloaded chart [%s] to [%s]", ref.String(), cn)
default:
return fmt.Errorf("unrecognized content type: %s", manifest.Config.MediaType)
mapperStore, err := mapper.FromManifest(manifest, o.DestinationDir)
if err != nil {
return err
}
pushedDesc, err := oras.Copy(ctx, rs, r.Name(), mapperStore, "",
oras.WithAdditionalCachedMediaTypes(consts.DockerManifestSchema2))
if err != nil {
return err
}
l.Infof("downloaded [%s] with digest [%s]", pushedDesc.MediaType, pushedDesc.Digest.String())
return nil
}

View File

@@ -1,38 +0,0 @@
package download
import (
"context"
"testing"
)
func TestCmd(t *testing.T) {
ctx := context.Background()
type args struct {
ctx context.Context
o *Opts
reference string
}
tests := []struct {
name string
args args
wantErr bool
}{
{
name: "should work",
args: args{
ctx: ctx,
o: &Opts{DestinationDir: ""},
reference: "localhost:3000/hauler/file.txt:latest",
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if err := Cmd(tt.args.ctx, tt.args.o, tt.args.reference); (err != nil) != tt.wantErr {
t.Errorf("Cmd() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}

cmd/hauler/cli/serve.go (new file, 57 lines)
View File

@@ -0,0 +1,57 @@
package cli
import (
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/cmd/hauler/cli/serve"
)
func addServe(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "serve",
Short: "Run one or more of hauler's embedded servers types",
Long: `*** WARNING: Deprecated Command ***
The 'serve' command is deprecated and will be removed in a future release of Hauler.`,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Help()
},
}
cmd.AddCommand(
addServeFiles(),
addServeRegistry(),
)
parent.AddCommand(cmd)
}
func addServeFiles() *cobra.Command {
o := &serve.FilesOpts{}
cmd := &cobra.Command{
Use: "files",
Short: "Start a fileserver",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
return serve.FilesCmd(ctx, o)
},
}
o.AddFlags(cmd)
return cmd
}
func addServeRegistry() *cobra.Command {
o := &serve.RegistryOpts{}
cmd := &cobra.Command{
Use: "registry",
Short: "Start a registry",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
return serve.RegistryCmd(ctx, o)
},
}
o.AddFlags(cmd)
return cmd
}

View File

@@ -0,0 +1,37 @@
package serve
import (
"context"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/internal/server"
)
type FilesOpts struct {
Root string
Port int
}
func (o *FilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Root, "root", "r", ".", "Path to root of the directory to serve")
f.IntVarP(&o.Port, "port", "p", 8080, "Port to listen on")
}
func FilesCmd(ctx context.Context, o *FilesOpts) error {
cfg := server.FileConfig{
Root: o.Root,
Port: o.Port,
}
s, err := server.NewFile(ctx, cfg)
if err != nil {
return err
}
if err := s.ListenAndServe(); err != nil {
return err
}
return nil
}
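
For illustration only (this top-level command is deprecated in favor of `hauler store serve fileserver`), the flags defined above can be used to serve an arbitrary directory; the directory name here is an assumption:

```bash
# Serve ./haul over HTTP on port 8080 with the deprecated standalone fileserver.
hauler serve files --root ./haul --port 8080
```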

View File

@@ -0,0 +1,81 @@
package serve
import (
"context"
"fmt"
"net/http"
"os"
"github.com/distribution/distribution/v3/configuration"
dcontext "github.com/distribution/distribution/v3/context"
"github.com/distribution/distribution/v3/version"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/internal/server"
)
type RegistryOpts struct {
Root string
Port int
ConfigFile string
}
func (o *RegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.Root, "root", "r", ".", "Path to root of the directory to serve")
f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on")
f.StringVarP(&o.ConfigFile, "config", "c", "", "Path to a config file, will override all other configs")
}
func RegistryCmd(ctx context.Context, o *RegistryOpts) error {
ctx = dcontext.WithVersion(ctx, version.Version)
cfg := o.defaultConfig()
if o.ConfigFile != "" {
ucfg, err := loadConfig(o.ConfigFile)
if err != nil {
return err
}
cfg = ucfg
}
s, err := server.NewRegistry(ctx, cfg)
if err != nil {
return err
}
if err := s.ListenAndServe(); err != nil {
return err
}
return nil
}
func loadConfig(filename string) (*configuration.Configuration, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
return configuration.Parse(f)
}
func (o *RegistryOpts) defaultConfig() *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": o.Root},
// TODO: Ensure this is toggleable via cli arg if necessary
// "maintenance": configuration.Parameters{"readonly.enabled": false},
},
}
cfg.Log.Level = "info"
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
"Accept": []string{"application/vnd.dsse.envelope.v1+json, application/json"},
}
return cfg
}
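
Since `loadConfig` simply passes the file to `configuration.Parse` from distribution/distribution, the `--config` flag accepts a standard registry configuration. A minimal sketch that mirrors `defaultConfig()` above (the file name, root directory, and port are assumptions):

```bash
# Hypothetical registry config mirroring defaultConfig(); adjust paths and port.
cat > registry-config.yaml <<'EOF'
version: 0.1
log:
  level: info
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: ./registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
EOF

# Run the (deprecated) standalone registry with the custom configuration.
hauler serve registry --config registry-config.yaml
```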

View File

@@ -2,10 +2,14 @@ package cli
import (
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"fmt"
"github.com/rancherfederal/hauler/cmd/hauler/cli/store"
)
var rootStoreOpts = &store.RootOpts{}
func addStore(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "store",
@@ -15,6 +19,7 @@ func addStore(parent *cobra.Command) {
return cmd.Help()
},
}
rootStoreOpts.AddArgs(cmd)
cmd.AddCommand(
addStoreSync(),
@@ -22,7 +27,7 @@ func addStore(parent *cobra.Command) {
addStoreLoad(),
addStoreSave(),
addStoreServe(),
addStoreList(),
addStoreInfo(),
addStoreCopy(),
// TODO: Remove this in favor of sync?
@@ -33,7 +38,7 @@ func addStore(parent *cobra.Command) {
}
func addStoreExtract() *cobra.Command {
o := &store.ExtractOpts{}
o := &store.ExtractOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "extract",
@@ -43,7 +48,7 @@ func addStoreExtract() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
@@ -57,7 +62,7 @@ func addStoreExtract() *cobra.Command {
}
func addStoreSync() *cobra.Command {
o := &store.SyncOpts{}
o := &store.SyncOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "sync",
@@ -65,7 +70,7 @@ func addStoreSync() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
@@ -79,7 +84,7 @@ func addStoreSync() *cobra.Command {
}
func addStoreLoad() *cobra.Command {
o := &store.LoadOpts{}
o := &store.LoadOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "load",
@@ -88,12 +93,13 @@ func addStoreLoad() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
_ = s
return store.LoadCmd(ctx, o, s.DataDir, args...)
return store.LoadCmd(ctx, o, args...)
},
}
o.AddFlags(cmd)
@@ -102,29 +108,69 @@ func addStoreLoad() *cobra.Command {
}
func addStoreServe() *cobra.Command {
o := &store.ServeOpts{}
cmd := &cobra.Command{
Use: "serve",
Short: "Expose the content of a local store through an OCI compliant server",
Short: "Expose the content of a local store through an OCI compliant registry or file server",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
if err != nil {
return err
}
return store.ServeCmd(ctx, o, s)
return cmd.Help()
},
}
o.AddFlags(cmd)
cmd.AddCommand(
addStoreServeRegistry(),
addStoreServeFiles(),
)
return cmd
}
// RegistryCmd serves the embedded registry
func addStoreServeRegistry() *cobra.Command {
o := &store.ServeRegistryOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "registry",
Short: "Serve the embedded registry",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := o.Store(ctx)
if err != nil {
return err
}
return store.ServeRegistryCmd(ctx, o, s)
},
}
o.AddFlags(cmd)
return cmd
}
// FileServerCmd serves the file server
func addStoreServeFiles() *cobra.Command {
o := &store.ServeFilesOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "fileserver",
Short: "Serve the file server",
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := o.Store(ctx)
if err != nil {
return err
}
return store.ServeFilesCmd(ctx, o, s)
},
}
o.AddFlags(cmd)
return cmd
}
func addStoreSave() *cobra.Command {
o := &store.SaveOpts{}
o := &store.SaveOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "save",
@@ -133,12 +179,13 @@ func addStoreSave() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
_ = s
return store.SaveCmd(ctx, o, o.FileName, s.DataDir)
return store.SaveCmd(ctx, o, o.FileName)
},
}
o.AddArgs(cmd)
@@ -146,23 +193,30 @@ func addStoreSave() *cobra.Command {
return cmd
}
func addStoreList() *cobra.Command {
o := &store.ListOpts{}
func addStoreInfo() *cobra.Command {
o := &store.InfoOpts{RootOpts: rootStoreOpts}
var allowedValues = []string{"image", "chart", "file", "all"}
cmd := &cobra.Command{
Use: "list",
Short: "List all content references in a store",
Use: "info",
Short: "Print out information about the store",
Args: cobra.ExactArgs(0),
Aliases: []string{"ls"},
Aliases: []string{"i", "list", "ls"},
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
return store.ListCmd(ctx, o, s)
for _, allowed := range allowedValues {
if o.TypeFilter == allowed {
return store.InfoCmd(ctx, o, s)
}
}
return fmt.Errorf("type must be one of %v", allowedValues)
},
}
o.AddFlags(cmd)
@@ -171,7 +225,7 @@ func addStoreList() *cobra.Command {
}
func addStoreCopy() *cobra.Command {
o := &store.CopyOpts{}
o := &store.CopyOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "copy",
@@ -180,7 +234,7 @@ func addStoreCopy() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
@@ -212,7 +266,7 @@ func addStoreAdd() *cobra.Command {
}
func addStoreAddFile() *cobra.Command {
o := &store.AddFileOpts{}
o := &store.AddFileOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "file",
@@ -221,7 +275,7 @@ func addStoreAddFile() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
@@ -235,7 +289,7 @@ func addStoreAddFile() *cobra.Command {
}
func addStoreAddImage() *cobra.Command {
o := &store.AddImageOpts{}
o := &store.AddImageOpts{RootOpts: rootStoreOpts}
cmd := &cobra.Command{
Use: "image",
@@ -244,7 +298,7 @@ func addStoreAddImage() *cobra.Command {
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}
@@ -258,13 +312,22 @@ func addStoreAddImage() *cobra.Command {
}
func addStoreAddChart() *cobra.Command {
o := &store.AddChartOpts{}
o := &store.AddChartOpts{
RootOpts: rootStoreOpts,
ChartOpts: &action.ChartPathOptions{},
}
cmd := &cobra.Command{
Use: "chart",
Short: "Add a chart to the content store",
Short: "Add a local or remote chart to the content store",
Example: `
# add a chart
# add a local chart
hauler store add chart path/to/chart/directory
# add a local compressed chart
hauler store add chart path/to/chart.tar.gz
# add a remote chart
hauler store add chart longhorn --repo "https://charts.longhorn.io"
# add a specific version of a chart
@@ -274,7 +337,7 @@ hauler store add chart rancher --repo "https://releases.rancher.com/server-chart
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
s, err := ro.getStore(ctx)
s, err := o.Store(ctx)
if err != nil {
return err
}

View File

@@ -2,21 +2,25 @@ package store
import (
"context"
"path/filepath"
"github.com/google/go-containerregistry/pkg/name"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/chart"
"github.com/rancherfederal/hauler/pkg/content/file"
"github.com/rancherfederal/hauler/pkg/content/image"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/reference"
)
type AddFileOpts struct {
*RootOpts
Name string
}
@@ -25,147 +29,142 @@ func (o *AddFileOpts) AddFlags(cmd *cobra.Command) {
f.StringVarP(&o.Name, "name", "n", "", "(Optional) Name to assign to file in store")
}
func AddFileCmd(ctx context.Context, o *AddFileOpts, s *store.Store, reference string) error {
s.Open()
defer s.Close()
func AddFileCmd(ctx context.Context, o *AddFileOpts, s *store.Layout, reference string) error {
cfg := v1alpha1.File{
Ref: reference,
Name: o.Name,
Path: reference,
}
return storeFile(ctx, s, cfg)
}
func storeFile(ctx context.Context, s *store.Store, fi v1alpha1.File) error {
func storeFile(ctx context.Context, s *store.Layout, fi v1alpha1.File) error {
l := log.FromContext(ctx)
if fi.Name == "" {
base := filepath.Base(fi.Ref)
fi.Name = filepath.Base(fi.Ref)
l.Warnf("no name specified for file reference [%s], using base filepath: [%s]", fi.Ref, base)
copts := getter.ClientOptions{
NameOverride: fi.Name,
}
oci, err := file.NewFile(fi.Ref, fi.Name)
f := file.NewFile(fi.Path, file.WithClient(getter.NewClient(copts)))
ref, err := reference.NewTagged(f.Name(fi.Path), reference.DefaultTag)
if err != nil {
return err
}
ref, err := name.ParseReference(fi.Name, name.WithDefaultRegistry(""))
desc, err := s.AddOCI(ctx, f, ref.Name())
if err != nil {
return err
}
desc, err := s.AddArtifact(ctx, oci, ref)
if err != nil {
return err
}
l.Infof("file [%s] added at: [%s]", ref.Name(), desc.Annotations[ocispec.AnnotationTitle])
l.Infof("added 'file' to store at [%s], with digest [%s]", ref.Name(), desc.Digest.String())
return nil
}
type AddImageOpts struct {
Name string
*RootOpts
Name string
Key string
Platform string
}
func (o *AddImageOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
_ = f
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Path to the key for digital signature verification")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specific platform to save. e.g. linux/amd64. Defaults to all if flag is omitted.")
}
func AddImageCmd(ctx context.Context, o *AddImageOpts, s *store.Store, reference string) error {
s.Open()
defer s.Close()
func AddImageCmd(ctx context.Context, o *AddImageOpts, s *store.Layout, reference string) error {
l := log.FromContext(ctx)
cfg := v1alpha1.Image{
Ref: reference,
Name: reference,
}
return storeImage(ctx, s, cfg)
// Check if the user provided a key.
if o.Key != "" {
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, o.Key, cfg.Name)
if err != nil {
return err
}
l.Infof("signature verified for image [%s]", cfg.Name)
}
return storeImage(ctx, s, cfg, o.Platform)
}
func storeImage(ctx context.Context, s *store.Store, i v1alpha1.Image) error {
func storeImage(ctx context.Context, s *store.Layout, i v1alpha1.Image, platform string) error {
l := log.FromContext(ctx)
oci, err := image.NewImage(i.Ref)
r, err := name.ParseReference(i.Name)
if err != nil {
return err
}
ref, err := name.ParseReference(i.Ref)
err = cosign.SaveImage(ctx, s, r.Name(), platform)
if err != nil {
return err
}
desc, err := s.AddArtifact(ctx, oci, ref)
if err != nil {
return err
}
l.Infof("image [%s] added at: [%s]", ref.Name(), desc.Annotations[ocispec.AnnotationTitle])
l.Infof("added 'image' to store at [%s]", r.Name())
return nil
}
type AddChartOpts struct {
Version string
RepoURL string
*RootOpts
// TODO: Support helm auth
Username string
Password string
PassCredentialsAll bool
CertFile string
KeyFile string
CaFile string
InsecureSkipTLSverify bool
RepositoryConfig string
RepositoryCache string
ChartOpts *action.ChartPathOptions
}
func (o *AddChartOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.RepoURL, "repo", "r", "", "Chart repository URL")
f.StringVar(&o.Version, "version", "", "(Optional) Version of the chart to download, defaults to latest if not specified")
f.StringVar(&o.ChartOpts.RepoURL, "repo", "", "chart repository url where to locate the requested chart")
f.StringVar(&o.ChartOpts.Version, "version", "", "specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used")
f.BoolVar(&o.ChartOpts.Verify, "verify", false, "verify the package before using it")
f.StringVar(&o.ChartOpts.Username, "username", "", "chart repository username where to locate the requested chart")
f.StringVar(&o.ChartOpts.Password, "password", "", "chart repository password where to locate the requested chart")
f.StringVar(&o.ChartOpts.CertFile, "cert-file", "", "identify HTTPS client using this SSL certificate file")
f.StringVar(&o.ChartOpts.KeyFile, "key-file", "", "identify HTTPS client using this SSL key file")
f.BoolVar(&o.ChartOpts.InsecureSkipTLSverify, "insecure-skip-tls-verify", false, "skip tls certificate checks for the chart download")
f.StringVar(&o.ChartOpts.CaFile, "ca-file", "", "verify certificates of HTTPS-enabled servers using this CA bundle")
}
func AddChartCmd(ctx context.Context, o *AddChartOpts, s *store.Store, chartName string) error {
s.Open()
defer s.Close()
func AddChartCmd(ctx context.Context, o *AddChartOpts, s *store.Layout, chartName string) error {
// TODO: Reduce duplicates between api chart and upstream helm opts
cfg := v1alpha1.Chart{
Name: chartName,
RepoURL: o.RepoURL,
Version: o.Version,
RepoURL: o.ChartOpts.RepoURL,
Version: o.ChartOpts.Version,
}
return storeChart(ctx, s, cfg)
return storeChart(ctx, s, cfg, o.ChartOpts)
}
func storeChart(ctx context.Context, s *store.Store, ch v1alpha1.Chart) error {
func storeChart(ctx context.Context, s *store.Layout, cfg v1alpha1.Chart, opts *action.ChartPathOptions) error {
l := log.FromContext(ctx)
oci, err := chart.NewChart(ch.Name, ch.RepoURL, ch.Version)
// TODO: This shouldn't be necessary
opts.RepoURL = cfg.RepoURL
opts.Version = cfg.Version
chrt, err := chart.NewChart(cfg.Name, opts)
if err != nil {
return err
}
tag := ch.Version
if tag == "" {
tag = name.DefaultTag
}
ref, err := name.ParseReference(ch.Name, name.WithDefaultRegistry(""), name.WithDefaultTag(tag))
c, err := chrt.Load()
if err != nil {
return err
}
desc, err := s.AddArtifact(ctx, oci, ref)
ref, err := reference.NewTagged(c.Name(), c.Metadata.Version)
if err != nil {
return err
}
desc, err := s.AddOCI(ctx, chrt, ref.Name())
if err != nil {
return err
}
l.Infof("chart [%s] added at: [%s]", ref.Name(), desc.Annotations[ocispec.AnnotationTitle])
l.Infof("added 'chart' to store at [%s], with digest [%s]", ref.Name(), desc.Digest.String())
return nil
}

View File

@@ -2,56 +2,76 @@ package store
import (
"context"
"fmt"
"strings"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/spf13/cobra"
"oras.land/oras-go/pkg/content"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
type CopyOpts struct{}
type CopyOpts struct {
*RootOpts
Username string
Password string
Insecure bool
PlainHTTP bool
}
func (o *CopyOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
_ = f
// TODO: Regex matching
f.StringVarP(&o.Username, "username", "u", "", "Username when copying to an authenticated remote registry")
f.StringVarP(&o.Password, "password", "p", "", "Password when copying to an authenticated remote registry")
f.BoolVar(&o.Insecure, "insecure", false, "Toggle allowing insecure connections when copying to a remote registry")
f.BoolVar(&o.PlainHTTP, "plain-http", false, "Toggle allowing plain http connections when copying to a remote registry")
}
func CopyCmd(ctx context.Context, o *CopyOpts, s *store.Store, registry string) error {
func CopyCmd(ctx context.Context, o *CopyOpts, s *store.Layout, targetRef string) error {
l := log.FromContext(ctx)
s.Open()
defer s.Close()
components := strings.SplitN(targetRef, "://", 2)
switch components[0] {
case "dir":
l.Debugf("identified directory target reference")
fs := content.NewFile(components[1])
defer fs.Close()
refs, err := s.List(ctx)
if err != nil {
return err
}
for _, r := range refs {
ref, err := name.ParseReference(r, name.WithDefaultRegistry(s.Registry()))
if err != nil {
return err
}
o, err := remote.Image(ref)
if err != nil {
return err
}
rref, err := name.ParseReference(r, name.WithDefaultRegistry(registry))
if err != nil {
return err
}
l.Infof("copying [%s] -> [%s]", ref.Name(), rref.Name())
if err := remote.Write(rref, o); err != nil {
return err
}
_, err := s.CopyAll(ctx, fs, nil)
if err != nil {
return err
}
case "registry":
l.Debugf("identified registry target reference")
ropts := content.RegistryOptions{
Username: o.Username,
Password: o.Password,
Insecure: o.Insecure,
PlainHTTP: o.PlainHTTP,
}
if ropts.Username != "" {
err := cosign.RegistryLogin(ctx, s, components[1], ropts)
if err != nil {
return err
}
}
err := cosign.LoadImages(ctx, s, components[1], ropts)
if err != nil {
return err
}
default:
return fmt.Errorf("detecting protocol from [%s]", targetRef)
}
l.Infof("copied artifacts to [%s]", components[1])
return nil
}

View File

@@ -2,36 +2,78 @@ package store
import (
"context"
"strings"
"encoding/json"
"fmt"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/cmd/hauler/cli/download"
"github.com/rancherfederal/hauler/pkg/layout"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/internal/mapper"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
)
type ExtractOpts struct {
*RootOpts
DestinationDir string
}
func (o *ExtractOpts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVar(&o.DestinationDir, "dir", "", "Directory to save contents to (defaults to current directory)")
f.StringVarP(&o.DestinationDir, "output", "o", "", "Directory to save contents to (defaults to current directory)")
}
func ExtractCmd(ctx context.Context, o *ExtractOpts, s *store.Store, reference string) error {
s.Open()
defer s.Close()
func ExtractCmd(ctx context.Context, o *ExtractOpts, s *store.Layout, ref string) error {
l := log.FromContext(ctx)
eref, err := layout.RelocateReference(reference, s.Registry())
r, err := reference.Parse(ref)
if err != nil {
return err
}
gopts := &download.Opts{
DestinationDir: o.DestinationDir,
found := false
if err := s.Walk(func(reference string, desc ocispec.Descriptor) error {
if !strings.Contains(reference, r.Name()) {
return nil
}
found = true
rc, err := s.Fetch(ctx, desc)
if err != nil {
return err
}
defer rc.Close()
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return err
}
mapperStore, err := mapper.FromManifest(m, o.DestinationDir)
if err != nil {
return err
}
pushedDesc, err := s.Copy(ctx, reference, mapperStore, "")
if err != nil {
return err
}
l.Infof("extracted [%s] from store with digest [%s]", pushedDesc.MediaType, pushedDesc.Digest.String())
return nil
}); err != nil {
return err
}
return download.Cmd(ctx, gopts, eref.Name())
if !found {
return fmt.Errorf("reference [%s] not found in store (hint: use `hauler store info` to list store contents)", ref)
}
return nil
}

View File

@@ -0,0 +1,84 @@
package store
import (
"context"
"errors"
"os"
"path/filepath"
"github.com/rancherfederal/hauler/pkg/layer"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/log"
)
const (
DefaultStoreName = "store"
DefaultCacheDir = "hauler"
)
type RootOpts struct {
StoreDir string
CacheDir string
}
func (o *RootOpts) AddArgs(cmd *cobra.Command) {
pf := cmd.PersistentFlags()
pf.StringVar(&o.CacheDir, "cache", "", "Location of where to store cache data (defaults to $XDG_CACHE_DIR/hauler)")
pf.StringVarP(&o.StoreDir, "store", "s", DefaultStoreName, "Location to create store at")
}
func (o *RootOpts) Store(ctx context.Context) (*store.Layout, error) {
l := log.FromContext(ctx)
dir := o.StoreDir
abs, err := filepath.Abs(dir)
if err != nil {
return nil, err
}
l.Debugf("using store at %s", abs)
if _, err := os.Stat(abs); errors.Is(err, os.ErrNotExist) {
err := os.Mkdir(abs, os.ModePerm)
if err != nil {
return nil, err
}
} else if err != nil {
return nil, err
}
// TODO: Do we want this to be configurable?
c, err := o.Cache(ctx)
if err != nil {
return nil, err
}
s, err := store.NewLayout(abs, store.WithCache(c))
if err != nil {
return nil, err
}
return s, nil
}
func (o *RootOpts) Cache(ctx context.Context) (layer.Cache, error) {
dir := o.CacheDir
if dir == "" {
// Default to $XDG_CACHE_HOME
cachedir, err := os.UserCacheDir()
if err != nil {
return nil, err
}
abs, _ := filepath.Abs(filepath.Join(cachedir, DefaultCacheDir))
if err := os.MkdirAll(abs, os.ModePerm); err != nil {
return nil, err
}
dir = abs
}
c := layer.NewFilesystemCache(dir)
return c, nil
}

View File

@@ -0,0 +1,246 @@
package store
import (
"context"
"encoding/json"
"fmt"
"github.com/olekukonko/tablewriter"
"os"
"sort"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/reference"
)
type InfoOpts struct {
*RootOpts
OutputFormat string
TypeFilter string
SizeUnit string
}
func (o *InfoOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.OutputFormat, "output", "o", "table", "Output format (table, json)")
f.StringVarP(&o.TypeFilter, "type", "t", "all", "Filter on type (image, chart, file)")
// TODO: Regex/globbing
}
func InfoCmd(ctx context.Context, o *InfoOpts, s *store.Layout) error {
var items []item
if err := s.Walk(func(ref string, desc ocispec.Descriptor) error {
if _, ok := desc.Annotations[ocispec.AnnotationRefName]; !ok {
return nil
}
rc, err := s.Fetch(ctx, desc)
if err != nil {
return err
}
defer rc.Close()
// handle multi-arch images
if desc.MediaType == consts.OCIImageIndexSchema || desc.MediaType == consts.DockerManifestListSchema2 {
var idx ocispec.Index
if err := json.NewDecoder(rc).Decode(&idx); err != nil {
return err
}
for _, internalDesc := range idx.Manifests {
rc, err := s.Fetch(ctx, internalDesc)
if err != nil {
return err
}
defer rc.Close()
var internalManifest ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&internalManifest); err != nil {
return err
}
i := newItem(s, desc, internalManifest, fmt.Sprintf("%s/%s", internalDesc.Platform.OS, internalDesc.Platform.Architecture), o)
var emptyItem item
if i != emptyItem {
items = append(items, i)
}
}
// handle "non" multi-arch images
} else if desc.MediaType == consts.DockerManifestSchema2 || desc.MediaType == consts.OCIManifestSchema1 {
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return err
}
rc, err := s.FetchManifest(ctx, m)
if err != nil {
return err
}
defer rc.Close()
// Unmarshal the OCI image content
var internalManifest ocispec.Image
if err := json.NewDecoder(rc).Decode(&internalManifest); err != nil {
return err
}
if internalManifest.Architecture != "" {
i := newItem(s, desc, m, fmt.Sprintf("%s/%s", internalManifest.OS, internalManifest.Architecture), o)
var emptyItem item
if i != emptyItem {
items = append(items, i)
}
} else {
i := newItem(s, desc, m, "-", o)
var emptyItem item
if i != emptyItem {
items = append(items, i)
}
}
// handle the rest
} else {
var m ocispec.Manifest
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return err
}
i := newItem(s, desc, m, "-", o)
var emptyItem item
if i != emptyItem {
items = append(items, i)
}
}
return nil
}); err != nil {
return err
}
// sort items by ref and arch
sort.Sort(byReferenceAndArch(items))
var msg string
switch o.OutputFormat {
case "json":
msg = buildJson(items...)
fmt.Println(msg)
default:
buildTable(items...)
}
return nil
}
func buildTable(items ...item) {
// Create a table for the results
table := tablewriter.NewWriter(os.Stdout)
table.SetHeader([]string{"Reference", "Type", "Platform", "# Layers", "Size"})
table.SetHeaderAlignment(tablewriter.ALIGN_LEFT)
table.SetRowLine(false)
table.SetAutoMergeCellsByColumnIndex([]int{0})
for _, i := range items {
if i.Type != "" {
row := []string{
i.Reference,
i.Type,
i.Platform,
fmt.Sprintf("%d", i.Layers),
i.Size,
}
table.Append(row)
}
}
table.Render()
}
func buildJson(item ...item) string {
data, err := json.MarshalIndent(item, "", " ")
if err != nil {
return ""
}
return string(data)
}
type item struct {
Reference string
Type string
Platform string
Layers int
Size string
}
type byReferenceAndArch []item
func (a byReferenceAndArch) Len() int { return len(a) }
func (a byReferenceAndArch) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byReferenceAndArch) Less(i, j int) bool {
if a[i].Reference == a[j].Reference {
return a[i].Platform < a[j].Platform
}
return a[i].Reference < a[j].Reference
}
func newItem(s *store.Layout, desc ocispec.Descriptor, m ocispec.Manifest, plat string, o *InfoOpts) item {
// skip listing cosign items
if desc.Annotations["kind"] == "dev.cosignproject.cosign/atts" ||
desc.Annotations["kind"] == "dev.cosignproject.cosign/sigs" ||
desc.Annotations["kind"] == "dev.cosignproject.cosign/sboms" {
return item{}
}
var size int64 = 0
for _, l := range m.Layers {
size += l.Size
}
// Generate a human-readable content type
var ctype string
switch m.Config.MediaType {
case consts.DockerConfigJSON:
ctype = "image"
case consts.ChartConfigMediaType:
ctype = "chart"
case consts.FileLocalConfigMediaType, consts.FileHttpConfigMediaType:
ctype = "file"
default:
ctype = "image"
}
ref, err := reference.Parse(desc.Annotations[ocispec.AnnotationRefName])
if err != nil {
return item{}
}
if o.TypeFilter != "all" && ctype != o.TypeFilter {
return item{}
}
return item{
Reference: ref.Name(),
Type: ctype,
Platform: plat,
Layers: len(m.Layers),
Size: byteCountSI(size),
}
}
func byteCountSI(b int64) string {
const unit = 1000
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %cB",
float64(b)/float64(div), "kMGTPE"[exp])
}

View File

@@ -1,47 +0,0 @@
package store
import (
"context"
"fmt"
"os"
"text/tabwriter"
"github.com/google/go-containerregistry/pkg/name"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/store"
)
type ListOpts struct{}
func (o *ListOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
_ = f
// TODO: Regex matching
}
func ListCmd(ctx context.Context, o *ListOpts, s *store.Store) error {
s.Open()
defer s.Close()
refs, err := s.List(ctx)
if err != nil {
return err
}
tw := tabwriter.NewWriter(os.Stdout, 0, 8, 0, '\t', 0)
defer tw.Flush()
fmt.Fprintf(tw, "Reference\tTag/Digest\n")
for _, r := range refs {
ref, err := name.ParseReference(r, name.WithDefaultRegistry(""))
if err != nil {
return err
}
fmt.Fprintf(tw, "%s\t%s\n", ref.Context().String(), ref.Identifier())
}
return nil
}

View File

@@ -2,40 +2,33 @@ package store
import (
"context"
"os"
"github.com/mholt/archiver/v3"
"github.com/rancherfederal/hauler/pkg/content"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/log"
)
type LoadOpts struct {
OutputDir string
*RootOpts
}
func (o *LoadOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.OutputDir, "output", "o", "", "Directory to unload archived contents to (defaults to $PWD/haul)")
_ = f
}
// LoadCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func LoadCmd(ctx context.Context, o *LoadOpts, dir string, archiveRefs ...string) error {
func LoadCmd(ctx context.Context, o *LoadOpts, archiveRefs ...string) error {
l := log.FromContext(ctx)
// TODO: Support more formats?
a := archiver.NewTarZstd()
a.OverwriteExisting = true
odir := dir
if o.OutputDir != "" {
odir = o.OutputDir
}
for _, archiveRef := range archiveRefs {
l.Infof("loading content from [%s] to [%s]", archiveRef, odir)
err := a.Unarchive(archiveRef, odir)
l.Infof("loading content from [%s] to [%s]", archiveRef, o.StoreDir)
err := unarchiveLayoutTo(ctx, archiveRef, o.StoreDir)
if err != nil {
return err
}
@@ -43,3 +36,29 @@ func LoadCmd(ctx context.Context, o *LoadOpts, dir string, archiveRefs ...string
return nil
}
// unarchiveLayoutTo accepts an archived oci layout and extracts the contents to an existing oci layout, preserving the index
func unarchiveLayoutTo(ctx context.Context, archivePath string, dest string) error {
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
return err
}
defer os.RemoveAll(tmpdir)
if err := archiver.Unarchive(archivePath, tmpdir); err != nil {
return err
}
s, err := store.NewLayout(tmpdir)
if err != nil {
return err
}
ts, err := content.NewOCI(dest)
if err != nil {
return err
}
_, err = s.CopyAll(ctx, ts, nil)
return err
}

View File

@@ -12,18 +12,19 @@ import (
)
type SaveOpts struct {
*RootOpts
FileName string
}
func (o *SaveOpts) AddArgs(cmd *cobra.Command) {
f := cmd.Flags()
f.StringVarP(&o.FileName, "filename", "f", "pkg.tar.zst", "Name of archive")
f.StringVarP(&o.FileName, "filename", "f", "haul.tar.zst", "Name of archive")
}
// SaveCmd
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string, dir string) error {
func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string) error {
l := log.FromContext(ctx)
// TODO: Support more formats?
@@ -40,7 +41,7 @@ func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string, dir string) er
return err
}
defer os.Chdir(cwd)
if err := os.Chdir(dir); err != nil {
if err := os.Chdir(o.StoreDir); err != nil {
return err
}
@@ -49,6 +50,6 @@ func SaveCmd(ctx context.Context, o *SaveOpts, outputFile string, dir string) er
return err
}
l.Infof("saved haul [%s] -> [%s]", dir, absOutputfile)
l.Infof("saved store [%s] -> [%s]", o.StoreDir, absOutputfile)
return nil
}

View File

@@ -7,31 +7,54 @@ import (
"os"
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/registry"
dcontext "github.com/distribution/distribution/v3/context"
_ "github.com/distribution/distribution/v3/registry/storage/driver/base"
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
_ "github.com/distribution/distribution/v3/registry/storage/driver/inmemory"
"github.com/distribution/distribution/v3/version"
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/internal/server"
"github.com/rancherfederal/hauler/pkg/log"
)
type ServeOpts struct {
type ServeRegistryOpts struct {
*RootOpts
Port int
RootDir string
ConfigFile string
Daemon bool
storedir string
}
func (o *ServeOpts) AddFlags(cmd *cobra.Command) {
func (o *ServeRegistryOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on")
f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on.")
f.StringVar(&o.RootDir, "directory", "registry", "Directory to use for backend. Defaults to $PWD/registry")
f.StringVarP(&o.ConfigFile, "config", "c", "", "Path to a config file, will override all other configs")
f.BoolVarP(&o.Daemon, "daemon", "d", false, "Toggle serving as a daemon")
}
// ServeCmd does
func ServeCmd(ctx context.Context, o *ServeOpts, s *store.Store) error {
cfg := o.defaultConfig(s)
func ServeRegistryCmd(ctx context.Context, o *ServeRegistryOpts, s *store.Layout) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
tr := server.NewTempRegistry(ctx, o.RootDir)
if err := tr.Start(); err != nil {
return err
}
opts := &CopyOpts{}
if err := CopyCmd(ctx, opts, s, "registry://"+tr.Registry()); err != nil {
return err
}
tr.Close()
cfg := o.defaultRegistryConfig()
if o.ConfigFile != "" {
ucfg, err := loadConfig(o.ConfigFile)
if err != nil {
@@ -40,11 +63,12 @@ func ServeCmd(ctx context.Context, o *ServeOpts, s *store.Store) error {
cfg = ucfg
}
r, err := registry.NewRegistry(ctx, cfg)
l.Infof("starting registry on port [%d]", o.Port)
r, err := server.NewRegistry(ctx, cfg)
if err != nil {
return err
}
if err = r.ListenAndServe(); err != nil {
return err
}
@@ -52,6 +76,49 @@ func ServeCmd(ctx context.Context, o *ServeOpts, s *store.Store) error {
return nil
}
type ServeFilesOpts struct {
*RootOpts
Port int
RootDir string
storedir string
}
func (o *ServeFilesOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.IntVarP(&o.Port, "port", "p", 8080, "Port to listen on.")
f.StringVar(&o.RootDir, "directory", "store-files", "Directory to use for backend. Defaults to $PWD/store-files")
}
func ServeFilesCmd(ctx context.Context, o *ServeFilesOpts, s *store.Layout) error {
l := log.FromContext(ctx)
ctx = dcontext.WithVersion(ctx, version.Version)
opts := &CopyOpts{}
if err := CopyCmd(ctx, opts, s, "dir://"+o.RootDir); err != nil {
return err
}
cfg := server.FileConfig{
Root: o.RootDir,
Port: o.Port,
}
f, err := server.NewFile(ctx, cfg)
if err != nil {
return err
}
l.Infof("starting file server on port [%d]", o.Port)
if err := f.ListenAndServe(); err != nil {
return err
}
return nil
}
func loadConfig(filename string) (*configuration.Configuration, error) {
f, err := os.Open(filename)
if err != nil {
@@ -61,17 +128,21 @@ func loadConfig(filename string) (*configuration.Configuration, error) {
return configuration.Parse(f)
}
func (o *ServeOpts) defaultConfig(s *store.Store) *configuration.Configuration {
func (o *ServeRegistryOpts) defaultRegistryConfig() *configuration.Configuration {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": s.DataDir},
"filesystem": configuration.Parameters{"rootdirectory": o.RootDir},
// TODO: Ensure this is toggleable via cli arg if necessary
"maintenance": configuration.Parameters{"readonly.enabled": true},
// "maintenance": configuration.Parameters{"readonly.enabled": false},
},
}
// Add validation configuration
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
cfg.Log.Level = "info"
cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
cfg.HTTP.Headers = http.Header{

View File

@@ -6,148 +6,264 @@ import (
"fmt"
"io"
"os"
"strings"
"github.com/mitchellh/go-homedir"
"github.com/spf13/cobra"
"helm.sh/helm/v3/pkg/action"
"k8s.io/apimachinery/pkg/util/yaml"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
tchart "github.com/rancherfederal/hauler/pkg/collection/chart"
"github.com/rancherfederal/hauler/pkg/collection/imagetxt"
"github.com/rancherfederal/hauler/pkg/collection/k3s"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/content"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/reference"
"github.com/rancherfederal/hauler/pkg/store"
)
type SyncOpts struct {
*RootOpts
ContentFiles []string
Key string
Products []string
Platform string
}
func (o *SyncOpts) AddFlags(cmd *cobra.Command) {
f := cmd.Flags()
f.StringSliceVarP(&o.ContentFiles, "files", "f", []string{}, "Path to content files")
f.StringVarP(&o.Key, "key", "k", "", "(Optional) Path to the key for signature verification")
f.StringSliceVar(&o.Products, "products", []string{}, "Used for RGS Carbide customers to supply a product and version; Hauler will retrieve the images. e.g. '--products rancher=v2.7.6'")
f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specific platform to save. e.g. linux/amd64. Defaults to all if flag is omitted.")
}
func SyncCmd(ctx context.Context, o *SyncOpts, s *store.Store) error {
func SyncCmd(ctx context.Context, o *SyncOpts, s *store.Layout) error {
l := log.FromContext(ctx)
// Start from an empty store (contents are cached elsewhere)
l.Debugf("flushing any existing content in store: %s", s.DataDir)
if err := s.Flush(ctx); err != nil {
return err
// if passed products, check for a remote manifest to retrieve and use.
for _, product := range o.Products {
l.Infof("processing content file for product: '%s'", product)
parts := strings.Split(product, "=")
tag := strings.ReplaceAll(parts[1], "+", "-")
manifestLoc := fmt.Sprintf("%s/hauler/%s-manifest.yaml:%s", consts.CarbideRegistry, parts[0], tag)
l.Infof("retrieving product manifest from: '%s'", manifestLoc)
img := v1alpha1.Image{
Name: manifestLoc,
}
err := storeImage(ctx, s, img, o.Platform)
if err != nil {
return err
}
err = ExtractCmd(ctx, &ExtractOpts{RootOpts: o.RootOpts}, s, fmt.Sprintf("hauler/%s-manifest.yaml:%s", parts[0], tag))
if err != nil {
return err
}
filename := fmt.Sprintf("%s-manifest.yaml", parts[0])
fi, err := os.Open(filename)
if err != nil {
return err
}
err = processContent(ctx, fi, o, s)
if err != nil {
return err
}
}
s.Open()
defer s.Close()
// if passed a local manifest, process it
for _, filename := range o.ContentFiles {
l.Debugf("processing content file: '%s'", filename)
fi, err := os.Open(filename)
if err != nil {
return err
}
reader := yaml.NewYAMLReader(bufio.NewReader(fi))
var docs [][]byte
for {
raw, err := reader.Read()
if err == io.EOF {
break
}
if err != nil {
return err
}
docs = append(docs, raw)
}
for _, doc := range docs {
obj, err := content.Load(doc)
if err != nil {
return err
}
l.Infof("syncing [%s] to [%s]", obj.GroupVersionKind().String(), s.DataDir)
// TODO: Should type switch instead...
switch obj.GroupVersionKind().Kind {
case v1alpha1.FilesContentKind:
var cfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, f := range cfg.Spec.Files {
err := storeFile(ctx, s, f)
if err != nil {
return err
}
}
case v1alpha1.ImagesContentKind:
var cfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, i := range cfg.Spec.Images {
err := storeImage(ctx, s, i)
if err != nil {
return err
}
}
case v1alpha1.ChartsContentKind:
var cfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, ch := range cfg.Spec.Charts {
err := storeChart(ctx, s, ch)
if err != nil {
return err
}
}
case v1alpha1.K3sCollectionKind:
var cfg v1alpha1.K3s
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
k, err := k3s.NewK3s(cfg.Spec.Version)
if err != nil {
return err
}
if _, err := s.AddCollection(ctx, k); err != nil {
return err
}
case v1alpha1.ChartsCollectionKind:
var cfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfg := range cfg.Spec.Charts {
tc, err := tchart.NewChart(cfg.Name, cfg.RepoURL, cfg.Version)
if err != nil {
return err
}
if _, err := s.AddCollection(ctx, tc); err != nil {
return err
}
}
default:
return fmt.Errorf("unrecognized content/collection type: %s", obj.GroupVersionKind().String())
}
err = processContent(ctx, fi, o, s)
if err != nil {
return err
}
}
return nil
}
func processContent(ctx context.Context, fi *os.File, o *SyncOpts, s *store.Layout) error {
l := log.FromContext(ctx)
reader := yaml.NewYAMLReader(bufio.NewReader(fi))
var docs [][]byte
for {
raw, err := reader.Read()
if err == io.EOF {
break
}
if err != nil {
return err
}
docs = append(docs, raw)
}
for _, doc := range docs {
obj, err := content.Load(doc)
if err != nil {
l.Debugf("skipping sync of unknown content")
continue
}
l.Infof("syncing [%s] to store", obj.GroupVersionKind().String())
// TODO: Should type switch instead...
switch obj.GroupVersionKind().Kind {
case v1alpha1.FilesContentKind:
var cfg v1alpha1.Files
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, f := range cfg.Spec.Files {
err := storeFile(ctx, s, f)
if err != nil {
return err
}
}
case v1alpha1.ImagesContentKind:
var cfg v1alpha1.Images
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
a := cfg.GetAnnotations()
for _, i := range cfg.Spec.Images {
// Check if the user provided a registry. If a registry is provided in the annotation, use it for the images that don't have a registry in their ref name.
if a[consts.ImageAnnotationRegistry] != "" {
newRef, _ := reference.Parse(i.Name)
if newRef.Context().RegistryStr() == "" {
newRef, _ = reference.Relocate(i.Name, a[consts.ImageAnnotationRegistry])
}
i.Name = newRef.Name()
}
// Check if the user provided a key. The flag from the CLI takes precedence over the annotation. The individual image key takes precedence over both.
if a[consts.ImageAnnotationKey] != "" || o.Key != "" || i.Key != "" {
key := o.Key // cli flag
// if no cli flag but there was an annotation, use the annotation.
if o.Key == "" && a[consts.ImageAnnotationKey] != "" {
key, err = homedir.Expand(a[consts.ImageAnnotationKey])
}
// the individual image key trumps all
if i.Key != "" {
key, err = homedir.Expand(i.Key)
}
l.Debugf("key for image [%s]", key)
// verify signature using the provided key.
err := cosign.VerifySignature(ctx, s, key, i.Name)
if err != nil {
l.Errorf("signature verification failed for image [%s]. ** hauler will skip adding this image to the store **:\n%v", i.Name, err)
continue
}
l.Infof("signature verified for image [%s]", i.Name)
}
// Check if the user provided a platform. The flag from the CLI takes precedence over the annotation. The individual image platform takes precedence over both.
platform := o.Platform // cli flag
// if no cli flag but there was an annotation, use the annotation.
if o.Platform == "" && a[consts.ImageAnnotationPlatform] != "" {
platform = a[consts.ImageAnnotationPlatform]
}
// the individual image platform trumps all
if i.Platform != "" {
platform = i.Platform
}
l.Debugf("platform for image [%s]", platform)
err = storeImage(ctx, s, i, platform)
if err != nil {
return err
}
}
// sync with local index
s.CopyAll(ctx, s.OCI, nil)
case v1alpha1.ChartsContentKind:
var cfg v1alpha1.Charts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, ch := range cfg.Spec.Charts {
// TODO: Provide a way to configure syncs
err := storeChart(ctx, s, ch, &action.ChartPathOptions{})
if err != nil {
return err
}
}
case v1alpha1.K3sCollectionKind:
var cfg v1alpha1.K3s
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
k, err := k3s.NewK3s(cfg.Spec.Version)
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, k); err != nil {
return err
}
case v1alpha1.ChartsCollectionKind:
var cfg v1alpha1.ThickCharts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfg := range cfg.Spec.Charts {
tc, err := tchart.NewThickChart(cfg, &action.ChartPathOptions{
RepoURL: cfg.RepoURL,
Version: cfg.Version,
})
if err != nil {
return err
}
if _, err := s.AddOCICollection(ctx, tc); err != nil {
return err
}
}
case v1alpha1.ImageTxtsContentKind:
var cfg v1alpha1.ImageTxts
if err := yaml.Unmarshal(doc, &cfg); err != nil {
return err
}
for _, cfgIt := range cfg.Spec.ImageTxts {
it, err := imagetxt.New(cfgIt.Ref,
imagetxt.WithIncludeSources(cfgIt.Sources.Include...),
imagetxt.WithExcludeSources(cfgIt.Sources.Exclude...),
)
if err != nil {
return fmt.Errorf("convert ImageTxt %s: %v", cfg.Name, err)
}
if _, err := s.AddOCICollection(ctx, it); err != nil {
return fmt.Errorf("add ImageTxt %s to store: %v", cfg.Name, err)
}
}
default:
return fmt.Errorf("unrecognized content/collection type: %s", obj.GroupVersionKind().String())
}
}
return nil
}

View File

@@ -5,7 +5,7 @@ import (
"github.com/spf13/cobra"
"github.com/rancherfederal/hauler/pkg/version"
"github.com/rancherfederal/hauler/internal/version"
)
func addVersion(parent *cobra.Command) {
@@ -13,24 +13,27 @@ func addVersion(parent *cobra.Command) {
cmd := &cobra.Command{
Use: "version",
Short: "Print current hauler version",
Long: "Print current hauler version",
Short: "Print the current version",
Aliases: []string{"v"},
RunE: func(cmd *cobra.Command, args []string) error {
v := version.GetVersionInfo()
response := v.String()
v.Name = cmd.Root().Name()
v.Description = cmd.Root().Short
v.FontName = "starwars"
cmd.SetOut(cmd.OutOrStdout())
if json {
data, err := v.JSONString()
out, err := v.JSONString()
if err != nil {
return err
return fmt.Errorf("unable to generate JSON from version info: %w", err)
}
response = data
cmd.Println(out)
} else {
cmd.Println(v.String())
}
fmt.Print(response)
return nil
},
}
cmd.Flags().BoolVar(&json, "json", false, "toggle output in JSON")
parent.AddCommand(cmd)

View File

@@ -3,11 +3,16 @@ package main
import (
"context"
"os"
"embed"
"github.com/rancherfederal/hauler/cmd/hauler/cli"
"github.com/rancherfederal/hauler/pkg/cosign"
"github.com/rancherfederal/hauler/pkg/log"
)
//go:embed binaries/*
var binaries embed.FS
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -15,6 +20,11 @@ func main() {
logger := log.NewLogger(os.Stdout)
ctx = logger.WithContext(ctx)
// ensure cosign binary is available
if err := cosign.EnsureBinaryExists(ctx, binaries); err != nil {
logger.Errorf("%v", err)
}
if err := cli.New().ExecuteContext(ctx); err != nil {
logger.Errorf("%v", err)
}

View File

@@ -1,177 +0,0 @@
# Walkthrough
## Installation
The latest version of `hauler` is available as statically compiled binaries for most combinations of operating systems and architectures on the GitHub [releases](https://github.com/rancherfederal/hauler/releases) page.
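A minimal install sketch is shown below. The archive name is an assumption for illustration; check the releases page for the exact asset matching your operating system and architecture.
```bash
# illustrative only: the asset name is assumed, adjust it for your OS/arch and the release you want
curl -sfLO https://github.com/rancherfederal/hauler/releases/latest/download/hauler_linux_amd64.tar.gz
tar -xzf hauler_linux_amd64.tar.gz
# the archive is assumed to contain a single `hauler` binary
sudo install -m 0755 hauler /usr/local/bin/hauler
hauler version
```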
## Quickstart
The tl;dr for how to use `hauler` to fetch, transport, and distribute `content`:
```bash
# fetch some content
hauler store add file "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
hauler store add chart longhorn --repo "https://charts.longhorn.io"
hauler store add image "rancher/cowsay"
# transport the content
hauler store save
# <-airgap the haul.tar.zst file generated->
# load the content
hauler store load
# serve the content
hauler store serve
```
While the example above fits into a quickstart, it falls short of demonstrating all the capabilities `hauler` has to offer, including taking advantage of its fully declarative nature. Keep reading the [Guided Examples](#Guided-Examples) below for a more thorough walkthrough of `hauler`'s full capabilities.
## Guided Examples
Since `hauler`'s primary objective is to simplify the content collection/distribution airgap process, a lot of the design revolves around the typical airgap workflow:
```bash
fetch -> save - | <airgap> | -> validate/load -> distribute
```
This is accomplished as follows:
```bash
# fetch content
hauler store add ...
# compress and archive content
hauler store save
# <airgap>
# validate/load content
hauler store load ...
# distribute content
hauler store serve
```
At this point you're probably wondering: what is `content`? In `hauler` land, a few terms carry specific meanings:
* `artifact`: anything that can be represented as an [`oci artifact`](https://github.com/opencontainers/artifacts)
* `content`: built-in "primitive" types of `artifacts` that `hauler` understands
### Built in content
As of today, `hauler` understands three types of `content`, one with a strong legacy of community support and consensus ([`image-spec`]()), one with a finalized spec and experimental support ([`chart-spec`]()), and one generic type created just for `hauler`. These `content` types are outlined below:
__`files`__:
Generic content that can be represented as a file, either sourced locally or remotely.
```bash
# local file
hauler store add file path/to/local/file.txt
# remote file
hauler store add file https://get.k3s.io
```
__`images`__:
Any OCI compatible image can be fetched remotely.
```bash
# "shorthand" image references
hauler store add image rancher/k3s:v1.22.2-k3s1
# fully qualified image references
hauler store add image ghcr.io/fluxcd/flux-cli@sha256:02aa820c3a9c57d67208afcfc4bce9661658c17d15940aea369da259d2b976dd
```
__`charts`__:
Helm charts represented as OCI content.
```bash
# add a helm chart (defaults to latest version)
hauler store add chart loki --repo "https://grafana.github.io/helm-charts"
# add a specific version of a helm chart
hauler store add chart loki --repo "https://grafana.github.io/helm-charts" --version 2.8.1
# install directly from the oci content
HELM_EXPERIMENTAL_OCI=1 helm install loki oci://localhost:3000/library/loki --version 2.8.1
```
> Note: `hauler` supports the currently experimental format of Helm charts as OCI content, but charts can also be represented as the usual tarball if necessary
### Content API
While imperatively adding `content` to `hauler` is a simple way to get started, the recommended long-term approach is to use the API that each `content` type provides, in conjunction with the `sync` command.
```bash
# create a haul from declaratively defined content
hauler store sync -f testdata/contents.yaml
```
> For a commented view of the `contents` api, take a look at the `testdata` folder in the root of the project.
The API for each type of built-in `content` allows you to easily and declaratively define all the `content` that exists within a `haul`, and ensures a more GitOps-compatible workflow for managing the lifecycle of your `hauls`.
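As a concrete sketch, the image added imperatively in the quickstart could instead be declared in a manifest and synced. The API group and field names below are assumptions inferred from the collection examples later in this walkthrough; the commented manifests in the `testdata` folder are the authoritative reference.
```bash
# write a minimal Images manifest (group/kind/field names are assumed for illustration)
cat > contents.yaml <<'EOF'
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Images
metadata:
  name: example-images
spec:
  images:
    - name: rancher/cowsay
EOF
# sync the declared content into the store
hauler store sync -f contents.yaml
```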
### Collections
Earlier we referred to `content` as "primitives". The quotes are deliberate: these loosely defined building blocks can be combined into groups of `content`, which we call `collections`.
`collections` are groups of one or more `contents` that collectively represent something desirable. Just like `content`, there are a handful built into `hauler`.
Since `collections` usually contain more purposefully crafted `contents`, we restrict their use to the declarative commands (`sync`):
```bash
# sync a collection
hauler store sync -f my-collection.yaml
# sync sets of content/collection
hauler store sync -f collection.yaml -f content.yaml
```
__`thickcharts`__:
Thick Charts represent the combination of `charts` and `images`. When storing a thick chart, the chart _and_ the chart's dependent images will be fetched and stored by `hauler`.
```yaml
# thick-chart.yaml
apiVersion: collection.hauler.cattle.io/v1alpha1
kind: ThickCharts
metadata:
name: loki
spec:
charts:
- name: loki
repoURL: https://grafana.github.io/helm-charts
```
When syncing the collection above, `hauler` will identify the images the chart depends on and store those too.
> The method for identifying images is still evolving. As of today, the chart is rendered and a configurable set of container-defining JSON paths is processed. The most common paths are recognized by `hauler`, but this can be configured for more niche CRDs.
__`k3s`__:
Combining `files` and `images`, full clusters can also be captured by `hauler`, further simplifying the already simple nature of `k3s`.
```yaml
# k3s.yaml
---
apiVersion: collection.hauler.cattle.io/v1alpha1
kind: K3s
metadata:
name: k3s
spec:
version: stable
```
Using the collection above, the dependent files (`k3s` executable and `https://get.k3s.io` script) will be fetched, as well as all the dependent images.
> We know not everyone uses the get.k3s.io script to provision k3s. This may change in the future, but until then you're welcome to mix and match the `collection` with any of your own additional `content`
#### User defined `collections`
Although `content` and `collections` can only be used when they are baked into `hauler`, the goal is to allow these to be securely user-defined, letting you define your own `collection` types and leave the heavy lifting to `hauler`. Check out our [roadmap](../ROADMAP.md) and [milestones]() for more info on that.

go.mod
View File

@@ -1,165 +1,175 @@
module github.com/rancherfederal/hauler
go 1.17
go 1.21
require (
github.com/containerd/containerd v1.5.7
github.com/distribution/distribution/v3 v3.0.0-20210926092439-1563384b69df
github.com/google/go-containerregistry v0.6.0
github.com/mholt/archiver/v3 v3.5.0
github.com/common-nighthawk/go-figure v0.0.0-20210622060536-734e95fb86be
github.com/containerd/containerd v1.7.11
github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2
github.com/docker/go-metrics v0.0.1
github.com/google/go-containerregistry v0.16.1
github.com/gorilla/handlers v1.5.1
github.com/gorilla/mux v1.8.0
github.com/mholt/archiver/v3 v3.5.1
github.com/mitchellh/go-homedir v1.1.0
github.com/olekukonko/tablewriter v0.0.5
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/rancher/wrangler v0.8.4
github.com/rs/zerolog v1.26.0
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.2.1
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
helm.sh/helm/v3 v3.7.1
k8s.io/apimachinery v0.22.2
k8s.io/client-go v0.22.2
oras.land/oras-go v0.4.0
sigs.k8s.io/controller-runtime v0.10.3
github.com/opencontainers/image-spec v1.1.0-rc6
github.com/pkg/errors v0.9.1
github.com/rs/zerolog v1.31.0
github.com/sirupsen/logrus v1.9.3
github.com/spf13/afero v1.10.0
github.com/spf13/cobra v1.8.0
golang.org/x/sync v0.6.0
helm.sh/helm/v3 v3.14.0
k8s.io/apimachinery v0.29.0
k8s.io/client-go v0.29.0
oras.land/oras-go v1.2.5
)
require (
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd // indirect
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.1.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.2 // indirect
github.com/Masterminds/squirrel v1.5.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/Masterminds/semver/v3 v3.2.1 // indirect
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/hcsshim v0.11.4 // indirect
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d // indirect
github.com/andybalholm/brotli v1.0.0 // indirect
github.com/andybalholm/brotli v1.0.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 // indirect
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd // indirect
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b // indirect
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.7.0 // indirect
github.com/cyphar/filepath-securejoin v0.2.2 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/cli v20.10.9+incompatible // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/docker v20.10.9+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker v25.0.1+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1 // indirect
github.com/dsnet/compress v0.0.1 // indirect
github.com/evanphx/json-patch v4.11.0+incompatible // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/fatih/color v1.9.0 // indirect
github.com/felixge/httpsnoop v1.0.1 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-errors/errors v1.0.1 // indirect
github.com/go-logr/logr v0.4.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/go-sql-driver/mysql v1.6.0 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/snappy v0.0.3 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.2 // indirect
github.com/gomodule/redigo v1.8.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gorilla/handlers v1.5.1 // indirect
github.com/gorilla/mux v1.8.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jmoiron/sqlx v1.3.1 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/huandu/xstrings v1.4.0 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.3.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.11 // indirect
github.com/klauspost/compress v1.13.6 // indirect
github.com/klauspost/pgzip v1.2.4 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.16.5 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.10.0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-isatty v0.0.13 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/nwaples/rardecode v1.1.0 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pierrec/lz4/v4 v4.0.3 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.11.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/rancher/lasso v0.0.0-20210616224652-fc3ebd901c08 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/rubenv/sql-migrate v0.0.0-20210614095031-55d5740dbbcc // indirect
github.com/russross/blackfriday v1.5.2 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/shopspring/decimal v1.2.0 // indirect
github.com/spf13/cast v1.4.1 // indirect
github.com/pierrec/lz4/v4 v4.1.2 // indirect
github.com/prometheus/client_golang v1.16.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/rubenv/sql-migrate v1.5.2 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shopspring/decimal v1.3.1 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/testify v1.7.0 // indirect
github.com/ulikunitz/xz v0.5.7 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
github.com/ulikunitz/xz v0.5.9 // indirect
github.com/vbatts/tar-split v0.11.3 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 // indirect
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 // indirect
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f // indirect
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519 // indirect
golang.org/x/net v0.0.0-20210913180222-943fd674d43e // indirect
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c // indirect
golang.org/x/sys v0.0.0-20211013075003-97ac67df715c // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.45.0 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/crypto v0.18.0 // indirect
golang.org/x/net v0.17.0 // indirect
golang.org/x/oauth2 v0.10.0 // indirect
golang.org/x/sys v0.16.0 // indirect
golang.org/x/term v0.16.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20210719143636-1d5a45f8e492 // indirect
google.golang.org/grpc v1.39.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/gorp.v1 v1.7.2 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect
google.golang.org/grpc v1.58.3 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/api v0.22.2 // indirect
k8s.io/apiextensions-apiserver v0.22.2 // indirect
k8s.io/apiserver v0.22.2 // indirect
k8s.io/cli-runtime v0.22.1 // indirect
k8s.io/component-base v0.22.2 // indirect
k8s.io/klog/v2 v2.9.0 // indirect
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e // indirect
k8s.io/kubectl v0.22.1 // indirect
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a // indirect
sigs.k8s.io/kustomize/api v0.8.11 // indirect
sigs.k8s.io/kustomize/kyaml v0.11.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.29.0 // indirect
k8s.io/apiextensions-apiserver v0.29.0 // indirect
k8s.io/apiserver v0.29.0 // indirect
k8s.io/cli-runtime v0.29.0 // indirect
k8s.io/component-base v0.29.0 // indirect
k8s.io/klog/v2 v2.110.1 // indirect
k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect
k8s.io/kubectl v0.29.0 // indirect
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

1594
go.sum

File diff suppressed because it is too large

156
install.sh Executable file
View File

@@ -0,0 +1,156 @@
#!/bin/bash
# Usage:
# - curl -sfL... | ENV_VAR=... bash
# - ENV_VAR=... bash ./install.sh
# - ./install.sh ENV_VAR=...
# Example:
# Install Latest Release
# - curl -sfL https://get.hauler.dev | bash
# Install Specific Release
# - curl -sfL https://get.hauler.dev | HAULER_VERSION=0.4.2 bash
# Documentation:
# - https://hauler.dev
# - https://github.com/rancherfederal/hauler
# set functions for debugging/logging
function info {
echo && echo "[INFO] Hauler: $1"
}
function verbose {
echo "$1"
}
function warn {
echo && echo "[WARN] Hauler: $1"
}
function fatal {
echo && echo "[ERROR] Hauler: $1"
exit 1
}
# check for required dependencies
for cmd in curl sed awk openssl tar rm; do
if ! command -v "$cmd" &> /dev/null; then
fatal "$cmd is not installed"
fi
done
# start hauler installation
info "Starting Installation..."
# set version with an environment variable
version=${HAULER_VERSION:-$(curl -s https://api.github.com/repos/rancherfederal/hauler/releases/latest | grep '"tag_name":' | sed 's/.*"v\([^"]*\)".*/\1/')}
# set version with an argument
while [[ $# -gt 0 ]]; do
case "$1" in
HAULER_VERSION=*)
version="${1#*=}"
shift
;;
*)
shift
;;
esac
done
# detect the operating system
platform=$(uname -s | tr '[:upper:]' '[:lower:]')
case $platform in
linux)
platform="linux"
;;
darwin)
platform="darwin"
;;
*)
fatal "Unsupported Platform: $platform"
;;
esac
# detect the architecture
arch=$(uname -m)
case $arch in
x86_64 | x86-32 | x64 | x32 | amd64)
arch="amd64"
;;
aarch64 | arm64)
arch="arm64"
;;
*)
fatal "Unsupported Architecture: $arch"
;;
esac
# display the version, platform, and architecture
verbose "- Version: v$version"
verbose "- Platform: $platform"
verbose "- Architecture: $arch"
# download the checksum file
if ! curl -sOL "https://github.com/rancherfederal/hauler/releases/download/v${version}/hauler_${version}_checksums.txt"; then
fatal "Failed to Download: hauler_${version}_checksums.txt"
fi
# download the archive file
if ! curl -sOL "https://github.com/rancherfederal/hauler/releases/download/v${version}/hauler_${version}_${platform}_${arch}.tar.gz"; then
fatal "Failed to Download: hauler_${version}_${platform}_${arch}.tar.gz"
fi
# start hauler checksum verification
info "Starting Checksum Verification..."
# verify the hauler checksum
expected_checksum=$(awk -v version="$version" -v platform="$platform" -v arch="$arch" '$2 == "hauler_"version"_"platform"_"arch".tar.gz" {print $1}' "hauler_${version}_checksums.txt")
determined_checksum=$(openssl dgst -sha256 "hauler_${version}_${platform}_${arch}.tar.gz" | awk '{print $2}')
if [ -z "$expected_checksum" ]; then
fatal "Failed to Locate Checksum: hauler_${version}_${platform}_${arch}.tar.gz"
elif [ "$determined_checksum" = "$expected_checksum" ]; then
verbose "- Expected Checksum: $expected_checksum"
verbose "- Determined Checksum: $determined_checksum"
verbose "- Successfully Verified Checksum: hauler_${version}_${platform}_${arch}.tar.gz"
else
verbose "- Expected: $expected_checksum"
verbose "- Determined: $determined_checksum"
fatal "Failed Checksum Verification: hauler_${version}_${platform}_${arch}.tar.gz"
fi
# uncompress the archive
tar -xzf "hauler_${version}_${platform}_${arch}.tar.gz" || fatal "Failed to Extract: hauler_${version}_${platform}_${arch}.tar.gz"
# install the binary
case "$platform" in
linux)
install hauler /usr/local/bin || fatal "Failed to Install Hauler to /usr/local/bin"
;;
darwin)
install hauler /usr/local/bin || fatal "Failed to Install Hauler to /usr/local/bin"
;;
*)
fatal "Unsupported Platform or Architecture: $platform/$arch"
;;
esac
# clean up checksum(s)
rm -rf "hauler_${version}_checksums.txt" || warn "Failed to Remove: hauler_${version}_checksums.txt"
# clean up archive file(s)
rm -rf "hauler_${version}_${platform}_${arch}.tar.gz" || warn "Failed to Remove: hauler_${version}_${platform}_${arch}.tar.gz"
# clean up other files
rm -rf LICENSE README.md hauler
# display success message
info "Successfully Installed at /usr/local/bin/hauler"
# display availability message
verbose "- Hauler v${version} is now available for use!"
# display hauler docs message
verbose "- Documentation: https://hauler.dev" && echo

View File

@@ -0,0 +1,85 @@
package mapper
import (
"context"
"io/ioutil"
"os"
"path/filepath"
"strings"
ccontent "github.com/containerd/containerd/content"
"github.com/containerd/containerd/remotes"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"oras.land/oras-go/pkg/content"
)
// NewMapperFileStore creates a new file store that uses mapper functions for each detected descriptor.
// This extends content.File and differs in that it allows much more control over how each descriptor is written.
func NewMapperFileStore(root string, mapper map[string]Fn) *store {
fs := content.NewFile(root)
return &store{
File: fs,
mapper: mapper,
}
}
func (s *store) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
var tag, hash string
parts := strings.SplitN(ref, "@", 2)
if len(parts) > 0 {
tag = parts[0]
}
if len(parts) > 1 {
hash = parts[1]
}
return &pusher{
store: s.File,
tag: tag,
ref: hash,
mapper: s.mapper,
}, nil
}
type store struct {
*content.File
mapper map[string]Fn
}
func (s *pusher) Push(ctx context.Context, desc ocispec.Descriptor) (ccontent.Writer, error) {
// TODO: This is suuuuuper ugly... redo this when oras v2 is out
if _, ok := content.ResolveName(desc); ok {
p, err := s.store.Pusher(ctx, s.ref)
if err != nil {
return nil, err
}
return p.Push(ctx, desc)
}
// If no custom mapper is found for this media type, discard the content (only its digest is tracked)
if _, ok := s.mapper[desc.MediaType]; !ok {
return content.NewIoContentWriter(ioutil.Discard, content.WithOutputHash(desc.Digest)), nil
}
filename, err := s.mapper[desc.MediaType](desc)
if err != nil {
return nil, err
}
fullFileName := filepath.Join(s.store.ResolvePath(""), filename)
// TODO: Don't rewrite everytime, we can check the digest
f, err := os.OpenFile(fullFileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
return nil, errors.Wrap(err, "pushing file")
}
w := content.NewIoContentWriter(f, content.WithInputHash(desc.Digest), content.WithOutputHash(desc.Digest))
return w, nil
}
type pusher struct {
store *content.File
tag string
ref string
mapper map[string]Fn
}
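A minimal usage sketch for the mapper store above. The package's import path is not shown in this diff, so the `mapper` selector is an assumption, as is the example reference; descriptors whose media types appear in the Images() map are written to predictable filenames under the chosen root.
// sketch only: assumes this package is imported as `mapper` and containerd's remotes package is available
func newFilePusher(ctx context.Context) (remotes.Pusher, error) {
    // manifests are written to manifest.json, configs to config.json,
    // and layers to <digest>.tar.gz under ./layout
    s := mapper.NewMapperFileStore("./layout", mapper.Images())
    return s.Pusher(ctx, "registry.local/library/busybox:1.36")
}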

View File

@@ -0,0 +1,83 @@
package mapper
import (
"fmt"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"oras.land/oras-go/pkg/target"
"github.com/rancherfederal/hauler/pkg/consts"
)
type Fn func(desc ocispec.Descriptor) (string, error)
// FromManifest will return the appropriate content store given a reference and source type adequate for storing the results on disk
func FromManifest(manifest ocispec.Manifest, root string) (target.Target, error) {
// TODO: Don't rely solely on config mediatype
switch manifest.Config.MediaType {
case consts.DockerConfigJSON, consts.OCIManifestSchema1:
s := NewMapperFileStore(root, Images())
defer s.Close()
return s, nil
case consts.ChartLayerMediaType, consts.ChartConfigMediaType:
s := NewMapperFileStore(root, Chart())
defer s.Close()
return s, nil
default:
s := NewMapperFileStore(root, nil)
defer s.Close()
return s, nil
}
}
func Images() map[string]Fn {
m := make(map[string]Fn)
manifestMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "manifest.json", nil
})
for _, l := range []string{consts.DockerManifestSchema2, consts.DockerManifestListSchema2, consts.OCIManifestSchema1} {
m[l] = manifestMapperFn
}
layerMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return fmt.Sprintf("%s.tar.gz", desc.Digest.String()), nil
})
for _, l := range []string{consts.OCILayer, consts.DockerLayer} {
m[l] = layerMapperFn
}
configMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "config.json", nil
})
for _, l := range []string{consts.DockerConfigJSON} {
m[l] = configMapperFn
}
return m
}
func Chart() map[string]Fn {
m := make(map[string]Fn)
chartMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
f := "chart.tar.gz"
if _, ok := desc.Annotations[ocispec.AnnotationTitle]; ok {
f = desc.Annotations[ocispec.AnnotationTitle]
}
return f, nil
})
provMapperFn := Fn(func(desc ocispec.Descriptor) (string, error) {
return "prov.json", nil
})
m[consts.ChartLayerMediaType] = chartMapperFn
m[consts.ProvLayerMediaType] = provMapperFn
return m
}
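A hedged illustration of FromManifest (again, the import path is not visible in this diff): the config media type of a fetched manifest decides which mapper set backs the returned store.
// sketch only: `m` would be an ocispec.Manifest fetched elsewhere
func storeFor(m ocispec.Manifest, root string) (target.Target, error) {
    // Docker/OCI image configs select the Images() mappers, Helm chart media types select Chart(),
    // and anything else falls through to a plain file store
    return mapper.FromManifest(m, root)
}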

41
internal/server/file.go Normal file
View File

@@ -0,0 +1,41 @@
package server
import (
"context"
"fmt"
"net/http"
"os"
"time"
"github.com/gorilla/handlers"
"github.com/gorilla/mux"
)
type FileConfig struct {
Root string
Host string
Port int
}
// NewFile returns a fileserver
// TODO: Better configs
func NewFile(ctx context.Context, cfg FileConfig) (Server, error) {
if cfg.Root == "" {
cfg.Root = "."
}
if cfg.Port == 0 {
cfg.Port = 8080
}
r := mux.NewRouter()
r.PathPrefix("/").Handler(handlers.LoggingHandler(os.Stdout, http.StripPrefix("/", http.FileServer(http.Dir(cfg.Root)))))
srv := &http.Server{
Handler: r,
Addr: fmt.Sprintf(":%d", cfg.Port),
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
}
return srv, nil
}
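A hedged sketch of wiring the fileserver up from elsewhere in the module (internal packages are only importable within the module; the `server` import alias and the root/port values are assumptions).
// sketch only: assumes `server "github.com/rancherfederal/hauler/internal/server"` is imported
func serveFiles(ctx context.Context) error {
    srv, err := server.NewFile(ctx, server.FileConfig{Root: "./fileserver", Port: 8080})
    if err != nil {
        return err
    }
    // srv satisfies the Server interface defined below; ListenAndServe blocks until the server exits
    return srv.ListenAndServe()
}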

122
internal/server/registry.go Normal file
View File

@@ -0,0 +1,122 @@
package server
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"time"
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/registry"
"github.com/distribution/distribution/v3/registry/handlers"
"github.com/docker/go-metrics"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func NewRegistry(ctx context.Context, cfg *configuration.Configuration) (*registry.Registry, error) {
r, err := registry.NewRegistry(ctx, cfg)
if err != nil {
return nil, err
}
if cfg.HTTP.Debug.Prometheus.Enabled {
path := cfg.HTTP.Debug.Prometheus.Path
if path == "" {
path = "/metrics"
}
http.Handle(path, metrics.Handler())
}
return r, nil
}
type tmpRegistryServer struct {
*httptest.Server
}
func NewTempRegistry(ctx context.Context, root string) *tmpRegistryServer {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": root},
},
}
// Add validation configuration
cfg.Validation.Manifests.URLs.Allow = []string{".+"}
cfg.Log.Level = "error"
cfg.HTTP.Headers = http.Header{
"X-Content-Type-Options": []string{"nosniff"},
}
l, err := logrus.ParseLevel("panic")
if err != nil {
l = logrus.ErrorLevel
}
logrus.SetLevel(l)
app := handlers.NewApp(ctx, cfg)
app.RegisterHealthChecks()
handler := alive("/", app)
s := httptest.NewUnstartedServer(handler)
return &tmpRegistryServer{
Server: s,
}
}
// Registry returns the URL of the server without the protocol, suitable for content references
func (t *tmpRegistryServer) Registry() string {
return strings.Replace(t.Server.URL, "http://", "", 1)
}
func (t *tmpRegistryServer) Start() error {
t.Server.Start()
err := retry(5, 1*time.Second, func() (err error) {
resp, err := http.Get(t.Server.URL + "/v2")
if err != nil {
return err
}
resp.Body.Close()
if resp.StatusCode == http.StatusOK {
return nil
}
return errors.New("failed to start temporary registry")
})
return err
}
func (t *tmpRegistryServer) Stop() {
t.Server.Close()
}
func alive(path string, handler http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == path {
w.Header().Set("Cache-Control", "no-cache")
w.WriteHeader(http.StatusOK)
return
}
handler.ServeHTTP(w, r)
})
}
func retry(attempts int, sleep time.Duration, f func() error) (err error) {
for i := 0; i < attempts; i++ {
if i > 0 {
time.Sleep(sleep)
sleep *= 2
}
err = f()
if err == nil {
return nil
}
}
return fmt.Errorf("after %d attempts, last error: %s", attempts, err)
}
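A short, hedged example of the temporary registry's lifecycle, using only the methods defined above; the image reference is illustrative.
// sketch only: assumes `server "github.com/rancherfederal/hauler/internal/server"` is imported
func withTempRegistry(ctx context.Context, root string) error {
    reg := server.NewTempRegistry(ctx, root)
    if err := reg.Start(); err != nil {
        return err
    }
    defer reg.Stop()

    // Registry() strips the scheme, so the result can be used directly in image references
    ref := reg.Registry() + "/library/busybox:1.36"
    _ = ref
    return nil
}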

View File

@@ -0,0 +1,5 @@
package server
type Server interface {
ListenAndServe() error
}

229
internal/version/version.go Normal file
View File

@@ -0,0 +1,229 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package version
import (
"encoding/json"
"fmt"
"os"
"runtime"
"runtime/debug"
"strings"
"sync"
"text/tabwriter"
"time"
"github.com/common-nighthawk/go-figure"
)
const unknown = "unknown"
// Base version information.
//
// This is the fallback data used when version information from git is not
// provided via go ldflags.
var (
// Output of "git describe". The prerequisite is that the
// branch should be tagged using the correct versioning strategy.
gitVersion = "devel"
// SHA1 from git, output of $(git rev-parse HEAD)
gitCommit = unknown
// State of git tree, either "clean" or "dirty"
gitTreeState = unknown
// Build date in ISO8601 format, output of $(date -u +'%Y-%m-%dT%H:%M:%SZ')
buildDate = unknown
// flag to print the ascii name banner
asciiName = "true"
// goVersion is the Go version used for the build.
goVersion = unknown
// compiler is the Go compiler used for the build.
compiler = unknown
// platform is the os/arch identifier of the build platform.
platform = unknown
once sync.Once
info = Info{}
)
type Info struct {
GitVersion string `json:"gitVersion"`
GitCommit string `json:"gitCommit"`
GitTreeState string `json:"gitTreeState"`
BuildDate string `json:"buildDate"`
GoVersion string `json:"goVersion"`
Compiler string `json:"compiler"`
Platform string `json:"platform"`
ASCIIName string `json:"-"`
FontName string `json:"-"`
Name string `json:"-"`
Description string `json:"-"`
}
func getBuildInfo() *debug.BuildInfo {
bi, ok := debug.ReadBuildInfo()
if !ok {
return nil
}
return bi
}
func getGitVersion(bi *debug.BuildInfo) string {
if bi == nil {
return unknown
}
// TODO: remove this when the issue https://github.com/golang/go/issues/29228 is fixed
if bi.Main.Version == "(devel)" || bi.Main.Version == "" {
return gitVersion
}
return bi.Main.Version
}
func getCommit(bi *debug.BuildInfo) string {
return getKey(bi, "vcs.revision")
}
func getDirty(bi *debug.BuildInfo) string {
modified := getKey(bi, "vcs.modified")
if modified == "true" {
return "dirty"
}
if modified == "false" {
return "clean"
}
return unknown
}
func getBuildDate(bi *debug.BuildInfo) string {
buildTime := getKey(bi, "vcs.time")
t, err := time.Parse("2006-01-02T15:04:05Z", buildTime)
if err != nil {
return unknown
}
return t.Format("2006-01-02T15:04:05")
}
func getKey(bi *debug.BuildInfo, key string) string {
if bi == nil {
return unknown
}
for _, iter := range bi.Settings {
if iter.Key == key {
return iter.Value
}
}
return unknown
}
// GetVersionInfo represents known information on how this binary was built.
func GetVersionInfo() Info {
once.Do(func() {
buildInfo := getBuildInfo()
gitVersion = getGitVersion(buildInfo)
if gitCommit == unknown {
gitCommit = getCommit(buildInfo)
}
if gitTreeState == unknown {
gitTreeState = getDirty(buildInfo)
}
if buildDate == unknown {
buildDate = getBuildDate(buildInfo)
}
if goVersion == unknown {
goVersion = runtime.Version()
}
if compiler == unknown {
compiler = runtime.Compiler
}
if platform == unknown {
platform = fmt.Sprintf("%s/%s", runtime.GOOS, runtime.GOARCH)
}
info = Info{
ASCIIName: asciiName,
GitVersion: gitVersion,
GitCommit: gitCommit,
GitTreeState: gitTreeState,
BuildDate: buildDate,
GoVersion: goVersion,
Compiler: compiler,
Platform: platform,
}
})
return info
}
// String returns the string representation of the version info
func (i *Info) String() string {
b := strings.Builder{}
w := tabwriter.NewWriter(&b, 0, 0, 2, ' ', 0)
// name and description are optional.
if i.Name != "" {
if i.ASCIIName == "true" {
f := figure.NewFigure(strings.ToUpper(i.Name), i.FontName, true)
_, _ = fmt.Fprint(w, f.String())
}
_, _ = fmt.Fprint(w, i.Name)
if i.Description != "" {
_, _ = fmt.Fprintf(w, ": %s", i.Description)
}
_, _ = fmt.Fprint(w, "\n\n")
}
_, _ = fmt.Fprintf(w, "GitVersion:\t%s\n", i.GitVersion)
_, _ = fmt.Fprintf(w, "GitCommit:\t%s\n", i.GitCommit)
_, _ = fmt.Fprintf(w, "GitTreeState:\t%s\n", i.GitTreeState)
_, _ = fmt.Fprintf(w, "BuildDate:\t%s\n", i.BuildDate)
_, _ = fmt.Fprintf(w, "GoVersion:\t%s\n", i.GoVersion)
_, _ = fmt.Fprintf(w, "Compiler:\t%s\n", i.Compiler)
_, _ = fmt.Fprintf(w, "Platform:\t%s\n", i.Platform)
_ = w.Flush()
return b.String()
}
// JSONString returns the JSON representation of the version info
func (i *Info) JSONString() (string, error) {
b, err := json.MarshalIndent(i, "", " ")
if err != nil {
return "", err
}
return string(b), nil
}
func (i *Info) CheckFontName(fontName string) bool {
assetNames := figure.AssetNames()
for _, font := range assetNames {
if strings.Contains(font, fontName) {
return true
}
}
fmt.Fprintln(os.Stderr, "font not valid, using default")
return false
}
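A minimal sketch of consuming this package; the import alias is assumed.
// sketch only: assumes `version "github.com/rancherfederal/hauler/internal/version"` is imported
func printVersion() error {
    info := version.GetVersionInfo()
    fmt.Println(info.String()) // tabwriter-formatted key/value output

    js, err := info.JSONString()
    if err != nil {
        return err
    }
    fmt.Println(js)
    return nil
}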

View File

@@ -21,24 +21,27 @@ type ChartSpec struct {
}
type Chart struct {
Name string `json:"name"`
RepoURL string `json:"repoURL"`
Version string `json:"version"`
Name string `json:"name,omitempty"`
RepoURL string `json:"repoURL,omitempty"`
Version string `json:"version,omitempty"`
}
type ThickCharts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ChartSpec `json:"spec,omitempty"`
Spec ThickChartSpec `json:"spec,omitempty"`
}
type ThickChartSpec struct {
ThickCharts []ThickChart `json:"charts,omitempty"`
Charts []ThickChart `json:"charts,omitempty"`
}
type ThickChart struct {
Name string `json:"name"`
RepoURL string `json:"repoURL"`
Version string `json:"version"`
Chart `json:",inline,omitempty"`
ExtraImages []ChartImage `json:"extraImages,omitempty"`
}
type ChartImage struct {
Reference string `json:"ref"`
}

View File

@@ -18,6 +18,10 @@ type FileSpec struct {
}
type File struct {
Ref string `json:"ref"`
// Path is the path to the file contents, can be a local or remote path
Path string `json:"path"`
// Name is an optional field specifying the name of the file; when specified,
// it will override any dynamic name discovery from Path
Name string `json:"name,omitempty"`
}

View File

@@ -2,7 +2,6 @@ package v1alpha1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
const (
@@ -13,7 +12,7 @@ const (
var (
ContentGroupVersion = schema.GroupVersion{Group: ContentGroup, Version: Version}
SchemeBuilder = &scheme.Builder{GroupVersion: ContentGroupVersion}
// SchemeBuilder = &scheme.Builder{GroupVersion: ContentGroupVersion}
CollectionGroupVersion = schema.GroupVersion{Group: CollectionGroup, Version: Version}
)

View File

@@ -18,5 +18,14 @@ type ImageSpec struct {
}
type Image struct {
Ref string `json:"ref"`
// Name is the full location for the image; it can be referenced by tag or digest
Name string `json:"name"`
// Key is the path to the cosign public key used for verifying image signatures
//Key string `json:"key,omitempty"`
Key string `json:"key"`
// Platform of the image to be pulled. If not specified, all platforms will be pulled.
//Platform string `json:"key,omitempty"`
Platform string `json:"platform"`
}

View File

@@ -0,0 +1,30 @@
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ImageTxtsContentKind = "ImageTxts"
)
type ImageTxts struct {
*metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ImageTxtsSpec `json:"spec,omitempty"`
}
type ImageTxtsSpec struct {
ImageTxts []ImageTxt `json:"imageTxts,omitempty"`
}
type ImageTxt struct {
Ref string `json:"ref,omitempty"`
Sources ImageTxtSources `json:"sources,omitempty"`
}
type ImageTxtSources struct {
Include []string `json:"include,omitempty"`
Exclude []string `json:"exclude,omitempty"`
}

View File

@@ -1,10 +0,0 @@
package artifact
import v1 "github.com/google/go-containerregistry/pkg/v1"
type Config interface {
// Raw returns the config bytes
Raw() ([]byte, error)
Descriptor() (v1.Descriptor, error)
}

View File

@@ -1,37 +0,0 @@
package types
const (
OCIManifestSchema1 = "application/vnd.oci.image.manifest.v1+json"
DockerManifestSchema2 = "application/vnd.docker.distribution.manifest.v2+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
// ChartConfigMediaType is the reserved media type for the Helm chart manifest config
ChartConfigMediaType = "application/vnd.cncf.helm.config.v1+json"
// ChartLayerMediaType is the reserved media type for Helm chart package content
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
// ProvLayerMediaType is the reserved media type for Helm chart provenance files
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// FileLayerMediaType is the reserved media type for File content layers
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
// FileConfigMediaType is the reserved media type for File config
FileConfigMediaType = "application/vnd.content.hauler.file.config.v1+json"
// WasmArtifactLayerMediaType is the reserved media type for WASM artifact layers
WasmArtifactLayerMediaType = "application/vnd.wasm.content.layer.v1+wasm"
// WasmConfigMediaType is the reserved media type for WASM configs
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
UnknownManifest = "application/vnd.hauler.cattle.io.unknown.v1+json"
UnknownLayer = "application/vnd.content.hauler.unknown.layer"
OCIVendorPrefix = "vnd.oci"
DockerVendorPrefix = "vnd.docker"
HaulerVendorPrefix = "vnd.hauler"
OCIImageIndexFile = "index.json"
)

92
pkg/artifacts/config.go Normal file
View File

@@ -0,0 +1,92 @@
package artifacts
import (
"bytes"
"encoding/json"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/consts"
)
var _ partial.Describable = (*marshallableConfig)(nil)
type Config interface {
// Raw returns the config bytes
Raw() ([]byte, error)
Digest() (v1.Hash, error)
MediaType() (types.MediaType, error)
Size() (int64, error)
}
type Marshallable interface{}
type ConfigOption func(*marshallableConfig)
// ToConfig takes anything that is marshallabe and converts it into a Config
func ToConfig(i Marshallable, opts ...ConfigOption) Config {
mc := &marshallableConfig{Marshallable: i}
for _, o := range opts {
o(mc)
}
return mc
}
func WithConfigMediaType(mediaType string) ConfigOption {
return func(config *marshallableConfig) {
config.mediaType = mediaType
}
}
// marshallableConfig implements Config using helper methods
type marshallableConfig struct {
Marshallable
mediaType string
}
func (c *marshallableConfig) MediaType() (types.MediaType, error) {
mt := c.mediaType
if mt == "" {
mt = consts.UnknownManifest
}
return types.MediaType(mt), nil
}
func (c *marshallableConfig) Raw() ([]byte, error) {
return json.Marshal(c.Marshallable)
}
func (c *marshallableConfig) Digest() (v1.Hash, error) {
return Digest(c)
}
func (c *marshallableConfig) Size() (int64, error) {
return Size(c)
}
type WithRawConfig interface {
Raw() ([]byte, error)
}
func Digest(c WithRawConfig) (v1.Hash, error) {
b, err := c.Raw()
if err != nil {
return v1.Hash{}, err
}
digest, _, err := v1.SHA256(bytes.NewReader(b))
return digest, err
}
func Size(c WithRawConfig) (int64, error) {
b, err := c.Raw()
if err != nil {
return -1, err
}
return int64(len(b)), nil
}
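A hedged sketch of ToConfig: any marshallable value becomes a Config whose Raw, Digest, and Size derive from its JSON encoding. The example struct and reference value are illustrative only.
// sketch only: assumes the artifacts and consts packages from this repository are imported
type exampleConfig struct {
    Reference string `json:"reference"`
}

func buildConfig() (artifacts.Config, error) {
    cfg := artifacts.ToConfig(exampleConfig{Reference: "./bom.yaml"},
        artifacts.WithConfigMediaType(consts.FileLocalConfigMediaType))

    if _, err := cfg.Digest(); err != nil { // sha256 over the JSON bytes
        return nil, err
    }
    return cfg, nil
}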

116
pkg/artifacts/file/file.go Normal file
View File

@@ -0,0 +1,116 @@
package file
import (
"context"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/consts"
)
// interface guard
var _ artifacts.OCI = (*File)(nil)
// File implements the OCI interface for File API objects. API spec information is
// stored into the Path field.
type File struct {
Path string
computed bool
client *getter.Client
config artifacts.Config
blob gv1.Layer
manifest *gv1.Manifest
annotations map[string]string
}
func NewFile(path string, opts ...Option) *File {
client := getter.NewClient(getter.ClientOptions{})
f := &File{
client: client,
Path: path,
}
for _, opt := range opts {
opt(f)
}
return f
}
// Name is the name of the file's reference
func (f *File) Name(path string) string {
return f.client.Name(path)
}
func (f *File) MediaType() string {
return consts.OCIManifestSchema1
}
func (f *File) RawConfig() ([]byte, error) {
if err := f.compute(); err != nil {
return nil, err
}
return f.config.Raw()
}
func (f *File) Layers() ([]gv1.Layer, error) {
if err := f.compute(); err != nil {
return nil, err
}
var layers []gv1.Layer
layers = append(layers, f.blob)
return layers, nil
}
func (f *File) Manifest() (*gv1.Manifest, error) {
if err := f.compute(); err != nil {
return nil, err
}
return f.manifest, nil
}
func (f *File) compute() error {
if f.computed {
return nil
}
ctx := context.TODO()
blob, err := f.client.LayerFrom(ctx, f.Path)
if err != nil {
return err
}
layer, err := partial.Descriptor(blob)
if err != nil {
return err
}
cfg := f.config
if cfg == nil {
cfg = f.client.Config(f.Path)
}
cfgDesc, err := partial.Descriptor(cfg)
if err != nil {
return err
}
m := &gv1.Manifest{
SchemaVersion: 2,
MediaType: gtypes.MediaType(f.MediaType()),
Config: *cfgDesc,
Layers: []gv1.Descriptor{*layer},
Annotations: f.annotations,
}
f.manifest = m
f.config = cfg
f.blob = blob
f.computed = true
return nil
}
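A hedged usage sketch for the File artifact above; the URL is illustrative only.
// sketch only: assumes the file package above is imported as `file`
func describeFile() error {
    f := file.NewFile("https://example.com/config.yaml")

    m, err := f.Manifest()
    if err != nil {
        return err
    }
    // the config media type reflects the getter that matched (local file vs http),
    // and the single layer carries the file contents
    fmt.Println(m.Config.MediaType, len(m.Layers))
    return nil
}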

View File

@@ -0,0 +1,166 @@
package file_test
import (
"bytes"
"context"
"io"
"net/http"
"net/http/httptest"
"net/url"
"os"
"path/filepath"
"testing"
"github.com/spf13/afero"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/consts"
)
var (
filename = "myfile.yaml"
data = []byte(`data`)
ts *httptest.Server
tfs afero.Fs
mc *getter.Client
)
func TestMain(m *testing.M) {
teardown := setup()
defer teardown()
code := m.Run()
os.Exit(code)
}
func Test_file_Config(t *testing.T) {
tests := []struct {
name string
ref string
want string
wantErr bool
}{
{
name: "should properly type local file",
ref: filename,
want: consts.FileLocalConfigMediaType,
wantErr: false,
},
{
name: "should properly type remote file",
ref: ts.URL + "/" + filename,
want: consts.FileHttpConfigMediaType,
wantErr: false,
},
// TODO: Add directory test
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
f := file.NewFile(tt.ref, file.WithClient(mc))
f.MediaType()
m, err := f.Manifest()
if err != nil {
t.Fatal(err)
}
got := string(m.Config.MediaType)
if got != tt.want {
t.Errorf("unxpected mediatype; got %s, want %s", got, tt.want)
}
})
}
}
func Test_file_Layers(t *testing.T) {
tests := []struct {
name string
ref string
want []byte
wantErr bool
}{
{
name: "should load a local file and preserve contents",
ref: filename,
want: data,
wantErr: false,
},
{
name: "should load a remote file and preserve contents",
ref: ts.URL + "/" + filename,
want: data,
wantErr: false,
},
// TODO: Add directory test
}
for _, tt := range tests {
t.Run(tt.name, func(it *testing.T) {
f := file.NewFile(tt.ref, file.WithClient(mc))
layers, err := f.Layers()
if (err != nil) != tt.wantErr {
it.Fatalf("unexpected Layers() error: got %v, want %v", err, tt.wantErr)
}
rc, err := layers[0].Compressed()
if err != nil {
it.Fatal(err)
}
got, err := io.ReadAll(rc)
if err != nil {
it.Fatal(err)
}
if !bytes.Equal(got, tt.want) {
it.Fatalf("unexpected Layers(): got %v, want %v", layers, tt.want)
}
})
}
}
func setup() func() {
tfs = afero.NewMemMapFs()
afero.WriteFile(tfs, filename, data, 0644)
mf := &mockFile{File: getter.NewFile(), fs: tfs}
mockHttp := getter.NewHttp()
mhttp := afero.NewHttpFs(tfs)
fileserver := http.FileServer(mhttp.Dir("."))
http.Handle("/", fileserver)
ts = httptest.NewServer(fileserver)
mc = &getter.Client{
Options: getter.ClientOptions{},
Getters: map[string]getter.Getter{
"file": mf,
"http": mockHttp,
},
}
teardown := func() {
defer ts.Close()
}
return teardown
}
type mockFile struct {
*getter.File
fs afero.Fs
}
func (m mockFile) Open(ctx context.Context, u *url.URL) (io.ReadCloser, error) {
return m.fs.Open(filepath.Join(u.Host, u.Path))
}
func (m mockFile) Detect(u *url.URL) bool {
fi, err := m.fs.Stat(filepath.Join(u.Host, u.Path))
if err != nil {
return false
}
return !fi.IsDir()
}

View File

@@ -0,0 +1,165 @@
package getter
import (
"archive/tar"
"compress/gzip"
"context"
"io"
"net/url"
"os"
"path/filepath"
"time"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
)
type directory struct {
*File
}
func NewDirectory() *directory {
return &directory{File: NewFile()}
}
func (d directory) Open(ctx context.Context, u *url.URL) (io.ReadCloser, error) {
tmpfile, err := os.CreateTemp("", "hauler")
if err != nil {
return nil, err
}
digester := digest.Canonical.Digester()
zw := gzip.NewWriter(io.MultiWriter(tmpfile, digester.Hash()))
defer zw.Close()
tarDigester := digest.Canonical.Digester()
if err := tarDir(d.path(u), d.Name(u), io.MultiWriter(zw, tarDigester.Hash()), false); err != nil {
return nil, err
}
if err := zw.Close(); err != nil {
return nil, err
}
if err := tmpfile.Sync(); err != nil {
return nil, err
}
fi, err := os.Open(tmpfile.Name())
if err != nil {
return nil, err
}
// rc := &closer{
// t: io.TeeReader(tmpfile, fi),
// closes: []func() error{fi.Close, tmpfile.Close, zw.Close},
// }
return fi, nil
}
func (d directory) Detect(u *url.URL) bool {
if len(d.path(u)) == 0 {
return false
}
fi, err := os.Stat(d.path(u))
if err != nil {
return false
}
return fi.IsDir()
}
func (d directory) Config(u *url.URL) artifacts.Config {
c := &directoryConfig{
config{Reference: u.String()},
}
return artifacts.ToConfig(c, artifacts.WithConfigMediaType(consts.FileDirectoryConfigMediaType))
}
type directoryConfig struct {
config `json:",inline,omitempty"`
}
func tarDir(root string, prefix string, w io.Writer, stripTimes bool) error {
tw := tar.NewWriter(w)
defer tw.Close()
if err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
// Rename path
name, err := filepath.Rel(root, path)
if err != nil {
return err
}
name = filepath.Join(prefix, name)
name = filepath.ToSlash(name)
// Generate header
var link string
mode := info.Mode()
if mode&os.ModeSymlink != 0 {
if link, err = os.Readlink(path); err != nil {
return err
}
}
header, err := tar.FileInfoHeader(info, link)
if err != nil {
return errors.Wrap(err, path)
}
header.Name = name
header.Uid = 0
header.Gid = 0
header.Uname = ""
header.Gname = ""
if stripTimes {
header.ModTime = time.Time{}
header.AccessTime = time.Time{}
header.ChangeTime = time.Time{}
}
// Write file
if err := tw.WriteHeader(header); err != nil {
return errors.Wrap(err, "tar")
}
if mode.IsRegular() {
file, err := os.Open(path)
if err != nil {
return err
}
defer file.Close()
if _, err := io.Copy(tw, file); err != nil {
return errors.Wrap(err, path)
}
}
return nil
}); err != nil {
return err
}
return nil
}
type closer struct {
t io.Reader
closes []func() error
}
func (c *closer) Read(p []byte) (n int, err error) {
return c.t.Read(p)
}
func (c *closer) Close() error {
var err error
for _, c := range c.closes {
lastErr := c()
if err == nil {
err = lastErr
}
}
return err
}

View File

@@ -0,0 +1,53 @@
package getter
import (
"context"
"io"
"net/url"
"os"
"path/filepath"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
)
type File struct{}
func NewFile() *File {
return &File{}
}
func (f File) Name(u *url.URL) string {
return filepath.Base(f.path(u))
}
func (f File) Open(ctx context.Context, u *url.URL) (io.ReadCloser, error) {
return os.Open(f.path(u))
}
func (f File) Detect(u *url.URL) bool {
if len(f.path(u)) == 0 {
return false
}
fi, err := os.Stat(f.path(u))
if err != nil {
return false
}
return !fi.IsDir()
}
func (f File) path(u *url.URL) string {
return filepath.Join(u.Host, u.Path)
}
func (f File) Config(u *url.URL) artifacts.Config {
c := &fileConfig{
config{Reference: u.String()},
}
return artifacts.ToConfig(c, artifacts.WithConfigMediaType(consts.FileLocalConfigMediaType))
}
type fileConfig struct {
config `json:",inline,omitempty"`
}

View File

@@ -0,0 +1,148 @@
package getter
import (
"context"
"fmt"
"io"
"net/url"
v1 "github.com/google/go-containerregistry/pkg/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"oras.land/oras-go/pkg/content"
content2 "github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/layer"
)
type Client struct {
Getters map[string]Getter
Options ClientOptions
}
// ClientOptions provides options for the client
type ClientOptions struct {
NameOverride string
}
var (
ErrGetterTypeUnknown = errors.New("no getter type found matching reference")
)
type Getter interface {
Open(context.Context, *url.URL) (io.ReadCloser, error)
Detect(*url.URL) bool
Name(*url.URL) string
Config(*url.URL) content2.Config
}
func NewClient(opts ClientOptions) *Client {
defaults := map[string]Getter{
"file": NewFile(),
"directory": NewDirectory(),
"http": NewHttp(),
}
c := &Client{
Getters: defaults,
Options: opts,
}
return c
}
func (c *Client) LayerFrom(ctx context.Context, source string) (v1.Layer, error) {
u, err := url.Parse(source)
if err != nil {
return nil, err
}
g, err := c.getterFrom(u)
if err != nil {
if errors.Is(err, ErrGetterTypeUnknown) {
return nil, err
}
return nil, fmt.Errorf("create getter: %w", err)
}
opener := func() (io.ReadCloser, error) {
return g.Open(ctx, u)
}
annotations := make(map[string]string)
annotations[ocispec.AnnotationTitle] = c.Name(source)
switch g.(type) {
case *directory:
annotations[content.AnnotationUnpack] = "true"
}
l, err := layer.FromOpener(opener,
layer.WithMediaType(consts.FileLayerMediaType),
layer.WithAnnotations(annotations))
if err != nil {
return nil, err
}
return l, nil
}
func (c *Client) ContentFrom(ctx context.Context, source string) (io.ReadCloser, error) {
u, err := url.Parse(source)
if err != nil {
return nil, fmt.Errorf("parse source %s: %w", source, err)
}
g, err := c.getterFrom(u)
if err != nil {
if errors.Is(err, ErrGetterTypeUnknown) {
return nil, err
}
return nil, fmt.Errorf("create getter: %w", err)
}
return g.Open(ctx, u)
}
func (c *Client) getterFrom(srcUrl *url.URL) (Getter, error) {
for _, g := range c.Getters {
if g.Detect(srcUrl) {
return g, nil
}
}
return nil, errors.Wrapf(ErrGetterTypeUnknown, "source %s", srcUrl.String())
}
func (c *Client) Name(source string) string {
if c.Options.NameOverride != "" {
return c.Options.NameOverride
}
u, err := url.Parse(source)
if err != nil {
return source
}
for _, g := range c.Getters {
if g.Detect(u) {
return g.Name(u)
}
}
return source
}
func (c *Client) Config(source string) content2.Config {
u, err := url.Parse(source)
if err != nil {
return nil
}
for _, g := range c.Getters {
if g.Detect(u) {
return g.Config(u)
}
}
return nil
}
type config struct {
Reference string `json:"reference"`
Annotations map[string]string `json:"annotations,omitempty"`
}
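A brief, hedged sketch of the getter client: Detect picks the getter for a source, and LayerFrom wraps the content as an OCI layer with a title annotation.
// sketch only: assumes the getter package above is imported as `getter`
func layerFromSource(ctx context.Context, source string) (v1.Layer, error) {
    c := getter.NewClient(getter.ClientOptions{})
    // the layer is typed consts.FileLayerMediaType and annotated with c.Name(source)
    return c.LayerFrom(ctx, source)
}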

View File

@@ -0,0 +1,139 @@
package getter_test
import (
"net/url"
"os"
"path/filepath"
"testing"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
)
func TestClient_Detect(t *testing.T) {
teardown := setup(t)
defer teardown()
c := getter.NewClient(getter.ClientOptions{})
type args struct {
source string
}
tests := []struct {
name string
args args
want string
}{
{
name: "should identify a file",
args: args{
source: fileWithExt,
},
want: "file",
},
{
name: "should identify a directory",
args: args{
source: rootDir,
},
want: "directory",
},
{
name: "should identify an http fqdn",
args: args{
source: "http://my.cool.website",
},
want: "http",
},
{
name: "should identify an http fqdn",
args: args{
source: "https://my.cool.website",
},
want: "http",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := identify(c, tt.args.source); got != tt.want {
t.Errorf("identify() = %v, want %v", got, tt.want)
}
})
}
}
func identify(c *getter.Client, source string) string {
u, _ := url.Parse(source)
for t, g := range c.Getters {
if g.Detect(u) {
return t
}
}
return ""
}
func TestClient_Name(t *testing.T) {
teardown := setup(t)
defer teardown()
type args struct {
source string
opts getter.ClientOptions
}
tests := []struct {
name string
args args
want string
}{
{
name: "should correctly name a file with an extension",
args: args{
source: fileWithExt,
opts: getter.ClientOptions{},
},
want: "file.yaml",
},
{
name: "should correctly name a directory",
args: args{
source: rootDir,
opts: getter.ClientOptions{},
},
want: rootDir,
},
{
name: "should correctly override a files name",
args: args{
source: fileWithExt,
opts: getter.ClientOptions{NameOverride: "myfile"},
},
want: "myfile",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
c := getter.NewClient(tt.args.opts)
if got := c.Name(tt.args.source); got != tt.want {
t.Errorf("Name() = %v, want %v", got, tt.want)
}
})
}
}
var (
rootDir = "gettertests"
fileWithExt = filepath.Join(rootDir, "file.yaml")
)
func setup(t *testing.T) func() {
if err := os.MkdirAll(rootDir, os.ModePerm); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(fileWithExt, []byte(""), 0644); err != nil {
t.Fatal(err)
}
return func() {
os.RemoveAll(rootDir)
}
}

View File

@@ -0,0 +1,67 @@
package getter
import (
"context"
"io"
"mime"
"net/http"
"net/url"
"path/filepath"
"strings"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
)
type Http struct{}
func NewHttp() *Http {
return &Http{}
}
func (h Http) Name(u *url.URL) string {
resp, err := http.Head(u.String())
if err != nil {
return ""
}
contentType := resp.Header.Get("Content-Type")
for _, v := range strings.Split(contentType, ",") {
t, _, err := mime.ParseMediaType(v)
if err != nil {
break
}
// TODO: Identify known mimetypes for hints at a filename
_ = t
}
// TODO: Not this
return filepath.Base(u.String())
}
func (h Http) Open(ctx context.Context, u *url.URL) (io.ReadCloser, error) {
resp, err := http.Get(u.String())
if err != nil {
return nil, err
}
return resp.Body, nil
}
func (h Http) Detect(u *url.URL) bool {
switch u.Scheme {
case "http", "https":
return true
}
return false
}
func (h *Http) Config(u *url.URL) artifacts.Config {
c := &httpConfig{
config{Reference: u.String()},
}
return artifacts.ToConfig(c, artifacts.WithConfigMediaType(consts.FileHttpConfigMediaType))
}
type httpConfig struct {
config `json:",inline,omitempty"`
}

View File

@@ -0,0 +1,26 @@
package file
import (
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
)
type Option func(*File)
func WithClient(c *getter.Client) Option {
return func(f *File) {
f.client = c
}
}
func WithConfig(obj interface{}, mediaType string) Option {
return func(f *File) {
f.config = artifacts.ToConfig(obj, artifacts.WithConfigMediaType(mediaType))
}
}
func WithAnnotations(m map[string]string) Option {
return func(f *File) {
f.annotations = m
}
}

View File

@@ -0,0 +1,53 @@
package image
import (
"github.com/google/go-containerregistry/pkg/authn"
gname "github.com/google/go-containerregistry/pkg/name"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/rancherfederal/hauler/pkg/artifacts"
)
var _ artifacts.OCI = (*Image)(nil)
func (i *Image) MediaType() string {
mt, err := i.Image.MediaType()
if err != nil {
return ""
}
return string(mt)
}
func (i *Image) RawConfig() ([]byte, error) {
return i.RawConfigFile()
}
// Image implements the OCI interface for Image API objects. API spec information
// is stored into the Name field.
type Image struct {
Name string
gv1.Image
}
func NewImage(name string, opts ...remote.Option) (*Image, error) {
r, err := gname.ParseReference(name)
if err != nil {
return nil, err
}
defaultOpts := []remote.Option{
remote.WithAuthFromKeychain(authn.DefaultKeychain),
}
opts = append(opts, defaultOpts...)
img, err := remote.Image(r, opts...)
if err != nil {
return nil, err
}
return &Image{
Name: name,
Image: img,
}, nil
}
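A small, hedged sketch for the Image wrapper; the reference is illustrative.
// sketch only: assumes the image package above is imported as `image`
func pullManifest() error {
    img, err := image.NewImage("busybox:1.36")
    if err != nil {
        return err
    }
    raw, err := img.RawConfig() // raw config file bytes from the remote image
    if err != nil {
        return err
    }
    fmt.Println(img.MediaType(), len(raw))
    return nil
}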

View File

@@ -0,0 +1 @@
package image_test

View File

@@ -0,0 +1,78 @@
package memory
import (
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/static"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
)
var _ artifacts.OCI = (*Memory)(nil)
// Memory implements the OCI interface for a generic set of bytes stored in memory.
type Memory struct {
blob v1.Layer
annotations map[string]string
config artifacts.Config
}
type defaultConfig struct {
MediaType string `json:"mediaType,omitempty"`
}
func NewMemory(data []byte, mt string, opts ...Option) *Memory {
blob := static.NewLayer(data, types.MediaType(mt))
cfg := defaultConfig{MediaType: consts.MemoryConfigMediaType}
m := &Memory{
blob: blob,
config: artifacts.ToConfig(cfg),
}
for _, opt := range opts {
opt(m)
}
return m
}
func (m *Memory) MediaType() string {
return consts.OCIManifestSchema1
}
func (m *Memory) Manifest() (*v1.Manifest, error) {
layer, err := partial.Descriptor(m.blob)
if err != nil {
return nil, err
}
cfgDesc, err := partial.Descriptor(m.config)
if err != nil {
return nil, err
}
manifest := &v1.Manifest{
SchemaVersion: 2,
MediaType: types.MediaType(m.MediaType()),
Config: *cfgDesc,
Layers: []v1.Descriptor{*layer},
Annotations: m.annotations,
}
return manifest, nil
}
func (m *Memory) RawConfig() ([]byte, error) {
if m.config == nil {
return []byte(`{}`), nil
}
return m.config.Raw()
}
func (m *Memory) Layers() ([]v1.Layer, error) {
var layers []v1.Layer
layers = append(layers, m.blob)
return layers, nil
}
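A hedged sketch of the Memory artifact: arbitrary bytes wrapped as a single-layer OCI artifact. The chosen layer media type is an illustrative assumption.
// sketch only: assumes the memory and consts packages above are imported
func wrapBytes(data []byte) error {
    m := memory.NewMemory(data, consts.FileLayerMediaType)

    manifest, err := m.Manifest()
    if err != nil {
        return err
    }
    // one layer holding the raw bytes; the default config is typed consts.MemoryConfigMediaType
    fmt.Println(manifest.MediaType, len(manifest.Layers))
    return nil
}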

View File

@@ -0,0 +1,61 @@
package memory_test
import (
"math/rand"
"testing"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/opencontainers/go-digest"
"github.com/rancherfederal/hauler/pkg/artifacts/memory"
)
func TestMemory_Layers(t *testing.T) {
tests := []struct {
name string
want *v1.Manifest
wantErr bool
}{
{
name: "should preserve content",
want: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
data, m := setup(t)
layers, err := m.Layers()
if err != nil {
t.Fatal(err)
}
if len(layers) != 1 {
t.Fatalf("Expected 1 layer, got %d", len(layers))
}
h, err := layers[0].Digest()
if err != nil {
t.Fatal(err)
}
d := digest.FromBytes(data)
if d.String() != h.String() {
t.Fatalf("bytes do not match, got %s, expected %s", h.String(), d.String())
}
})
}
}
func setup(t *testing.T) ([]byte, *memory.Memory) {
block := make([]byte, 2048)
_, err := rand.Read(block)
if err != nil {
t.Fatal(err)
}
mem := memory.NewMemory(block, "random")
return block, mem
}

View File

@@ -0,0 +1,17 @@
package memory
import "github.com/rancherfederal/hauler/pkg/artifacts"
type Option func(*Memory)
func WithConfig(obj interface{}, mediaType string) Option {
return func(m *Memory) {
m.config = artifacts.ToConfig(obj, artifacts.WithConfigMediaType(mediaType))
}
}
func WithAnnotations(annotations map[string]string) Option {
return func(m *Memory) {
m.annotations = annotations
}
}

View File

@@ -1,9 +1,6 @@
package artifact
package artifacts
import (
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1"
)
import "github.com/google/go-containerregistry/pkg/v1"
// OCI is the bare minimum we need to represent an artifact in an oci layout
// At a high level, it is not constrained by an Image's config, manifests, and layer ordinality
@@ -18,7 +15,7 @@ type OCI interface {
Layers() ([]v1.Layer, error)
}
type Collection interface {
type OCICollection interface {
// Contents returns the list of contents in the collection
Contents() (map[name.Reference]OCI, error)
Contents() (map[string]OCI, error)
}

5
pkg/cache/doc.go vendored
View File

@@ -1,5 +0,0 @@
package cache
/*
This package is _heavily_ influenced by go-containerregistry and it's cache implementation: https://github.com/google/go-containerregistry/tree/main/pkg/v1/cache
*/

View File

@@ -1,42 +1,40 @@
package chart
import (
gname "github.com/google/go-containerregistry/pkg/name"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/chart"
"github.com/rancherfederal/hauler/pkg/content/image"
"github.com/rancherfederal/hauler/pkg/reference"
)
var _ artifact.Collection = (*tchart)(nil)
var _ artifacts.OCICollection = (*tchart)(nil)
// tchart is a thick chart that includes all the dependent images as well as the chart itself
type tchart struct {
name string
repo string
version string
chart *chart.Chart
chart *chart.Chart
config v1alpha1.ThickChart
computed bool
contents map[gname.Reference]artifact.OCI
contents map[string]artifacts.OCI
}
func NewChart(name, repo, version string) (artifact.Collection, error) {
o, err := chart.NewChart(name, repo, version)
func NewThickChart(cfg v1alpha1.ThickChart, opts *action.ChartPathOptions) (artifacts.OCICollection, error) {
o, err := chart.NewChart(cfg.Chart.Name, opts)
if err != nil {
return nil, err
}
return &tchart{
name: name,
repo: repo,
version: version,
chart: o,
contents: make(map[gname.Reference]artifact.OCI),
config: cfg,
contents: make(map[string]artifacts.OCI),
}, nil
}
func (c *tchart) Contents() (map[gname.Reference]artifact.OCI, error) {
func (c *tchart) Contents() (map[string]artifacts.OCI, error) {
if err := c.compute(); err != nil {
return nil, err
}
@@ -51,32 +49,28 @@ func (c *tchart) compute() error {
if err := c.dependentImages(); err != nil {
return err
}
if err := c.chartContents(); err != nil {
return err
}
if err := c.extraImages(); err != nil {
return err
}
c.computed = true
return nil
}
func (c *tchart) chartContents() error {
oci, err := chart.NewChart(c.name, c.repo, c.version)
ch, err := c.chart.Load()
if err != nil {
return err
}
tag := c.version
if tag == "" {
tag = gname.DefaultTag
}
ref, err := gname.ParseReference(c.name, gname.WithDefaultRegistry(""), gname.WithDefaultTag(tag))
ref, err := reference.NewTagged(ch.Name(), ch.Metadata.Version)
if err != nil {
return err
}
c.contents[ref] = oci
c.contents[ref.Name()] = c.chart
return nil
}
@@ -92,17 +86,22 @@ func (c *tchart) dependentImages() error {
}
for _, img := range imgs.Spec.Images {
ref, err := gname.ParseReference(img.Ref)
i, err := image.NewImage(img.Name)
if err != nil {
return err
}
i, err := image.NewImage(img.Ref)
if err != nil {
return err
}
c.contents[ref] = i
c.contents[img.Name] = i
}
return nil
}
func (c *tchart) extraImages() error {
for _, img := range c.config.ExtraImages {
i, err := image.NewImage(img.Reference)
if err != nil {
return err
}
c.contents[img.Reference] = i
}
return nil
}

View File

@@ -1,20 +1,18 @@
package chart
import (
"bufio"
"bytes"
"encoding/json"
"io"
"strings"
"github.com/rancher/wrangler/pkg/yaml"
"helm.sh/helm/v3/pkg/action"
helmchart "helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chartutil"
"helm.sh/helm/v3/pkg/kube/fake"
"helm.sh/helm/v3/pkg/storage"
"helm.sh/helm/v3/pkg/storage/driver"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/yaml"
"k8s.io/client-go/util/jsonpath"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
@@ -32,50 +30,37 @@ var defaultKnownImagePaths = []string{
// ImagesInChart will render a chart and identify all dependent images from it
func ImagesInChart(c *helmchart.Chart) (v1alpha1.Images, error) {
objs, err := template(c)
docs, err := template(c)
if err != nil {
return v1alpha1.Images{}, err
}
var imageRefs []string
for _, o := range objs {
d, err := o.(*unstructured.Unstructured).MarshalJSON()
var images []v1alpha1.Image
reader := yaml.NewYAMLReader(bufio.NewReader(strings.NewReader(docs)))
for {
raw, err := reader.Read()
if err == io.EOF {
break
}
if err != nil {
// TODO: Should we actually capture these errors?
continue
return v1alpha1.Images{}, err
}
var obj interface{}
if err := json.Unmarshal(d, &obj); err != nil {
continue
}
j := jsonpath.New("")
j.AllowMissingKeys(true)
for _, p := range defaultKnownImagePaths {
r, err := parseJSONPath(obj, j, p)
if err != nil {
continue
}
imageRefs = append(imageRefs, r...)
found := find(raw, defaultKnownImagePaths...)
for _, f := range found {
images = append(images, v1alpha1.Image{Name: f})
}
}
ims := v1alpha1.Images{
Spec: v1alpha1.ImageSpec{
Images: []v1alpha1.Image{},
Images: images,
},
}
for _, ref := range imageRefs {
ims.Spec.Images = append(ims.Spec.Images, v1alpha1.Image{Ref: ref})
}
return ims, nil
}
func template(c *helmchart.Chart) ([]runtime.Object, error) {
func template(c *helmchart.Chart) (string, error) {
s := storage.Init(driver.NewMemory())
templateCfg := &action.Configuration{
@@ -99,10 +84,33 @@ func template(c *helmchart.Chart) ([]runtime.Object, error) {
release, err := client.Run(c, vals)
if err != nil {
return nil, err
return "", err
}
return yaml.ToObjects(bytes.NewBufferString(release.Manifest))
return release.Manifest, nil
}
func find(data []byte, paths ...string) []string {
var (
pathMatches []string
obj interface{}
)
if err := yaml.Unmarshal(data, &obj); err != nil {
return nil
}
j := jsonpath.New("")
j.AllowMissingKeys(true)
for _, p := range paths {
r, err := parseJSONPath(obj, j, p)
if err != nil {
continue
}
pathMatches = append(pathMatches, r...)
}
return pathMatches
}
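To make the extraction mechanics concrete, here is a small standalone sketch of the same pattern used by find above, calling client-go's jsonpath directly; sigs.k8s.io/yaml stands in for the wrangler YAML helper, and {..image} is only an illustrative path, not necessarily one of defaultKnownImagePaths.

package main

import (
	"fmt"

	"k8s.io/client-go/util/jsonpath"
	"sigs.k8s.io/yaml"
)

func main() {
	doc := []byte(`
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: web
        image: nginx:1.19
`)
	var obj interface{}
	if err := yaml.Unmarshal(doc, &obj); err != nil {
		panic(err)
	}
	j := jsonpath.New("images")
	j.AllowMissingKeys(true)
	// Recursive descent matches every "image" field anywhere in the document.
	if err := j.Parse("{..image}"); err != nil {
		panic(err)
	}
	results, err := j.FindResults(obj)
	if err != nil {
		panic(err)
	}
	for _, group := range results {
		for _, r := range group {
			fmt.Println(r.Interface()) // nginx:1.19
		}
	}
}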
func parseJSONPath(data interface{}, parser *jsonpath.JSONPath, template string) ([]string, error) {

View File

@@ -0,0 +1,232 @@
package imagetxt
import (
"bufio"
"context"
"fmt"
"io"
"os"
"strings"
"sync"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/google/go-containerregistry/pkg/name"
artifact "github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
)
type ImageTxt struct {
Ref string
IncludeSources map[string]bool
ExcludeSources map[string]bool
lock *sync.Mutex
client *getter.Client
computed bool
contents map[string]artifact.OCI
}
var _ artifact.OCICollection = (*ImageTxt)(nil)
type Option interface {
Apply(*ImageTxt) error
}
type withIncludeSources []string
func (o withIncludeSources) Apply(it *ImageTxt) error {
if it.IncludeSources == nil {
it.IncludeSources = make(map[string]bool)
}
for _, s := range o {
it.IncludeSources[s] = true
}
return nil
}
func WithIncludeSources(include ...string) Option {
return withIncludeSources(include)
}
type withExcludeSources []string
func (o withExcludeSources) Apply(it *ImageTxt) error {
if it.ExcludeSources == nil {
it.ExcludeSources = make(map[string]bool)
}
for _, s := range o {
it.ExcludeSources[s] = true
}
return nil
}
func WithExcludeSources(exclude ...string) Option {
return withExcludeSources(exclude)
}
func New(ref string, opts ...Option) (*ImageTxt, error) {
it := &ImageTxt{
Ref: ref,
client: getter.NewClient(getter.ClientOptions{}),
lock: &sync.Mutex{},
}
for i, o := range opts {
if err := o.Apply(it); err != nil {
return nil, fmt.Errorf("invalid option %d: %v", i, err)
}
}
return it, nil
}
func (it *ImageTxt) Contents() (map[string]artifact.OCI, error) {
it.lock.Lock()
defer it.lock.Unlock()
if !it.computed {
if err := it.compute(); err != nil {
return nil, fmt.Errorf("compute OCI layout: %v", err)
}
it.computed = true
}
return it.contents, nil
}
func (it *ImageTxt) compute() error {
// TODO - pass in logger from context
l := log.NewLogger(os.Stdout)
it.contents = make(map[string]artifact.OCI)
ctx := context.TODO()
rc, err := it.client.ContentFrom(ctx, it.Ref)
if err != nil {
return fmt.Errorf("fetch image.txt ref %s: %w", it.Ref, err)
}
defer rc.Close()
entries, err := splitImagesTxt(rc)
if err != nil {
return fmt.Errorf("parse image.txt ref %s: %v", it.Ref, err)
}
foundSources := make(map[string]bool)
for _, e := range entries {
for s := range e.Sources {
foundSources[s] = true
}
}
var pullAll bool
targetSources := make(map[string]bool)
if len(foundSources) == 0 || (len(it.IncludeSources) == 0 && len(it.ExcludeSources) == 0) {
// pull all found images
pullAll = true
if len(foundSources) == 0 {
l.Infof("image txt file appears to have no sources; pulling all found images")
if len(it.IncludeSources) != 0 || len(it.ExcludeSources) != 0 {
l.Warnf("ImageTxt provided include or exclude sources; ignoring")
}
} else if len(it.IncludeSources) == 0 && len(it.ExcludeSources) == 0 {
l.Infof("image-sources txt file not filtered; pulling all found images")
}
} else {
// determine sources to pull
if len(it.IncludeSources) != 0 && len(it.ExcludeSources) != 0 {
l.Warnf("ImageTxt provided include and exclude sources; using only include sources")
}
if len(it.IncludeSources) != 0 {
targetSources = it.IncludeSources
} else {
for s := range foundSources {
targetSources[s] = true
}
for s := range it.ExcludeSources {
delete(targetSources, s)
}
}
var targetSourcesArr []string
for s := range targetSources {
targetSourcesArr = append(targetSourcesArr, s)
}
l.Infof("pulling images covering sources %s", strings.Join(targetSourcesArr, ", "))
}
for _, e := range entries {
var matchesSourceFilter bool
if pullAll {
l.Infof("pulling image %s", e.Reference)
} else {
for s := range e.Sources {
if targetSources[s] {
matchesSourceFilter = true
l.Infof("pulling image %s (matched source %s)", e.Reference, s)
break
}
}
}
if pullAll || matchesSourceFilter {
curImage, err := image.NewImage(e.Reference.String())
if err != nil {
return fmt.Errorf("pull image %s: %v", e.Reference, err)
}
it.contents[e.Reference.String()] = curImage
}
}
return nil
}
type imageTxtEntry struct {
Reference name.Reference
Sources map[string]bool
}
func splitImagesTxt(r io.Reader) ([]imageTxtEntry, error) {
var entries []imageTxtEntry
scanner := bufio.NewScanner(r)
for scanner.Scan() {
curEntry := imageTxtEntry{
Sources: make(map[string]bool),
}
lineContent := scanner.Text()
if lineContent == "" || strings.HasPrefix(lineContent, "#") {
// skip past empty and commented lines
continue
}
splitContent := strings.Split(lineContent, " ")
if len(splitContent) > 2 {
return nil, fmt.Errorf(
"invalid image.txt format: must contain only an image reference and sources separated by space; invalid line: %q",
lineContent)
}
curRef, err := name.ParseReference(splitContent[0])
if err != nil {
return nil, fmt.Errorf("invalid reference %s: %v", splitContent[0], err)
}
curEntry.Reference = curRef
if len(splitContent) == 2 {
for _, source := range strings.Split(splitContent[1], ",") {
curEntry.Sources[source] = true
}
}
entries = append(entries, curEntry)
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("scan contents: %v", err)
}
return entries, nil
}
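A short usage sketch for the new collection. The import path is an assumption (not visible in this diff), and images.txt is a local file in the two-column format parsed by splitImagesTxt above.

package main

import (
	"fmt"

	imagetxt "github.com/rancherfederal/hauler/pkg/collection/imagetxt" // assumed import path
)

func main() {
	// images.txt (hypothetical contents):
	//   busybox core
	//   nginx:1.19 core,nginx
	//   quay.io/jetstack/cert-manager-controller:v1.6.1 cert-manager
	it, err := imagetxt.New("./images.txt",
		imagetxt.WithIncludeSources("core"),
	)
	if err != nil {
		panic(err)
	}
	// Contents pulls only the entries whose sources match the include filter.
	contents, err := it.Contents()
	if err != nil {
		panic(err)
	}
	for ref := range contents {
		fmt.Println("pulling", ref)
	}
}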

View File

@@ -0,0 +1,209 @@
package imagetxt
import (
"errors"
"fmt"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
)
var (
ErrRefNotFound = errors.New("ref not found")
ErrRefNotImage = errors.New("ref is not image")
ErrExtraRefsFound = errors.New("extra refs found in contents")
)
var (
testServer *httptest.Server
)
func TestMain(m *testing.M) {
setup()
code := m.Run()
teardown()
os.Exit(code)
}
func setup() {
dir := http.Dir("./testdata/http/")
h := http.FileServer(dir)
testServer = httptest.NewServer(h)
}
func teardown() {
if testServer != nil {
testServer.Close()
}
}
type failKind string
const (
failKindNew = failKind("New")
failKindContents = failKind("Contents")
)
func checkError(checkedFailKind failKind) func(*testing.T, error, bool, failKind) {
return func(cet *testing.T, err error, testShouldFail bool, testFailKind failKind) {
if err != nil {
// if error should not have happened at all OR error should have happened
// at a different point, test failed
if !testShouldFail || testFailKind != checkedFailKind {
cet.Fatalf("unexpected error at %s: %v", checkedFailKind, err)
}
// test should fail at this point, test passed
return
}
// if no error occurred but error should have happened at this point, test
// failed
if testShouldFail && testFailKind == checkedFailKind {
cet.Fatalf("unexpected nil error at %s", checkedFailKind)
}
}
}
func TestImageTxtCollection(t *testing.T) {
type testEntry struct {
Name string
Ref string
IncludeSources []string
ExcludeSources []string
ExpectedImages []string
ShouldFail bool
FailKind failKind
}
tt := []testEntry{
{
Name: "http ref basic",
Ref: fmt.Sprintf("%s/images-http.txt", testServer.URL),
ExpectedImages: []string{
"busybox",
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
"quay.io/jetstack/cert-manager-controller:v1.6.1",
},
},
{
Name: "http ref sources format pull all",
Ref: fmt.Sprintf("%s/images-src-http.txt", testServer.URL),
ExpectedImages: []string{
"busybox",
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
"quay.io/jetstack/cert-manager-controller:v1.6.1",
},
},
{
Name: "http ref sources format include sources A",
Ref: fmt.Sprintf("%s/images-src-http.txt", testServer.URL),
IncludeSources: []string{
"core", "rke",
},
ExpectedImages: []string{
"busybox",
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
},
},
{
Name: "http ref sources format include sources B",
Ref: fmt.Sprintf("%s/images-src-http.txt", testServer.URL),
IncludeSources: []string{
"nginx", "rancher", "cert-manager",
},
ExpectedImages: []string{
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
"quay.io/jetstack/cert-manager-controller:v1.6.1",
},
},
{
Name: "http ref sources format exclude sources A",
Ref: fmt.Sprintf("%s/images-src-http.txt", testServer.URL),
ExcludeSources: []string{
"cert-manager",
},
ExpectedImages: []string{
"busybox",
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
},
},
{
Name: "http ref sources format exclude sources B",
Ref: fmt.Sprintf("%s/images-src-http.txt", testServer.URL),
ExcludeSources: []string{
"core",
},
ExpectedImages: []string{
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
"quay.io/jetstack/cert-manager-controller:v1.6.1",
},
},
{
Name: "local file ref",
Ref: "./testdata/images-file.txt",
ExpectedImages: []string{
"busybox",
"nginx:1.19",
"rancher/hyperkube:v1.21.7-rancher1",
"docker.io/rancher/klipper-lb:v0.3.4",
"quay.io/jetstack/cert-manager-controller:v1.6.1",
},
},
}
checkErrorNew := checkError(failKindNew)
checkErrorContents := checkError(failKindContents)
for _, curTest := range tt {
t.Run(curTest.Name, func(innerT *testing.T) {
curImageTxt, err := New(curTest.Ref,
WithIncludeSources(curTest.IncludeSources...),
WithExcludeSources(curTest.ExcludeSources...),
)
checkErrorNew(innerT, err, curTest.ShouldFail, curTest.FailKind)
ociContents, err := curImageTxt.Contents()
checkErrorContents(innerT, err, curTest.ShouldFail, curTest.FailKind)
if err := checkImages(ociContents, curTest.ExpectedImages); err != nil {
innerT.Fatal(err)
}
})
}
}
func checkImages(content map[string]artifacts.OCI, refs []string) error {
contentCopy := make(map[string]artifacts.OCI, len(content))
for k, v := range content {
contentCopy[k] = v
}
for _, ref := range refs {
target, ok := content[ref]
if !ok {
return fmt.Errorf("ref %s: %w", ref, ErrRefNotFound)
}
if _, ok := target.(*image.Image); !ok {
return fmt.Errorf("got underlying type %T: %w", target, ErrRefNotImage)
}
delete(contentCopy, ref)
}
if len(contentCopy) != 0 {
return ErrExtraRefsFound
}
return nil
}

View File

@@ -0,0 +1,5 @@
busybox
nginx:1.19
rancher/hyperkube:v1.21.7-rancher1
docker.io/rancher/klipper-lb:v0.3.4
quay.io/jetstack/cert-manager-controller:v1.6.1

View File

@@ -0,0 +1,5 @@
busybox core
nginx:1.19 core,nginx
rancher/hyperkube:v1.21.7-rancher1 rancher,rke
docker.io/rancher/klipper-lb:v0.3.4 rancher,k3s
quay.io/jetstack/cert-manager-controller:v1.6.1 cert-manager

View File

@@ -0,0 +1,5 @@
busybox
nginx:1.19
rancher/hyperkube:v1.21.7-rancher1
docker.io/rancher/klipper-lb:v0.3.4
quay.io/jetstack/cert-manager-controller:v1.6.1

View File

@@ -10,14 +10,17 @@ import (
"path"
"strings"
"github.com/google/go-containerregistry/pkg/name"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/artifacts/image"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/content/file"
"github.com/rancherfederal/hauler/pkg/content/image"
"github.com/rancherfederal/hauler/pkg/artifacts/file"
"github.com/rancherfederal/hauler/pkg/artifacts/file/getter"
"github.com/rancherfederal/hauler/pkg/reference"
)
var _ artifact.Collection = (*k3s)(nil)
var _ artifacts.OCICollection = (*k3s)(nil)
const (
releaseUrl = "https://github.com/k3s-io/k3s/releases/download"
@@ -37,18 +40,19 @@ type k3s struct {
arch string
computed bool
contents map[name.Reference]artifact.OCI
contents map[string]artifacts.OCI
channels map[string]string
client *getter.Client
}
func NewK3s(version string) (artifact.Collection, error) {
func NewK3s(version string) (artifacts.OCICollection, error) {
return &k3s{
version: version,
contents: make(map[name.Reference]artifact.OCI),
contents: make(map[string]artifacts.OCI),
}, nil
}
func (k *k3s) Contents() (map[name.Reference]artifact.OCI, error) {
func (k *k3s) Contents() (map[string]artifacts.OCI, error) {
if err := k.compute(); err != nil {
return nil, err
}
@@ -94,31 +98,18 @@ func (k *k3s) executable() error {
return ErrExecutableNotfound
}
f, err := file.NewFile(fref, "k3s")
if err != nil {
return err
}
ref, err := name.ParseReference("hauler/k3s", name.WithDefaultTag(k.dnsCompliantVersion()), name.WithDefaultRegistry(""))
if err != nil {
return err
}
f := file.NewFile(fref)
ref := fmt.Sprintf("%s/k3s:%s", reference.DefaultNamespace, k.dnsCompliantVersion())
k.contents[ref] = f
return nil
}
func (k *k3s) bootstrap() error {
f, err := file.NewFile(bootstrapUrl, "get-k3s.io")
if err != nil {
return err
}
ref, err := name.ParseReference("hauler/get-k3s.io", name.WithDefaultRegistry(""), name.WithDefaultTag("latest"))
if err != nil {
return err
}
c := getter.NewClient(getter.ClientOptions{NameOverride: "k3s-init.sh"})
f := file.NewFile(bootstrapUrl, file.WithClient(c))
ref := fmt.Sprintf("%s/k3s-init.sh:%s", reference.DefaultNamespace, reference.DefaultTag)
k.contents[ref] = f
return nil
}
@@ -135,16 +126,12 @@ func (k *k3s) images() error {
scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
reference := scanner.Text()
ref, err := name.ParseReference(reference)
if err != nil {
return err
}
o, err := image.NewImage(reference)
if err != nil {
return err
}
k.contents[ref] = o
k.contents[reference] = o
}
return nil
}

View File

@@ -1,71 +0,0 @@
package k3s
import (
"context"
"os"
"testing"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
// TODO: This is not at all a good test, we really just need to test the added collections functionality (like image scanning)
func TestNewK3s(t *testing.T) {
ctx := context.Background()
l := log.NewLogger(os.Stdout)
ctx = l.WithContext(ctx)
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Error(err)
}
defer os.Remove(tmpdir)
s := store.NewStore(ctx, tmpdir)
s.Open()
defer s.Close()
type args struct {
version string
}
tests := []struct {
name string
args args
want artifact.Collection
wantErr bool
}{
{
name: "should work",
args: args{
version: "v1.22.2+k3s2",
},
want: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := NewK3s(tt.args.version)
if (err != nil) != tt.wantErr {
t.Errorf("NewK3s() error = %v, wantErr %v", err, tt.wantErr)
return
}
c, err := got.Contents()
if err != nil {
t.Fatal(err)
}
for r, o := range c {
if _, err := s.AddArtifact(ctx, o, r); err != nil {
t.Fatal(err)
}
}
// if !reflect.DeepEqual(got, tt.want) {
// t.Errorf("NewK3s() got = %v, want %v", got, tt.want)
// }
})
}
}

57
pkg/consts/consts.go Normal file
View File

@@ -0,0 +1,57 @@
package consts
const (
OCIManifestSchema1 = "application/vnd.oci.image.manifest.v1+json"
DockerManifestSchema2 = "application/vnd.docker.distribution.manifest.v2+json"
DockerManifestListSchema2 = "application/vnd.docker.distribution.manifest.list.v2+json"
OCIImageIndexSchema = "application/vnd.oci.image.index.v1+json"
DockerConfigJSON = "application/vnd.docker.container.image.v1+json"
DockerLayer = "application/vnd.docker.image.rootfs.diff.tar.gzip"
DockerForeignLayer = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
DockerUncompressedLayer = "application/vnd.docker.image.rootfs.diff.tar"
OCILayer = "application/vnd.oci.image.layer.v1.tar+gzip"
OCIArtifact = "application/vnd.oci.empty.v1+json"
// ChartConfigMediaType is the reserved media type for the Helm chart manifest config
ChartConfigMediaType = "application/vnd.cncf.helm.config.v1+json"
// ChartLayerMediaType is the reserved media type for Helm chart package content
ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"
// ProvLayerMediaType is the reserved media type for Helm chart provenance files
ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
// FileLayerMediaType is the reserved media type for File content layers
FileLayerMediaType = "application/vnd.content.hauler.file.layer.v1"
// FileLocalConfigMediaType is the reserved media type for File config
FileLocalConfigMediaType = "application/vnd.content.hauler.file.local.config.v1+json"
FileDirectoryConfigMediaType = "application/vnd.content.hauler.file.directory.config.v1+json"
FileHttpConfigMediaType = "application/vnd.content.hauler.file.http.config.v1+json"
// MemoryConfigMediaType is the reserved media type for Memory config for a generic set of bytes stored in memory
MemoryConfigMediaType = "application/vnd.content.hauler.memory.config.v1+json"
// WasmArtifactLayerMediaType is the reserved media type for WASM artifact layers
WasmArtifactLayerMediaType = "application/vnd.wasm.content.layer.v1+wasm"
// WasmConfigMediaType is the reserved media type for WASM configs
WasmConfigMediaType = "application/vnd.wasm.config.v1+json"
UnknownManifest = "application/vnd.hauler.cattle.io.unknown.v1+json"
UnknownLayer = "application/vnd.content.hauler.unknown.layer"
OCIVendorPrefix = "vnd.oci"
DockerVendorPrefix = "vnd.docker"
HaulerVendorPrefix = "vnd.hauler"
OCIImageIndexFile = "index.json"
KindAnnotationName = "kind"
KindAnnotation = "dev.cosignproject.cosign/image"
CarbideRegistry = "rgcrprod.azurecr.us"
ImageAnnotationKey = "hauler.dev/key"
ImageAnnotationPlatform = "hauler.dev/platform"
ImageAnnotationRegistry = "hauler.dev/registry"
)
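As a quick illustration of how these media types get used, a descriptor for a chart layer might be assembled like this; the content bytes, digest, and title are placeholders, not values from the repository.

package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"

	"github.com/rancherfederal/hauler/pkg/consts"
)

func main() {
	data := []byte("chart archive bytes") // placeholder content
	desc := ocispec.Descriptor{
		MediaType: consts.ChartLayerMediaType,
		Digest:    digest.FromBytes(data),
		Size:      int64(len(data)),
		Annotations: map[string]string{
			ocispec.AnnotationTitle: "podinfo-6.0.3.tgz",
		},
	}
	fmt.Println(desc.MediaType, desc.Digest)
}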

View File

@@ -1,9 +1,12 @@
package chart
import (
"archive/tar"
"bytes"
"compress/gzip"
"encoding/json"
"io"
"io/fs"
"os"
"path/filepath"
@@ -11,42 +14,55 @@ import (
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/artifacts"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chart/loader"
"helm.sh/helm/v3/pkg/cli"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/artifact/local"
"github.com/rancherfederal/hauler/pkg/artifact/types"
"github.com/rancherfederal/hauler/pkg/layer"
"github.com/rancherfederal/hauler/pkg/consts"
)
var _ artifact.OCI = (*Chart)(nil)
var _ artifacts.OCI = (*Chart)(nil)
// Chart implements the OCI interface for Chart API objects. API spec values are
// stored into the Repo, Name, and Version fields.
type Chart struct {
path string
path string
annotations map[string]string
}
func NewChart(name, repo, version string) (*Chart, error) {
// NewChart locates a chart (a local archive/directory or a chart in a remote repository) using the provided ChartPathOptions and returns a Chart
func NewChart(name string, opts *action.ChartPathOptions) (*Chart, error) {
cpo := action.ChartPathOptions{
RepoURL: repo,
Version: version,
RepoURL: opts.RepoURL,
Version: opts.Version,
CaFile: opts.CaFile,
CertFile: opts.CertFile,
KeyFile: opts.KeyFile,
InsecureSkipTLSverify: opts.InsecureSkipTLSverify,
Keyring: opts.Keyring,
Password: opts.Password,
PassCredentialsAll: opts.PassCredentialsAll,
Username: opts.Username,
Verify: opts.Verify,
}
cp, err := cpo.LocateChart(name, cli.New())
chartPath, err := cpo.LocateChart(name, cli.New())
if err != nil {
return nil, err
}
return &Chart{
path: cp,
}, nil
path: chartPath,
}, err
}
func (h *Chart) MediaType() string {
return types.OCIManifestSchema1
return consts.OCIManifestSchema1
}
func (h *Chart) Manifest() (*gv1.Manifest, error) {
@@ -94,23 +110,18 @@ func (h *Chart) configDescriptor() (gv1.Descriptor, error) {
}
return gv1.Descriptor{
MediaType: types.ChartConfigMediaType,
MediaType: consts.ChartConfigMediaType,
Size: size,
Digest: hash,
}, nil
}
func (h *Chart) Load() (*chart.Chart, error) {
rc, err := chartOpener(h.path)()
if err != nil {
return nil, err
}
defer rc.Close()
return loader.LoadArchive(rc)
return loader.Load(h.path)
}
func (h *Chart) Layers() ([]gv1.Layer, error) {
chartDataLayer, err := h.chartDataLayer()
chartDataLayer, err := h.chartData()
if err != nil {
return nil, err
}
@@ -125,17 +136,84 @@ func (h *Chart) RawChartData() ([]byte, error) {
return os.ReadFile(h.path)
}
func (h *Chart) chartDataLayer() (gv1.Layer, error) {
// chartData loads the chart contents into memory and returns a NopCloser for the contents
//
// Normally we avoid loading into memory, but chart sizes are strictly capped at ~1MB
func (h *Chart) chartData() (gv1.Layer, error) {
info, err := os.Stat(h.path)
if err != nil {
return nil, err
}
var chartdata []byte
if info.IsDir() {
buf := &bytes.Buffer{}
gw := gzip.NewWriter(buf)
tw := tar.NewWriter(gw)
if err := filepath.WalkDir(h.path, func(path string, d fs.DirEntry, err error) error {
fi, err := d.Info()
if err != nil {
return err
}
header, err := tar.FileInfoHeader(fi, fi.Name())
if err != nil {
return err
}
rel, err := filepath.Rel(filepath.Dir(h.path), path)
if err != nil {
return err
}
header.Name = rel
if err := tw.WriteHeader(header); err != nil {
return err
}
if !d.IsDir() {
data, err := os.Open(path)
if err != nil {
return err
}
if _, err := io.Copy(tw, data); err != nil {
return err
}
}
return nil
}); err != nil {
return nil, err
}
if err := tw.Close(); err != nil {
return nil, err
}
if err := gw.Close(); err != nil {
return nil, err
}
chartdata = buf.Bytes()
} else {
data, err := os.ReadFile(h.path)
if err != nil {
return nil, err
}
chartdata = data
}
annotations := make(map[string]string)
annotations[ocispec.AnnotationTitle] = filepath.Base(h.path)
return local.LayerFromOpener(chartOpener(h.path),
local.WithMediaType(types.ChartLayerMediaType),
local.WithAnnotations(annotations))
}
func chartOpener(path string) local.Opener {
return func() (io.ReadCloser, error) {
return os.Open(path)
opener := func() layer.Opener {
return func() (io.ReadCloser, error) {
return io.NopCloser(bytes.NewBuffer(chartdata)), nil
}
}
chartDataLayer, err := layer.FromOpener(opener(),
layer.WithMediaType(consts.ChartLayerMediaType),
layer.WithAnnotations(annotations))
return chartDataLayer, err
}

View File

@@ -1,72 +1,117 @@
package chart_test
import (
"context"
"os"
"path"
"reflect"
"testing"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/mholt/archiver/v3"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"helm.sh/helm/v3/pkg/action"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/chart"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
func TestChart_Copy(t *testing.T) {
ctx := context.Background()
l := log.NewLogger(os.Stdout)
ctx = l.WithContext(ctx)
var (
chartpath = "../../../testdata/podinfo-6.0.3.tgz"
)
func TestNewChart(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Error(err)
t.Fatal(err)
}
defer os.Remove(tmpdir)
defer os.RemoveAll(tmpdir)
s := store.NewStore(ctx, tmpdir)
s.Open()
defer s.Close()
if err := archiver.Unarchive(chartpath, tmpdir); err != nil {
t.Fatal(err)
}
type args struct {
ctx context.Context
registry string
name string
opts *action.ChartPathOptions
}
tests := []struct {
name string
cfg v1alpha1.Chart
args args
want v1.Descriptor
wantErr bool
}{
// TODO: This test isn't self-contained
{
name: "should work with unversioned chart",
cfg: v1alpha1.Chart{
Name: "loki",
RepoURL: "https://grafana.github.io/helm-charts",
},
name: "should create from a chart archive",
args: args{
ctx: ctx,
registry: s.Registry(),
name: chartpath,
opts: &action.ChartPathOptions{},
},
want: v1.Descriptor{
MediaType: consts.ChartLayerMediaType,
Size: 13524,
Digest: v1.Hash{
Algorithm: "sha256",
Hex: "e30b95a08787de69ffdad3c232d65cfb131b5b50c6fd44295f48a078fceaa44e",
},
Annotations: map[string]string{
ocispec.AnnotationTitle: "podinfo-6.0.3.tgz",
},
},
wantErr: false,
},
// TODO: This isn't matching digests b/c of file timestamps not being respected
// {
// name: "should create from a chart directory",
// args: args{
// path: filepath.Join(tmpdir, "podinfo"),
// },
// want: want,
// wantErr: false,
// },
{
// TODO: Use a mock helm server
name: "should fetch a remote chart",
args: args{
name: "ingress-nginx",
opts: &action.ChartPathOptions{RepoURL: "https://kubernetes.github.io/ingress-nginx", Version: "4.0.16"},
},
want: v1.Descriptor{
MediaType: consts.ChartLayerMediaType,
Size: 38591,
Digest: v1.Hash{
Algorithm: "sha256",
Hex: "b0ea91f7febc6708ad9971871d2de6e8feb2072110c3add6dd7082d90753caa2",
},
Annotations: map[string]string{
ocispec.AnnotationTitle: "ingress-nginx-4.0.16.tgz",
},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
c, err := chart.NewChart(tt.cfg.Name, tt.cfg.RepoURL, tt.cfg.Version)
if err != nil {
t.Fatal(err)
}
ref, err := name.ParseReference(path.Join("hauler", tt.cfg.Name))
if err != nil {
t.Fatal(err)
got, err := chart.NewChart(tt.args.name, tt.args.opts)
if (err != nil) != tt.wantErr {
t.Errorf("NewLocalChart() error = %v, wantErr %v", err, tt.wantErr)
return
}
if _, err := s.AddArtifact(ctx, c, ref); (err != nil) != tt.wantErr {
m, err := got.Manifest()
if err != nil {
t.Error(err)
}
// TODO: This changes when we support provenance files
if len(m.Layers) > 1 {
t.Errorf("Expected 1 layer for chart, got %d", len(m.Layers))
}
desc := m.Layers[0]
if !reflect.DeepEqual(desc, tt.want) {
t.Errorf("got: %v\nwant: %v", desc, tt.want)
return
}
})
}
}

View File

@@ -11,7 +11,7 @@ import (
)
func Load(data []byte) (schema.ObjectKind, error) {
var tm *metav1.TypeMeta
var tm metav1.TypeMeta
if err := yaml.Unmarshal(data, &tm); err != nil {
return nil, err
}
@@ -20,5 +20,5 @@ func Load(data []byte) (schema.ObjectKind, error) {
return nil, fmt.Errorf("unrecognized content/collection type: %s", tm.GroupVersionKind().String())
}
return tm, nil
return &tm, nil
}
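For reference, the decode above amounts to peeling the TypeMeta off the document and switching on its GroupVersionKind. A standalone equivalent, using sigs.k8s.io/yaml in place of the project's YAML helper, would be:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Illustrative apiVersion/kind; the exact group string is an assumption.
	data := []byte("apiVersion: hauler.cattle.io/v1alpha1\nkind: Images\n")
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		panic(err)
	}
	// GroupVersionKind is what the loader switches on to pick a content/collection type.
	fmt.Println(tm.GroupVersionKind())
}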

View File

@@ -1,82 +0,0 @@
package file
import (
"bytes"
"encoding/json"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifact/types"
)
var _ partial.Describable = (*config)(nil)
type config struct {
Reference string `json:"ref"` // Reference is the reference from where the file was sourced
Name string `json:"name"` // Name is the files name on disk
Annotations map[string]string `json:"annotations,omitempty"`
URLs []string `json:"urls,omitempty"`
computed bool
size int64
hash gv1.Hash
}
func (c config) Descriptor() (gv1.Descriptor, error) {
if err := c.compute(); err != nil {
return gv1.Descriptor{}, err
}
return gv1.Descriptor{
MediaType: types.FileConfigMediaType,
Size: c.size,
Digest: c.hash,
URLs: c.URLs,
Annotations: c.Annotations,
// Platform: nil,
}, nil
}
func (c config) Digest() (gv1.Hash, error) {
if err := c.compute(); err != nil {
return gv1.Hash{}, err
}
return c.hash, nil
}
func (c config) MediaType() (gtypes.MediaType, error) {
return types.FileConfigMediaType, nil
}
func (c config) Size() (int64, error) {
if err := c.compute(); err != nil {
return 0, err
}
return c.size, nil
}
func (c *config) Raw() ([]byte, error) {
return json.Marshal(c)
}
func (c *config) compute() error {
if c.computed {
return nil
}
data, err := c.Raw()
if err != nil {
return err
}
h, size, err := gv1.SHA256(bytes.NewBuffer(data))
if err != nil {
return err
}
c.size = size
c.hash = h
return nil
}

View File

@@ -1,107 +0,0 @@
package file
import (
"io"
"net/http"
"os"
"strings"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/artifact/local"
"github.com/rancherfederal/hauler/pkg/artifact/types"
)
var _ artifact.OCI = (*file)(nil)
type file struct {
blob gv1.Layer
config config
blobMap map[gv1.Hash]gv1.Layer
annotations map[string]string
}
func NewFile(ref string, filename string) (*file, error) {
var getter local.Opener
if strings.HasPrefix(ref, "http") || strings.HasPrefix(ref, "https") {
getter = remoteOpener(ref)
} else {
getter = localOpener(ref)
}
annotations := make(map[string]string)
annotations[ocispec.AnnotationTitle] = filename // For oras FileStore to recognize
annotations[ocispec.AnnotationSource] = ref
blob, err := local.LayerFromOpener(getter,
local.WithMediaType(types.FileLayerMediaType),
local.WithAnnotations(annotations))
if err != nil {
return nil, err
}
f := &file{
blob: blob,
config: config{
Reference: ref,
Name: filename,
},
}
return f, nil
}
func (f *file) MediaType() string {
return types.OCIManifestSchema1
}
func (f *file) RawConfig() ([]byte, error) {
return f.config.Raw()
}
func (f *file) Layers() ([]gv1.Layer, error) {
var layers []gv1.Layer
layers = append(layers, f.blob)
return layers, nil
}
func (f *file) Manifest() (*gv1.Manifest, error) {
desc, err := partial.Descriptor(f.blob)
if err != nil {
return nil, err
}
layerDescs := []gv1.Descriptor{*desc}
cfgDesc, err := f.config.Descriptor()
if err != nil {
return nil, err
}
return &gv1.Manifest{
SchemaVersion: 2,
MediaType: gtypes.MediaType(f.MediaType()),
Config: cfgDesc,
Layers: layerDescs,
Annotations: f.annotations,
}, nil
}
func localOpener(path string) local.Opener {
return func() (io.ReadCloser, error) {
return os.Open(path)
}
}
func remoteOpener(url string) local.Opener {
return func() (io.ReadCloser, error) {
resp, err := http.Get(url)
if err != nil {
return nil, err
}
return resp.Body, nil
}
}

View File

@@ -1,188 +0,0 @@
package file_test
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"reflect"
"testing"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/artifact/types"
"github.com/rancherfederal/hauler/pkg/content/file"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
func TestFile_Copy(t *testing.T) {
ctx := context.Background()
l := log.NewLogger(os.Stdout)
ctx = l.WithContext(ctx)
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Error(err)
}
defer os.Remove(tmpdir)
// Make a temp file
f, err := os.CreateTemp(tmpdir, "tmp")
f.Write([]byte("content"))
defer f.Close()
fs := newTestFileServer(tmpdir)
fs.Start()
defer fs.Stop()
s := store.NewStore(ctx, tmpdir)
s.Open()
defer s.Close()
type args struct {
ctx context.Context
registry string
}
tests := []struct {
name string
cfg v1alpha1.File
args args
wantErr bool
}{
{
name: "should copy a local file successfully without an explicit name",
cfg: v1alpha1.File{
Ref: f.Name(),
Name: filepath.Base(f.Name()),
},
args: args{
ctx: ctx,
},
},
{
name: "should copy a local file successfully with an explicit name",
cfg: v1alpha1.File{
Ref: f.Name(),
Name: "my-other-file",
},
args: args{
ctx: ctx,
},
},
{
name: "should fail to copy a local file successfully with a malformed explicit name",
cfg: v1alpha1.File{
Ref: f.Name(),
Name: "my!invalid~@file",
},
args: args{
ctx: ctx,
},
wantErr: true,
},
{
name: "should copy a remote file successfully without an explicit name",
cfg: v1alpha1.File{
Ref: fmt.Sprintf("%s/%s", fs.server.URL, filepath.Base(f.Name())),
},
args: args{
ctx: ctx,
},
},
{
name: "should copy a remote file successfully with an explicit name",
cfg: v1alpha1.File{
Ref: fmt.Sprintf("%s/%s", fs.server.URL, filepath.Base(f.Name())),
Name: "my-other-file",
},
args: args{
ctx: ctx,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
f, err := file.NewFile(tt.cfg.Ref, tt.cfg.Name)
if err != nil {
t.Fatal(err)
}
ref, err := name.ParseReference("myfile")
if err != nil {
t.Fatal(err)
}
_, err = s.AddArtifact(ctx, f, ref)
if (err != nil) != tt.wantErr {
t.Error(err)
}
// if err := validate(tt.cfg.Ref, tt.cfg.Name, m); err != nil {
// t.Error(err)
// }
})
}
}
type testFileServer struct {
server *httptest.Server
}
func newTestFileServer(path string) *testFileServer {
s := httptest.NewUnstartedServer(http.FileServer(http.Dir(path)))
return &testFileServer{server: s}
}
func (s *testFileServer) Start() *httptest.Server {
s.server.Start()
return s.server
}
func (s *testFileServer) Stop() {
s.server.Close()
}
// validate ensures the produced manifest layers match the file contents at ref
func validate(ref string, name string, got *v1.Manifest) error {
data, err := os.ReadFile(ref)
if err != nil {
return err
}
d := digest.FromBytes(data)
annotations := make(map[string]string)
annotations[ocispec.AnnotationTitle] = name
annotations[ocispec.AnnotationSource] = ref
want := &v1.Manifest{
SchemaVersion: 2,
MediaType: types.OCIManifestSchema1,
Config: v1.Descriptor{},
Layers: []v1.Descriptor{
{
MediaType: types.FileLayerMediaType,
Size: int64(len(data)),
Digest: v1.Hash{
Algorithm: d.Algorithm().String(),
Hex: d.Hex(),
},
Annotations: annotations,
},
},
Annotations: nil,
}
if !reflect.DeepEqual(want.Layers, got.Layers) {
return fmt.Errorf("want = (%v) | got = (%v)", want, got)
}
return nil
}

View File

@@ -1,43 +0,0 @@
package image
import (
"github.com/google/go-containerregistry/pkg/name"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/rancherfederal/hauler/pkg/artifact"
)
var _ artifact.OCI = (*image)(nil)
func (i *image) MediaType() string {
mt, err := i.Image.MediaType()
if err != nil {
return ""
}
return string(mt)
}
func (i *image) RawConfig() ([]byte, error) {
return i.RawConfigFile()
}
type image struct {
gv1.Image
}
func NewImage(ref string) (*image, error) {
r, err := name.ParseReference(ref)
if err != nil {
return nil, err
}
img, err := remote.Image(r)
if err != nil {
return nil, err
}
return &image{
Image: img,
}, nil
}

View File

@@ -1,99 +0,0 @@
package image_test
import (
"context"
"os"
"path"
"path/filepath"
"testing"
"github.com/google/go-containerregistry/pkg/name"
"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
"github.com/rancherfederal/hauler/pkg/content/image"
"github.com/rancherfederal/hauler/pkg/log"
"github.com/rancherfederal/hauler/pkg/store"
)
func TestImage_Copy(t *testing.T) {
ctx := context.Background()
l := log.NewLogger(os.Stdout)
ctx = l.WithContext(ctx)
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
t.Error(err)
}
defer os.Remove(tmpdir)
s := store.NewStore(ctx, tmpdir)
s.Open()
defer s.Close()
type args struct {
ctx context.Context
registry string
}
tests := []struct {
name string
cfg v1alpha1.Image
args args
wantErr bool
}{
// TODO: These mostly test functionality we're not responsible for (go-containerregistry), refactor these to only stuff we care about
{
name: "should work with tagged image",
cfg: v1alpha1.Image{
Ref: "busybox:1.34.1",
},
args: args{
ctx: ctx,
// registry: s.Registry(),
},
wantErr: false,
},
{
name: "should work with digest image",
cfg: v1alpha1.Image{
Ref: "busybox@sha256:6066ca124f8c2686b7ae71aa1d6583b28c6dc3df3bdc386f2c89b92162c597d9",
},
args: args{
ctx: ctx,
// registry: s.Registry(),
},
wantErr: false,
},
{
name: "should work with tagged image",
cfg: v1alpha1.Image{
Ref: "registry:2",
},
args: args{
ctx: ctx,
// registry: s.Registry(),
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
i, err := image.NewImage(tt.cfg.Ref)
if err != nil {
t.Error(err)
}
ref, err := name.ParseReference(path.Join("hauler", filepath.Base(tt.cfg.Ref)))
if err != nil {
t.Fatal(err)
}
if _, err := s.AddArtifact(ctx, i, ref); (err != nil) != tt.wantErr {
t.Error(err)
}
// if err := s.Add(tt.args.ctx, i, ref); (err != nil) != tt.wantErr {
// t.Errorf("Copy() error = %v, wantErr %v", err, tt.wantErr)
// }
})
}
}

288
pkg/content/oci.go Normal file
View File

@@ -0,0 +1,288 @@
package content
import (
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"sync"
ccontent "github.com/containerd/containerd/content"
"github.com/containerd/containerd/remotes"
"github.com/opencontainers/image-spec/specs-go"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"oras.land/oras-go/pkg/content"
"oras.land/oras-go/pkg/target"
"github.com/rancherfederal/hauler/pkg/consts"
)
var _ target.Target = (*OCI)(nil)
type OCI struct {
root string
index *ocispec.Index
nameMap *sync.Map // map[string]ocispec.Descriptor
}
func NewOCI(root string) (*OCI, error) {
o := &OCI{
root: root,
nameMap: &sync.Map{},
}
return o, nil
}
// AddIndex adds a descriptor to the index and updates it
//
// The descriptor must use AnnotationRefName to identify itself
func (o *OCI) AddIndex(desc ocispec.Descriptor) error {
if _, ok := desc.Annotations[ocispec.AnnotationRefName]; !ok {
return fmt.Errorf("descriptor must contain a reference from the annotation: %s", ocispec.AnnotationRefName)
}
key := fmt.Sprintf("%s-%s-%s", desc.Digest.String(), desc.Annotations[ocispec.AnnotationRefName], desc.Annotations[consts.KindAnnotationName])
o.nameMap.Store(key, desc)
return o.SaveIndex()
}
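A minimal sketch of feeding a descriptor into the index. The manifest bytes and reference are placeholders; the AnnotationRefName requirement is the one enforced above.

package main

import (
	"fmt"
	"os"

	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"

	"github.com/rancherfederal/hauler/pkg/consts"
	"github.com/rancherfederal/hauler/pkg/content"
)

func main() {
	_ = os.MkdirAll("./store", 0o755) // oci-layout root on disk

	oci, err := content.NewOCI("./store")
	if err != nil {
		panic(err)
	}

	manifest := []byte("{}") // placeholder manifest bytes
	desc := ocispec.Descriptor{
		MediaType: consts.OCIManifestSchema1,
		Digest:    digest.FromBytes(manifest),
		Size:      int64(len(manifest)),
		Annotations: map[string]string{
			ocispec.AnnotationRefName: "hauler/k3s:v1.22.2-k3s2", // required by AddIndex
		},
	}
	if err := oci.AddIndex(desc); err != nil {
		panic(err)
	}

	// Walk visits every indexed reference.
	_ = oci.Walk(func(ref string, d ocispec.Descriptor) error {
		fmt.Println(ref, d.Digest.String())
		return nil
	})
}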
// LoadIndex will load the index from disk
func (o *OCI) LoadIndex() error {
path := o.path(consts.OCIImageIndexFile)
idx, err := os.Open(path)
if err != nil {
if !os.IsNotExist(err) {
return err
}
o.index = &ocispec.Index{
Versioned: specs.Versioned{
SchemaVersion: 2,
},
}
return nil
}
defer idx.Close()
if err := json.NewDecoder(idx).Decode(&o.index); err != nil {
return err
}
for _, desc := range o.index.Manifests {
key := fmt.Sprintf("%s-%s-%s", desc.Digest.String(), desc.Annotations[ocispec.AnnotationRefName], desc.Annotations[consts.KindAnnotationName])
if strings.TrimSpace(key) != "--" {
o.nameMap.Store(key, desc)
}
}
return nil
}
// SaveIndex will update the index on disk
func (o *OCI) SaveIndex() error {
var descs []ocispec.Descriptor
o.nameMap.Range(func(name, desc interface{}) bool {
n := desc.(ocispec.Descriptor).Annotations[ocispec.AnnotationRefName]
d := desc.(ocispec.Descriptor)
if d.Annotations == nil {
d.Annotations = make(map[string]string)
}
d.Annotations[ocispec.AnnotationRefName] = n
descs = append(descs, d)
return true
})
// sort index to ensure that images come before any signatures and attestations.
sort.SliceStable(descs, func(i, j int) bool {
kindI := descs[i].Annotations["kind"]
kindJ := descs[j].Annotations["kind"]
// Objects with the prefix of "dev.cosignproject.cosign/image" should be at the top.
if strings.HasPrefix(kindI, consts.KindAnnotation) && !strings.HasPrefix(kindJ, consts.KindAnnotation) {
return true
} else if !strings.HasPrefix(kindI, consts.KindAnnotation) && strings.HasPrefix(kindJ, consts.KindAnnotation) {
return false
}
return false // Default: maintain the order.
})
o.index.Manifests = descs
data, err := json.Marshal(o.index)
if err != nil {
return err
}
return os.WriteFile(o.path(consts.OCIImageIndexFile), data, 0644)
}
// Resolve attempts to resolve the reference into a name and descriptor.
//
// The argument `ref` should be a scheme-less URI representing the remote.
// Structurally, it has a host and path. The "host" can be used to directly
// reference a specific host or be matched against a specific handler.
//
// The returned name should be used to identify the referenced entity.
// Depending on the remote namespace, this may be immutable or mutable.
// While the name may differ from ref, it should itself be a valid ref.
//
// If the resolution fails, an error will be returned.
func (o *OCI) Resolve(ctx context.Context, ref string) (name string, desc ocispec.Descriptor, err error) {
if err := o.LoadIndex(); err != nil {
return "", ocispec.Descriptor{}, err
}
d, ok := o.nameMap.Load(ref)
if !ok {
return "", ocispec.Descriptor{}, err
}
desc = d.(ocispec.Descriptor)
return ref, desc, nil
}
// Fetcher returns a new fetcher for the provided reference.
// All content fetched from the returned fetcher will be
// from the namespace referred to by ref.
func (o *OCI) Fetcher(ctx context.Context, ref string) (remotes.Fetcher, error) {
if err := o.LoadIndex(); err != nil {
return nil, err
}
if _, ok := o.nameMap.Load(ref); !ok {
return nil, nil
}
return o, nil
}
func (o *OCI) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error) {
readerAt, err := o.blobReaderAt(desc)
if err != nil {
return nil, err
}
return readerAt, nil
}
func (o *OCI) FetchManifest(ctx context.Context, manifest ocispec.Manifest) (io.ReadCloser, error) {
readerAt, err := o.manifestBlobReaderAt(manifest)
if err != nil {
return nil, err
}
return readerAt, nil
}
// Pusher returns a new pusher for the provided reference
// The returned Pusher should satisfy content.Ingester and concurrent attempts
// to push the same blob using the Ingester API should result in ErrUnavailable.
func (o *OCI) Pusher(ctx context.Context, ref string) (remotes.Pusher, error) {
if err := o.LoadIndex(); err != nil {
return nil, err
}
var baseRef, hash string
parts := strings.SplitN(ref, "@", 2)
baseRef = parts[0]
if len(parts) > 1 {
hash = parts[1]
}
return &ociPusher{
oci: o,
ref: baseRef,
digest: hash,
}, nil
}
func (o *OCI) Walk(fn func(reference string, desc ocispec.Descriptor) error) error {
if err := o.LoadIndex(); err != nil {
return err
}
var errst []string
o.nameMap.Range(func(key, value interface{}) bool {
if err := fn(key.(string), value.(ocispec.Descriptor)); err != nil {
errst = append(errst, err.Error())
}
return true
})
if errst != nil {
return fmt.Errorf("%s", strings.Join(errst, "; "))
}
return nil
}
func (o *OCI) blobReaderAt(desc ocispec.Descriptor) (*os.File, error) {
blobPath, err := o.ensureBlob(desc.Digest.Algorithm().String(), desc.Digest.Hex())
if err != nil {
return nil, err
}
return os.Open(blobPath)
}
func (o *OCI) manifestBlobReaderAt(manifest ocispec.Manifest) (*os.File, error) {
blobPath, err := o.ensureBlob(string(manifest.Config.Digest.Algorithm().String()), manifest.Config.Digest.Hex())
if err != nil {
return nil, err
}
return os.Open(blobPath)
}
func (o *OCI) blobWriterAt(desc ocispec.Descriptor) (*os.File, error) {
blobPath, err := o.ensureBlob(desc.Digest.Algorithm().String(), desc.Digest.Hex())
if err != nil {
return nil, err
}
return os.OpenFile(blobPath, os.O_WRONLY|os.O_CREATE, 0644)
}
func (o *OCI) ensureBlob(alg string, hex string) (string, error) {
dir := o.path("blobs", alg)
if err := os.MkdirAll(dir, os.ModePerm); err != nil && !os.IsExist(err) {
return "", err
}
return filepath.Join(dir, hex), nil
}
func (o *OCI) path(elem ...string) string {
complete := []string{string(o.root)}
return filepath.Join(append(complete, elem...)...)
}
type ociPusher struct {
oci *OCI
ref string
digest string
}
// Push returns a content writer for the given resource identified
// by the descriptor.
func (p *ociPusher) Push(ctx context.Context, d ocispec.Descriptor) (ccontent.Writer, error) {
switch d.MediaType {
case ocispec.MediaTypeImageManifest, ocispec.MediaTypeImageIndex, consts.DockerManifestSchema2, consts.DockerManifestListSchema2:
// if the hash of the content matches that which was provided as the hash for the root, mark it
if p.digest != "" && p.digest == d.Digest.String() {
if err := p.oci.LoadIndex(); err != nil {
return nil, err
}
p.oci.nameMap.Store(p.ref, d)
if err := p.oci.SaveIndex(); err != nil {
return nil, err
}
}
}
blobPath, err := p.oci.ensureBlob(d.Digest.Algorithm().String(), d.Digest.Hex())
if err != nil {
return nil, err
}
if _, err := os.Stat(blobPath); err == nil {
// file already exists, discard (but validate digest)
return content.NewIoContentWriter(ioutil.Discard, content.WithOutputHash(d.Digest)), nil
}
f, err := os.Create(blobPath)
if err != nil {
return nil, err
}
w := content.NewIoContentWriter(f, content.WithInputHash(d.Digest), content.WithOutputHash(d.Digest))
return w, nil
}

247
pkg/cosign/cosign.go Normal file
View File

@@ -0,0 +1,247 @@
package cosign
import (
"fmt"
"os"
"os/exec"
"os/user"
"path/filepath"
"runtime"
"context"
"time"
"bufio"
"embed"
"strings"
"oras.land/oras-go/pkg/content"
"github.com/rancherfederal/hauler/pkg/store"
"github.com/rancherfederal/hauler/pkg/log"
)
const maxRetries = 3
const retryDelay = time.Second * 5
// VerifySignature verifies the digital signature of an image reference using Sigstore/Cosign and the provided public key.
func VerifySignature(ctx context.Context, s *store.Layout, keyPath string, ref string) error {
operation := func() error {
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "verify", "--insecure-ignore-tlog", "--key", keyPath, ref)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error verifying signature: %v, output: %s", err, output)
}
return nil
}
return RetryOperation(ctx, operation)
}
// SaveImage saves an image and any associated signatures/attestations to the store.
func SaveImage(ctx context.Context, s *store.Layout, ref string, platform string) error {
l := log.FromContext(ctx)
operation := func() error {
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "save", ref, "--dir", s.Root)
// Conditionally add platform.
if platform != "" {
cmd.Args = append(cmd.Args, "--platform", platform)
}
output, err := cmd.CombinedOutput()
if err != nil {
if strings.Contains(string(output), "specified reference is not a multiarch image") {
l.Debugf("specified image [%s] is not a multiarch image. (choosing default)", ref)
// Rerun the command without the platform flag
cmd = exec.Command(cosignBinaryPath, "save", ref, "--dir", s.Root)
output, err = cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error adding image to store: %v, output: %s", err, output)
}
} else {
return fmt.Errorf("error adding image to store: %v, output: %s", err, output)
}
}
return nil
}
return RetryOperation(ctx, operation)
}
// LoadImages loads the store contents to a remote registry.
func LoadImages(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
l := log.FromContext(ctx)
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "load", "--registry", registry, "--dir", s.Root)
// Conditionally add extra registry flags.
if ropts.Insecure {
cmd.Args = append(cmd.Args, "--allow-insecure-registry=true")
}
if ropts.PlainHTTP {
cmd.Args = append(cmd.Args, "--allow-http-registry=true")
}
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
stderr, err := cmd.StderrPipe()
if err != nil {
return err
}
// start the command after having set up the pipe
if err := cmd.Start(); err != nil {
return err
}
// read command's stdout line by line
output := bufio.NewScanner(stdout)
for output.Scan() {
l.Infof(output.Text()) // write each line to your log, or anything you need
}
if err := output.Err(); err != nil {
cmd.Wait()
return err
}
// read command's stderr line by line
errors := bufio.NewScanner(stderr)
for errors.Scan() {
l.Errorf(errors.Text()) // write each line to your log, or anything you need
}
if err := errors.Err(); err != nil {
cmd.Wait()
return err
}
// Wait for the command to finish
err = cmd.Wait()
if err != nil {
return err
}
return nil
}
// RegistryLogin performs a cosign login against the target registry
func RegistryLogin(ctx context.Context, s *store.Layout, registry string, ropts content.RegistryOptions) error {
cosignBinaryPath, err := getCosignPath(ctx)
if err != nil {
return err
}
cmd := exec.Command(cosignBinaryPath, "login", registry, "-u", ropts.Username, "-p", ropts.Password)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("error logging into registry: %v, output: %s", err, output)
}
return nil
}
func RetryOperation(ctx context.Context, operation func() error) error {
l := log.FromContext(ctx)
for attempt := 1; attempt <= maxRetries; attempt++ {
err := operation()
if err == nil {
// If the operation succeeds, return nil (no error).
return nil
}
// Log the error for the current attempt.
l.Errorf("Error (attempt %d/%d): %v", attempt, maxRetries, err)
// If this is not the last attempt, wait before retrying.
if attempt < maxRetries {
time.Sleep(retryDelay)
}
}
// If all attempts fail, return an error.
return fmt.Errorf("operation failed after %d attempts", maxRetries)
}
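Usage is just wrapping the flaky call in a closure; a minimal sketch, attaching a hauler logger to the context the same way the rest of the codebase does:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/rancherfederal/hauler/pkg/cosign"
	"github.com/rancherfederal/hauler/pkg/log"
)

func main() {
	l := log.NewLogger(os.Stdout)
	ctx := l.WithContext(context.Background())

	attempts := 0
	err := cosign.RetryOperation(ctx, func() error {
		attempts++
		if attempts < 2 {
			return fmt.Errorf("transient failure") // retried after retryDelay, up to maxRetries times
		}
		return nil
	})
	fmt.Println(err, attempts) // <nil> 2
}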
func EnsureBinaryExists(ctx context.Context, bin embed.FS) (error) {
// Set up a path for the binary to be copied.
binaryPath, err := getCosignPath(ctx)
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
// Determine the architecture so that we pull the correct embedded binary.
arch := runtime.GOARCH
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = fmt.Sprintf("cosign-%s-%s.exe", rOS, arch)
} else {
binaryName = fmt.Sprintf("cosign-%s-%s", rOS, arch)
}
// retrieve the embedded binary
f, err := bin.ReadFile(fmt.Sprintf("binaries/%s", binaryName))
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
// write the binary to the filesystem
err = os.WriteFile(binaryPath, f, 0755)
if err != nil {
return fmt.Errorf("Error: %v\n", err)
}
return nil
}
// getCosignPath returns the binary path
func getCosignPath(ctx context.Context) (string, error) {
// Get the current user's information
currentUser, err := user.Current()
if err != nil {
return "", fmt.Errorf("Error: %v\n", err)
}
// Get the user's home directory
homeDir := currentUser.HomeDir
// Construct the path to the .hauler directory
haulerDir := filepath.Join(homeDir, ".hauler")
// Create the .hauler directory if it doesn't exist
if _, err := os.Stat(haulerDir); os.IsNotExist(err) {
// .hauler directory does not exist, create it
if err := os.MkdirAll(haulerDir, 0755); err != nil {
return "", fmt.Errorf("Error creating .hauler directory: %v\n", err)
}
}
// Determine the binary name.
rOS := runtime.GOOS
binaryName := "cosign"
if rOS == "windows" {
binaryName = "cosign.exe"
}
// construct path to binary
binaryPath := filepath.Join(haulerDir, binaryName)
return binaryPath, nil
}

View File

@@ -1,4 +1,4 @@
package cache
package layer
import (
"errors"
@@ -7,9 +7,13 @@ import (
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/artifacts"
)
/*
This package is _heavily_ influenced by go-containerregistry and its cache implementation: https://github.com/google/go-containerregistry/tree/main/pkg/v1/cache
*/
type Cache interface {
Put(v1.Layer) (v1.Layer, error)
@@ -19,12 +23,12 @@ type Cache interface {
var ErrLayerNotFound = errors.New("layer not found")
type oci struct {
artifact.OCI
artifacts.OCI
c Cache
}
func Oci(o artifact.OCI, c Cache) artifact.OCI {
func OCICache(o artifacts.OCI, c Cache) artifacts.OCI {
return &oci{
OCI: o,
c: c,

View File

@@ -1,4 +1,4 @@
package cache
package layer
import (
"io"
@@ -6,15 +6,13 @@ import (
"path/filepath"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/rancherfederal/hauler/pkg/artifact/local"
)
type fs struct {
root string
}
func NewFilesystem(root string) Cache {
func NewFilesystemCache(root string) Cache {
return &fs{root: root}
}
@@ -37,14 +35,14 @@ func (f *fs) Put(l v1.Layer) (v1.Layer, error) {
func (f *fs) Get(h v1.Hash) (v1.Layer, error) {
opener := f.open(h)
l, err := local.LayerFromOpener(opener)
l, err := FromOpener(opener)
if os.IsNotExist(err) {
return nil, ErrLayerNotFound
}
return l, err
}
func (f *fs) open(h v1.Hash) local.Opener {
func (f *fs) open(h v1.Hash) Opener {
return func() (io.ReadCloser, error) {
return os.Open(layerpath(f.root, h))
}

View File

@@ -1,4 +1,4 @@
package local
package layer
import (
"io"
@@ -6,16 +6,16 @@ import (
v1 "github.com/google/go-containerregistry/pkg/v1"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/rancherfederal/hauler/pkg/artifact/types"
"github.com/rancherfederal/hauler/pkg/consts"
)
type Opener func() (io.ReadCloser, error)
func LayerFromOpener(opener Opener, opts ...LayerOption) (v1.Layer, error) {
func FromOpener(opener Opener, opts ...Option) (v1.Layer, error) {
var err error
layer := &layer{
mediaType: types.UnknownLayer,
mediaType: consts.UnknownLayer,
annotations: make(map[string]string, 1),
}
@@ -25,7 +25,7 @@ func LayerFromOpener(opener Opener, opts ...LayerOption) (v1.Layer, error) {
if err != nil {
return nil, err
}
// TODO: actually compress this
return rc, nil
}
@@ -53,15 +53,15 @@ func compute(opener Opener) (v1.Hash, int64, error) {
return v1.SHA256(rc)
}
type LayerOption func(*layer)
type Option func(*layer)
func WithMediaType(mt string) LayerOption {
func WithMediaType(mt string) Option {
return func(l *layer) {
l.mediaType = mt
}
}
func WithAnnotations(annotations map[string]string) LayerOption {
func WithAnnotations(annotations map[string]string) Option {
return func(l *layer) {
if l.annotations == nil {
l.annotations = make(map[string]string)

View File

@@ -1,146 +0,0 @@
package layout
import (
"bytes"
"encoding/json"
"io"
"os"
"strings"
"github.com/google/go-containerregistry/pkg/name"
gv1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/empty"
"github.com/google/go-containerregistry/pkg/v1/layout"
gtypes "github.com/google/go-containerregistry/pkg/v1/types"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"golang.org/x/sync/errgroup"
"github.com/rancherfederal/hauler/pkg/artifact"
)
// Path is a wrapper around layout.Path
type Path struct {
layout.Path
}
// FromPath returns a new Path or creates one if one doesn't exist
func FromPath(path string) (Path, error) {
p, err := layout.FromPath(path)
if os.IsNotExist(err) {
p, err = layout.Write(path, empty.Index)
if err != nil {
return Path{}, err
}
}
return Path{Path: p}, err
}
// WriteOci will write oci content (artifact.OCI) to the given Path
func (l Path) WriteOci(o artifact.OCI, reference name.Reference) (ocispec.Descriptor, error) {
layers, err := o.Layers()
if err != nil {
return ocispec.Descriptor{}, err
}
// Write layers concurrently
var g errgroup.Group
for _, layer := range layers {
layer := layer
g.Go(func() error {
return l.writeLayer(layer)
})
}
if err := g.Wait(); err != nil {
return ocispec.Descriptor{}, err
}
// Write the config
cfgBlob, err := o.RawConfig()
if err != nil {
return ocispec.Descriptor{}, err
}
if err = l.writeBlob(cfgBlob); err != nil {
return ocispec.Descriptor{}, err
}
m, err := o.Manifest()
if err != nil {
return ocispec.Descriptor{}, err
}
manifest, err := json.Marshal(m)
if err != nil {
return ocispec.Descriptor{}, err
}
if err := l.writeBlob(manifest); err != nil {
return ocispec.Descriptor{}, err
}
desc := ocispec.Descriptor{
MediaType: o.MediaType(),
Size: int64(len(manifest)),
Digest: digest.FromBytes(manifest),
Annotations: map[string]string{
ocispec.AnnotationRefName: reference.Name(),
ocispec.AnnotationTitle: deregistry(reference).Name(),
},
}
if err := l.appendDescriptor(desc); err != nil {
return ocispec.Descriptor{}, err
}
return desc, nil
}
// writeBlob differs from layer.WriteBlob in that it requires data instead
func (l Path) writeBlob(data []byte) error {
h, _, err := gv1.SHA256(bytes.NewReader(data))
if err != nil {
return err
}
return l.WriteBlob(h, io.NopCloser(bytes.NewReader(data)))
}
// writeLayer is a verbatim reimplementation of layout.writeLayer
func (l Path) writeLayer(layer gv1.Layer) error {
d, err := layer.Digest()
if err != nil {
return err
}
r, err := layer.Compressed()
if err != nil {
return err
}
return l.WriteBlob(d, r)
}
// appendDescriptor is a helper that translates a ocispec.Descriptor into a gv1.Descriptor
func (l Path) appendDescriptor(desc ocispec.Descriptor) error {
gdesc := gv1.Descriptor{
MediaType: gtypes.MediaType(desc.MediaType),
Size: desc.Size,
Digest: gv1.Hash{
Algorithm: desc.Digest.Algorithm().String(),
Hex: desc.Digest.Hex(),
},
URLs: desc.URLs,
Annotations: desc.Annotations,
}
return l.AppendDescriptor(gdesc)
}
// deregistry removes the registry content from a name.Reference
func deregistry(ref name.Reference) name.Reference {
// No error checking b/c at this point we're already assumed to have a valid enough reference
dereg := strings.TrimLeft(strings.ReplaceAll(ref.Name(), ref.Context().RegistryStr(), ""), "/")
deref, _ := name.ParseReference(dereg, name.WithDefaultRegistry(""))
return deref
}
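The FromPath and WriteOci helpers above (removed in this change) formed the old write path for artifact.OCI content. A minimal usage sketch, assuming a caller that already holds an artifact.OCI value; the "oci-dir" path and "hauler/example:latest" reference are placeholders:

package example

import (
	"github.com/google/go-containerregistry/pkg/name"

	"github.com/rancherfederal/hauler/pkg/artifact"
	"github.com/rancherfederal/hauler/pkg/layout"
)

// writeToLayout opens (or creates) an oci-layout directory and writes a single
// artifact into it under the given reference.
func writeToLayout(o artifact.OCI) error {
	p, err := layout.FromPath("oci-dir")
	if err != nil {
		return err
	}
	ref, err := name.ParseReference("hauler/example:latest", name.WithDefaultRegistry(""))
	if err != nil {
		return err
	}
	_, err = p.WriteOci(o, ref)
	return err
}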

View File

@@ -1,191 +0,0 @@
package layout
import (
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/remotes/docker"
"github.com/google/go-containerregistry/pkg/name"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
orascontent "oras.land/oras-go/pkg/content"
"oras.land/oras-go/pkg/oras"
"github.com/rancherfederal/hauler/pkg/artifact/types"
)
// interface guards
var (
_ content.Provider = (*OCIStore)(nil)
_ content.Ingester = (*OCIStore)(nil)
)
// OCIStore represents a content compatible store adhering to the oci-layout spec
type OCIStore struct {
content.Store
root string
index *ocispec.Index
digestMap map[string]ocispec.Descriptor
}
// Copy placeholder until we migrate to oras 0.5
// Will loop through each appropriately named index and copy the contents to the desired registry
func Copy(ctx context.Context, s *OCIStore, registry string) error {
for _, desc := range s.index.Manifests {
manifestBlobPath, err := s.blobPath(desc.Digest)
if err != nil {
return err
}
manifestData, err := os.ReadFile(manifestBlobPath)
if err != nil {
return err
}
m, mdesc, err := loadManifest(manifestData)
if err != nil {
return err
}
refName, ok := desc.Annotations[ocispec.AnnotationRefName]
if !ok {
return fmt.Errorf("no name found to push image")
}
rref, err := RelocateReference(refName, registry)
if err != nil {
return err
}
resolver := docker.NewResolver(docker.ResolverOptions{})
_, err = oras.Push(ctx, resolver, rref.Name(), s, m.Layers,
oras.WithConfig(m.Config), oras.WithNameValidation(nil), oras.WithManifest(mdesc))
if err != nil {
return err
}
}
return nil
}
// NewOCIStore will return a new OCIStore given a path to an oci-layout compatible directory
func NewOCIStore(path string) (*OCIStore, error) {
fs, err := local.NewStore(path)
if err != nil {
return nil, err
}
store := &OCIStore{
Store: fs,
root: path,
}
if err := store.validateOCILayout(); err != nil {
return nil, err
}
if err := store.LoadIndex(); err != nil {
return nil, err
}
return store, nil
}
// LoadIndex will load an oci-layout compatible directory
func (s *OCIStore) LoadIndex() error {
path := filepath.Join(s.root, types.OCIImageIndexFile)
indexFile, err := os.Open(path)
if err != nil {
// TODO: Don't just bomb out?
return err
}
defer indexFile.Close()
if err := json.NewDecoder(indexFile).Decode(&s.index); err != nil {
return err
}
s.digestMap = make(map[string]ocispec.Descriptor)
for _, desc := range s.index.Manifests {
if name := desc.Annotations[ocispec.AnnotationRefName]; name != "" {
s.digestMap[name] = desc
}
}
return nil
}
func (s *OCIStore) validateOCILayout() error {
layoutFilePath := filepath.Join(s.root, ocispec.ImageLayoutFile)
layoutFile, err := os.Open(layoutFilePath)
if err != nil {
return err
}
defer layoutFile.Close()
var layout *ocispec.ImageLayout
if err := json.NewDecoder(layoutFile).Decode(&layout); err != nil {
return err
}
if layout.Version != ocispec.ImageLayoutVersion {
return orascontent.ErrUnsupportedVersion
}
return nil
}
func (s *OCIStore) blobPath(d digest.Digest) (string, error) {
if err := d.Validate(); err != nil {
return "", err
}
return filepath.Join(s.root, "blobs", d.Algorithm().String(), d.Hex()), nil
}
// manifest is a field wrapper around ocispec.Manifest that contains the mediaType field
type manifest struct {
ocispec.Manifest `json:",inline"`
MediaType string `json:"mediaType"`
}
// loadManifest unmarshals raw manifest data and returns the manifest along with a descriptor for it
func loadManifest(data []byte) (ocispec.Manifest, ocispec.Descriptor, error) {
var m manifest
if err := json.Unmarshal(data, &m); err != nil {
return ocispec.Manifest{}, ocispec.Descriptor{}, err
}
desc := ocispec.Descriptor{
MediaType: m.MediaType,
Digest: digest.FromBytes(data),
Size: int64(len(data)),
}
return m.Manifest, desc, nil
}
// RelocateReference returns a name.Reference given a reference and registry
func RelocateReference(reference string, registry string) (name.Reference, error) {
ref, err := name.ParseReference(reference)
if err != nil {
return nil, err
}
relocated, err := name.ParseReference(ref.Context().RepositoryStr(), name.WithDefaultRegistry(registry))
if err != nil {
return nil, err
}
if _, err := name.NewDigest(ref.Name()); err == nil {
return relocated.Context().Digest(ref.Identifier()), nil
}
return relocated.Context().Tag(ref.Identifier()), nil
}
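NewOCIStore and Copy above (also removed) covered the push-everything path: load an oci-layout directory and push each indexed manifest via oras. A minimal sketch; "oci-dir" and localhost:5000 are placeholders:

package example

import (
	"context"

	"github.com/rancherfederal/hauler/pkg/layout"
)

// pushAll loads an existing oci-layout directory and copies every manifest in
// its index (plus referenced blobs) to the given registry.
func pushAll(ctx context.Context) error {
	s, err := layout.NewOCIStore("oci-dir")
	if err != nil {
		return err
	}
	return layout.Copy(ctx, s, "localhost:5000")
}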

View File

@@ -14,6 +14,7 @@ type Logger interface {
SetLevel(string)
With(Fields) *logger
WithContext(context.Context) context.Context
Errorf(string, ...interface{})
Infof(string, ...interface{})
Warnf(string, ...interface{})

View File

@@ -0,0 +1,67 @@
// Package reference provides general types to represent oci content within a registry or local oci layout
// Grammar (stolen mostly from containerd's grammar)
//
// reference :=
package reference
import (
"strings"
gname "github.com/google/go-containerregistry/pkg/name"
)
const (
DefaultNamespace = "hauler"
DefaultTag = "latest"
)
type Reference interface {
// FullName is the full name of the reference
FullName() string
// Name is the registryless name
Name() string
}
// NewTagged will create a new tagged name.Reference given a path-component and tag
func NewTagged(n string, tag string) (gname.Reference, error) {
repo, err := Parse(n)
if err != nil {
return nil, err
}
tag = strings.Replace(tag, "+", "-", -1)
return repo.Context().Tag(tag), nil
}
// Parse will parse a reference and return a name.Reference namespaced with DefaultNamespace if necessary
func Parse(ref string) (gname.Reference, error) {
r, err := gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(DefaultTag))
if err != nil {
return nil, err
}
if !strings.ContainsRune(r.String(), '/') {
ref = DefaultNamespace + "/" + r.String()
return gname.ParseReference(ref, gname.WithDefaultRegistry(""), gname.WithDefaultTag(DefaultTag))
}
return r, nil
}
// Relocate returns a name.Reference given a reference and registry
func Relocate(reference string, registry string) (gname.Reference, error) {
ref, err := gname.ParseReference(reference)
if err != nil {
return nil, err
}
relocated, err := gname.ParseReference(ref.Context().RepositoryStr(), gname.WithDefaultRegistry(registry))
if err != nil {
return nil, err
}
if _, err := gname.NewDigest(ref.Name()); err == nil {
return relocated.Context().Digest(ref.Identifier()), nil
}
return relocated.Context().Tag(ref.Identifier()), nil
}
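Relocate is the replacement for the removed layout.RelocateReference. A small, runnable sketch; localhost:5000 is a placeholder registry:

package main

import (
	"fmt"
	"log"

	"github.com/rancherfederal/hauler/pkg/reference"
)

func main() {
	// Relocate keeps the repository and tag (or digest) but swaps in the target registry.
	relocated, err := reference.Relocate("rancher/cowsay:latest", "localhost:5000")
	if err != nil {
		log.Fatal(err)
	}
	// Expected to print: localhost:5000/rancher/cowsay:latest
	fmt.Println(relocated.Name())
}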

View File

@@ -0,0 +1,57 @@
package reference_test
import (
"reflect"
"testing"
"github.com/rancherfederal/hauler/pkg/reference"
)
func TestParse(t *testing.T) {
type args struct {
ref string
}
tests := []struct {
name string
args args
want string
wantErr bool
}{
{
name: "Should add hauler namespace when doesn't exist",
args: args{
ref: "myfile",
},
want: "hauler/myfile:latest",
wantErr: false,
},
{
name: "shouldn't modify namespaced reference",
args: args{
ref: "rancher/rancher:latest",
},
want: "rancher/rancher:latest",
wantErr: false,
},
{
name: "Shouldn't modify canonical reference",
args: args{
ref: "index.docker.io/library/registry@sha256:42043edfae481178f07aa077fa872fcc242e276d302f4ac2026d9d2eb65b955f",
},
want: "index.docker.io/library/registry@sha256:42043edfae481178f07aa077fa872fcc242e276d302f4ac2026d9d2eb65b955f",
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := reference.Parse(tt.args.ref)
if (err != nil) != tt.wantErr {
t.Errorf("Parse() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got.Name(), tt.want) {
t.Errorf("Parse() got = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -1,146 +0,0 @@
package store
import (
"context"
"io/ioutil"
"os"
"path/filepath"
"github.com/google/go-containerregistry/pkg/name"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/rancherfederal/hauler/pkg/artifact"
"github.com/rancherfederal/hauler/pkg/cache"
"github.com/rancherfederal/hauler/pkg/layout"
)
// AddArtifact will add an artifact.OCI to the store
// The method to achieve this is to save the artifact.OCI to a temporary directory in an OCI layout compatible form. Once
// saved, the entirety of the layout is copied to the store (which is just a registry). This allows us not only to use
// strict types to define generic content, but also provides a processing pipeline suitable for extensibility. In the
// future we'll allow users to define their own content that must adhere either to artifact.OCI or simply to an OCI layout.
func (s *Store) AddArtifact(ctx context.Context, oci artifact.OCI, reference name.Reference) (ocispec.Descriptor, error) {
if err := s.precheck(); err != nil {
return ocispec.Descriptor{}, err
}
stg, err := newOciStage()
if err != nil {
return ocispec.Descriptor{}, err
}
if s.cache != nil {
cached := cache.Oci(oci, s.cache)
oci = cached
}
pdesc, err := stg.add(ctx, oci, reference)
if err != nil {
return ocispec.Descriptor{}, err
}
if err := stg.commit(ctx, s); err != nil {
return ocispec.Descriptor{}, err
}
return pdesc, nil
}
// Flush is a fancy name for delete-all-the-things; in this case it's as trivial as deleting everything in the underlying store directory
// This can be a highly destructive operation if the store's directory happens to be shared with other non-store contents
// To reduce the blast radius and the likelihood of deleting things we don't own, Flush explicitly scopes the search dir
// to docker/registry/v2
func (s *Store) Flush(ctx context.Context) error {
contentDir := filepath.Join(s.DataDir, "docker", "registry", "v2")
fs, err := ioutil.ReadDir(contentDir)
if !os.IsNotExist(err) && err != nil {
return err
}
for _, f := range fs {
err := os.RemoveAll(filepath.Join(contentDir, f.Name()))
if err != nil {
return err
}
}
return nil
}
// AddCollection adds all of the contents of an artifact.Collection to the store
func (s *Store) AddCollection(ctx context.Context, coll artifact.Collection) ([]ocispec.Descriptor, error) {
if err := s.precheck(); err != nil {
return nil, err
}
cnts, err := coll.Contents()
if err != nil {
return nil, err
}
for ref, o := range cnts {
if _, err := s.AddArtifact(ctx, o, ref); err != nil {
return nil, err
}
}
return nil, err
}
type stager interface {
// add adds an artifact.OCI to the stage
add(artifact.OCI) error
// commit pushes all the staged contents into the store and closes the stage
commit(*Store) error
// close flushes and closes the stage
close() error
}
type oci struct {
layout layout.Path
root string
}
func (o *oci) add(ctx context.Context, oci artifact.OCI, reference name.Reference) (ocispec.Descriptor, error) {
mdesc, err := o.layout.WriteOci(oci, reference)
if err != nil {
return ocispec.Descriptor{}, err
}
return mdesc, err
}
func (o *oci) commit(ctx context.Context, s *Store) error {
defer o.close()
ts, err := layout.NewOCIStore(o.root)
if err != nil {
return err
}
if err = layout.Copy(ctx, ts, s.Registry()); err != nil {
return err
}
return err
}
func (o *oci) close() error {
return os.RemoveAll(o.root)
}
func newOciStage() (*oci, error) {
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
return nil, err
}
l, err := layout.FromPath(tmpdir)
if err != nil {
return nil, err
}
return &oci{
layout: l,
root: tmpdir,
}, nil
}
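The removed AddArtifact flow described above staged content to a temporary oci layout and then committed it into the embedded registry. A minimal sketch of how a caller drove it, assuming an artifact.OCI value built elsewhere; "store-data" and "hauler/example:latest" are placeholders:

package example

import (
	"context"

	"github.com/google/go-containerregistry/pkg/name"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"

	"github.com/rancherfederal/hauler/pkg/artifact"
	"github.com/rancherfederal/hauler/pkg/store"
)

// addOne starts the embedded registry, adds one artifact, and shuts it down.
func addOne(ctx context.Context, o artifact.OCI) (ocispec.Descriptor, error) {
	s := store.NewStore(ctx, "store-data")
	s.Open()
	defer s.Close()

	ref, err := name.ParseReference("hauler/example:latest", name.WithDefaultRegistry(""))
	if err != nil {
		return ocispec.Descriptor{}, err
	}
	return s.AddArtifact(ctx, o, ref)
}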

View File

@@ -1,20 +0,0 @@
package store
import "github.com/rancherfederal/hauler/pkg/cache"
// Options defines options for Store
type Options func(*Store)
// WithCache initializes a Store with a cache.Cache, all content added to the Store will first be cached
func WithCache(c cache.Cache) Options {
return func(s *Store) {
s.cache = c
}
}
// WithDefaultRepository sets the default repository to use when none is specified (defaults to "library")
func WithDefaultRepository(repo string) Options {
return func(s *Store) {
s.DefaultRepository = repo
}
}

View File

@@ -2,212 +2,261 @@ package store
import (
"context"
"fmt"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"regexp"
"strconv"
"time"
"os"
"path/filepath"
"github.com/distribution/distribution/v3/configuration"
dcontext "github.com/distribution/distribution/v3/context"
"github.com/distribution/distribution/v3/reference"
"github.com/distribution/distribution/v3/registry/client"
"github.com/distribution/distribution/v3/registry/handlers"
"github.com/google/go-containerregistry/pkg/name"
"github.com/sirupsen/logrus"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/static"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"golang.org/x/sync/errgroup"
"oras.land/oras-go/pkg/oras"
"oras.land/oras-go/pkg/target"
// Init filesystem distribution storage driver
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
"github.com/rancherfederal/hauler/pkg/cache"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/consts"
"github.com/rancherfederal/hauler/pkg/content"
"github.com/rancherfederal/hauler/pkg/layer"
)
var (
httpRegex = regexp.MustCompile("https?://")
)
// Store is a simple wrapper around distribution/distribution to enable hauler's use case
type Store struct {
DataDir string
DefaultRepository string
config *configuration.Configuration
handler http.Handler
server *httptest.Server
cache cache.Cache
type Layout struct {
*content.OCI
Root string
cache layer.Cache
}
// NewStore creates a new registry store, designed strictly for use within hauler's embedded operations and _not_ for serving
func NewStore(ctx context.Context, dataDir string, opts ...Options) *Store {
cfg := &configuration.Configuration{
Version: "0.1",
Storage: configuration.Storage{
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
"filesystem": configuration.Parameters{"rootdirectory": dataDir},
},
type Options func(*Layout)
func WithCache(c layer.Cache) Options {
return func(l *Layout) {
l.cache = c
}
cfg.Log.Level = "panic"
cfg.HTTP.Headers = http.Header{"X-Content-Type-Options": []string{"nosniff"}}
handler := setupHandler(ctx, cfg)
s := &Store{
DataDir: dataDir,
config: cfg,
handler: handler,
}
for _, opt := range opts {
opt(s)
}
return s
}
// Open will create a new server and start it, it's up to the consumer to close it
func (s *Store) Open() *httptest.Server {
server := httptest.NewServer(s.handler)
s.server = server
return server
}
// Close stops the server
func (s *Store) Close() {
s.server.Close()
s.server = nil
return
}
// List will list all known content tags in the registry
// TODO: This fn is messy and needs cleanup; this is arguably easier with the catalog API as well
func (s *Store) List(ctx context.Context) ([]string, error) {
reg, err := client.NewRegistry(s.RegistryURL(), nil)
func NewLayout(rootdir string, opts ...Options) (*Layout, error) {
ociStore, err := content.NewOCI(rootdir)
if err != nil {
return nil, err
}
entries := make(map[string]reference.Named)
last := ""
for {
chunk := make([]string, 20) // randomly chosen number...
nf, err := reg.Repositories(ctx, chunk, last)
last = strconv.Itoa(nf)
for _, e := range chunk {
if e == "" {
continue
}
ref, err := reference.WithName(e)
if err != nil {
return nil, err
}
entries[e] = ref
}
if err == io.EOF {
break
}
if err := ociStore.LoadIndex(); err != nil {
return nil, err
}
var refs []string
for ref, named := range entries {
repo, err := client.NewRepository(named, s.RegistryURL(), nil)
l := &Layout{
Root: rootdir,
OCI: ociStore,
}
for _, opt := range opts {
opt(l)
}
return l, nil
}
// AddOCI adds an artifacts.OCI to the store
//
// The method to achieve this is to save the artifacts.OCI to a temporary directory in an OCI layout compatible form. Once
// saved, the entirety of the layout is copied to the store (which is just a registry). This allows us not only to use
// strict types to define generic content, but also provides a processing pipeline suitable for extensibility. In the
// future we'll allow users to define their own content that must adhere either to artifacts.OCI or simply to an OCI layout.
func (l *Layout) AddOCI(ctx context.Context, oci artifacts.OCI, ref string) (ocispec.Descriptor, error) {
if l.cache != nil {
cached := layer.OCICache(oci, l.cache)
oci = cached
}
// Write manifest blob
m, err := oci.Manifest()
if err != nil {
return ocispec.Descriptor{}, err
}
mdata, err := json.Marshal(m)
if err != nil {
return ocispec.Descriptor{}, err
}
if err := l.writeBlobData(mdata); err != nil {
return ocispec.Descriptor{}, err
}
// Write config blob
cdata, err := oci.RawConfig()
if err != nil {
return ocispec.Descriptor{}, err
}
static.NewLayer(cdata, "")
if err := l.writeBlobData(cdata); err != nil {
return ocispec.Descriptor{}, err
}
// write blob layers concurrently
layers, err := oci.Layers()
if err != nil {
return ocispec.Descriptor{}, err
}
var g errgroup.Group
for _, lyr := range layers {
lyr := lyr
g.Go(func() error {
return l.writeLayer(lyr)
})
}
if err := g.Wait(); err != nil {
return ocispec.Descriptor{}, err
}
// Build index
idx := ocispec.Descriptor{
MediaType: string(m.MediaType),
Digest: digest.FromBytes(mdata),
Size: int64(len(mdata)),
Annotations: map[string]string{
consts.KindAnnotationName: consts.KindAnnotation,
ocispec.AnnotationRefName: ref,
},
URLs: nil,
Platform: nil,
}
return idx, l.OCI.AddIndex(idx)
}
// AddOCICollection .
func (l *Layout) AddOCICollection(ctx context.Context, collection artifacts.OCICollection) ([]ocispec.Descriptor, error) {
cnts, err := collection.Contents()
if err != nil {
return nil, err
}
var descs []ocispec.Descriptor
for ref, oci := range cnts {
desc, err := l.AddOCI(ctx, oci, ref)
if err != nil {
return nil, err
}
tsvc := repo.Tags(ctx)
ts, err := tsvc.All(ctx)
if err != nil {
continue
}
for _, t := range ts {
ref, err := name.ParseReference(ref, name.WithDefaultRegistry(""), name.WithDefaultTag(t))
if err != nil {
return nil, err
}
refs = append(refs, ref.Name())
}
descs = append(descs, desc)
}
return refs, nil
return descs, nil
}
// precheck checks whether server is appropriately started and errors if it's not
// used to safely run Store operations without fear of panics
func (s *Store) precheck() error {
if s.server == nil || s.server.URL == "" {
return fmt.Errorf("server is not started yet")
// Flush is a fancy name for delete-all-the-things; in this case it's as trivial as deleting oci-layout content
//
// This can be a highly destructive operation if the store's directory happens to be shared with other non-store contents
// To reduce the blast radius and the likelihood of deleting things we don't own, Flush explicitly deletes oci-layout content only
func (l *Layout) Flush(ctx context.Context) error {
blobs := filepath.Join(l.Root, "blobs")
if err := os.RemoveAll(blobs); err != nil {
return err
}
index := filepath.Join(l.Root, "index.json")
if err := os.RemoveAll(index); err != nil {
return err
}
layout := filepath.Join(l.Root, "oci-layout")
if err := os.RemoveAll(layout); err != nil {
return err
}
return nil
}
// Registry returns the registry's URL without the protocol, suitable for image relocation operations
func (s *Store) Registry() string {
return httpRegex.ReplaceAllString(s.server.URL, "")
// Copy will copy a given reference to a given target.Target
//
// This is essentially a wrapper around oras.Copy, but locked to this content store
func (l *Layout) Copy(ctx context.Context, ref string, to target.Target, toRef string) (ocispec.Descriptor, error) {
return oras.Copy(ctx, l.OCI, ref, to, toRef,
oras.WithAdditionalCachedMediaTypes(consts.DockerManifestSchema2, consts.DockerManifestListSchema2))
}
// RegistryURL returns the registry's URL
func (s *Store) RegistryURL() string {
return s.server.URL
}
func alive(path string, handler http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == path {
w.Header().Set("Cache-Control", "no-cache")
w.WriteHeader(http.StatusOK)
return
}
handler.ServeHTTP(w, r)
})
}
// setupHandler will set up the registry handler
func setupHandler(ctx context.Context, config *configuration.Configuration) http.Handler {
ctx, _ = configureLogging(ctx, config)
app := handlers.NewApp(ctx, config)
app.RegisterHealthChecks()
handler := alive("/", app)
return handler
}
func configureLogging(ctx context.Context, cfg *configuration.Configuration) (context.Context, context.CancelFunc) {
logrus.SetLevel(logLevel(cfg.Log.Level))
formatter := cfg.Log.Formatter
if formatter == "" {
formatter = "text"
}
logrus.SetFormatter(&logrus.TextFormatter{
TimestampFormat: time.RFC3339Nano,
})
if len(cfg.Log.Fields) > 0 {
var fields []interface{}
for k := range cfg.Log.Fields {
fields = append(fields, k)
// CopyAll performs bulk copy operations on the store's oci layout to a provided target.Target
func (l *Layout) CopyAll(ctx context.Context, to target.Target, toMapper func(string) (string, error)) ([]ocispec.Descriptor, error) {
var descs []ocispec.Descriptor
err := l.OCI.Walk(func(reference string, desc ocispec.Descriptor) error {
toRef := ""
if toMapper != nil {
tr, err := toMapper(reference)
if err != nil {
return err
}
toRef = tr
}
ctx = dcontext.WithValues(ctx, cfg.Log.Fields)
ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, fields...))
}
desc, err := l.Copy(ctx, reference, to, toRef)
if err != nil {
return err
}
dcontext.SetDefaultLogger(dcontext.GetLogger(ctx))
return context.WithCancel(ctx)
}
func logLevel(level configuration.Loglevel) logrus.Level {
l, err := logrus.ParseLevel(string(level))
descs = append(descs, desc)
return nil
})
if err != nil {
l = logrus.InfoLevel
logrus.Warnf("error parsing log level %q: %v, using %q", level, err, l)
return nil, err
}
return l
return descs, nil
}
// Identify is a helper function that will identify a human-readable content type given a descriptor
func (l *Layout) Identify(ctx context.Context, desc ocispec.Descriptor) string {
rc, err := l.OCI.Fetch(ctx, desc)
if err != nil {
return ""
}
defer rc.Close()
m := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
}{}
if err := json.NewDecoder(rc).Decode(&m); err != nil {
return ""
}
return m.Config.MediaType
}
func (l *Layout) writeBlobData(data []byte) error {
blob := static.NewLayer(data, "") // NOTE: MediaType isn't actually used in the writing
return l.writeLayer(blob)
}
func (l *Layout) writeLayer(layer v1.Layer) error {
d, err := layer.Digest()
if err != nil {
return err
}
r, err := layer.Compressed()
if err != nil {
return err
}
dir := filepath.Join(l.Root, "blobs", d.Algorithm)
if err := os.MkdirAll(dir, os.ModePerm); err != nil && !os.IsExist(err) {
return err
}
blobPath := filepath.Join(dir, d.Hex)
// Skip entirely if something exists; assume the layer is already present
if _, err := os.Stat(blobPath); err == nil {
return nil
}
w, err := os.Create(blobPath)
if err != nil {
return err
}
defer w.Close()
_, err = io.Copy(w, r)
return err
}
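CopyAll above is the bulk-relocation entry point for the new Layout. A sketch of wiring it to a registry target, assuming oras-go v1's content.NewRegistry satisfies target.Target; localhost:5000 and the ./store root are placeholders:

package main

import (
	"context"
	"log"

	orascontent "oras.land/oras-go/pkg/content"

	"github.com/rancherfederal/hauler/pkg/reference"
	"github.com/rancherfederal/hauler/pkg/store"
)

func main() {
	ctx := context.Background()

	// Open (or create) the oci-layout backed store rooted at ./store.
	l, err := store.NewLayout("store")
	if err != nil {
		log.Fatal(err)
	}

	// An oras resolver/target; PlainHTTP only because the example registry is local.
	to, err := orascontent.NewRegistry(orascontent.RegistryOptions{PlainHTTP: true})
	if err != nil {
		log.Fatal(err)
	}

	// Copy every reference in the layout, rewriting each name onto the target registry.
	descs, err := l.CopyAll(ctx, to, func(ref string) (string, error) {
		r, err := reference.Relocate(ref, "localhost:5000")
		if err != nil {
			return "", err
		}
		return r.Name(), nil
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d artifacts", len(descs))
}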

View File

@@ -1,35 +1,28 @@
package store
package store_test
import (
"context"
"os"
"testing"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/random"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/rancherfederal/hauler/pkg/artifacts"
"github.com/rancherfederal/hauler/pkg/store"
)
func TestStore_List(t *testing.T) {
ctx := context.Background()
var (
ctx context.Context
root string
)
s, err := testStore(ctx)
if err != nil {
t.Fatal(err)
}
s.Open()
defer s.Close()
r := randomImage(t)
addImageToStore(t, s, r, "hauler/tester:latest")
addImageToStore(t, s, r, "hauler/tester:non")
addImageToStore(t, s, r, "other/ns:more")
addImageToStore(t, s, r, "unique/donkey:v1.2.2")
func TestLayout_AddOCI(t *testing.T) {
teardown := setup(t)
defer teardown()
type args struct {
ctx context.Context
ref string
}
tests := []struct {
name string
@@ -37,51 +30,76 @@ func TestStore_List(t *testing.T) {
wantErr bool
}{
{
name: "should list",
args: args{},
name: "",
args: args{
ref: "hello/world:v1",
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
refs, err := s.List(ctx)
s, err := store.NewLayout(root)
if (err != nil) != tt.wantErr {
t.Errorf("List() error = %v, wantErr %v", err, tt.wantErr)
t.Errorf("NewOCI() error = %v, wantErr %v", err, tt.wantErr)
return
}
moci := genArtifact(t, tt.args.ref)
// TODO: Make this more robust
if len(refs) != 4 {
t.Errorf("Expected 4, got %d", len(refs))
got, err := s.AddOCI(ctx, moci, tt.args.ref)
if (err != nil) != tt.wantErr {
t.Errorf("AddOCI() error = %v, wantErr %v", err, tt.wantErr)
return
}
_ = got
_, err = s.AddOCI(ctx, moci, tt.args.ref)
if err != nil {
t.Errorf("AddOCI() error = %v, wantErr %v", err, tt.wantErr)
return
}
})
}
}
func testStore(ctx context.Context) (*Store, error) {
func setup(t *testing.T) func() error {
tmpdir, err := os.MkdirTemp("", "hauler")
if err != nil {
return nil, err
t.Fatal(err)
}
root = tmpdir
s := NewStore(ctx, tmpdir)
return s, nil
ctx = context.Background()
return func() error {
os.RemoveAll(tmpdir)
return nil
}
}
func randomImage(t *testing.T) v1.Image {
r, err := random.Image(1024, 3)
type mockArtifact struct {
v1.Image
}
func (m mockArtifact) MediaType() string {
mt, err := m.Image.MediaType()
if err != nil {
t.Fatalf("random.Image() = %v", err)
return ""
}
return r
return string(mt)
}
func addImageToStore(t *testing.T, s *Store, image v1.Image, reference string) {
ref, err := name.ParseReference(reference, name.WithDefaultRegistry(s.Registry()))
func (m mockArtifact) RawConfig() ([]byte, error) {
return m.RawConfigFile()
}
func genArtifact(t *testing.T, ref string) artifacts.OCI {
img, err := random.Image(1024, 3)
if err != nil {
t.Error(err)
t.Fatal(err)
}
if err := remote.Write(ref, image); err != nil {
t.Error(err)
return &mockArtifact{
img,
}
}

View File

@@ -1,61 +0,0 @@
package version
import (
"encoding/json"
"fmt"
"path"
"runtime"
"strings"
"text/tabwriter"
)
var (
GitVersion = "devel"
commit = "unknown"
buildDate = "unknown"
)
type Info struct {
GitVersion string
GitCommit string
BuildDate string
GoVersion string
Compiler string
Platform string
}
func GetVersionInfo() Info {
return Info{
GitVersion: GitVersion,
GitCommit: commit,
BuildDate: buildDate,
GoVersion: runtime.Version(),
Compiler: runtime.Compiler,
Platform: path.Join(runtime.GOOS, runtime.GOARCH),
}
}
func (i Info) String() string {
b := strings.Builder{}
w := tabwriter.NewWriter(&b, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "GitVersion:\t%s\n", i.GitVersion)
fmt.Fprintf(w, "GitCommit:\t%s\n", i.GitCommit)
fmt.Fprintf(w, "BuildDate:\t%s\n", i.BuildDate)
fmt.Fprintf(w, "GoVersion:\t%s\n", i.GoVersion)
fmt.Fprintf(w, "Compiler:\t%s\n", i.Compiler)
fmt.Fprintf(w, "Platform:\t%s\n", i.Platform)
w.Flush()
return b.String()
}
func (i Info) JSONString() (string, error) {
b, err := json.MarshalIndent(i, "", " ")
if err != nil {
return "", err
}
return string(b), nil
}

View File

@@ -7,5 +7,17 @@ spec:
charts:
# charts are also fetched and served as OCI content (currently experimental in helm)
# HELM_EXPERIMENTAL_OCI=1 helm chart pull <hauler-registry>/loki:2.6.2
- name: loki
repoURL: https://grafana.github.io/helm-charts
# - name: loki
# repoURL: https://grafana.github.io/helm-charts
# - name: longhorn
# repoURL: https://charts.longhorn.io
# - name: cert-manager
# repoURL: https://charts.jetstack.io
# version: v1.6.1
# extraImages:
# - ref: quay.io/jetstack/cert-manager-cainjector:v1.6.1
- name: podinfo
repoURL: https://stefanprodan.github.io/podinfo

View File

@@ -5,19 +5,18 @@ metadata:
spec:
files:
# hauler can save/redistribute files on disk (be careful! paths are relative)
- ref: testdata/contents.yaml
- path: testdata/contents.yaml
# TODO: when directories are specified, they will be archived and stored as a file
# - ref: testdata/
# when directories are specified, the directory contents will be archived and stored
- path: testdata/
# hauler can also fetch remote content, and will "smartly" identify filenames _when possible_
# filename below = "k3s-images.txt"
- ref: "https://github.com/k3s-io/k3s/releases/download/v1.22.2%2Bk3s2/k3s-images.txt"
- path: "https://github.com/k3s-io/k3s/releases/download/v1.22.2%2Bk3s2/k3s-images.txt"
# when filenames are not appropriate, a name should be specified
# this will still work, but default to a filename of "get.k3s.io"
- ref: https://get.k3s.io
name: get-k3s.sh
# when discovered filenames are not desired, a file name can be specified
- path: https://get.k3s.io
name: k3s-init.sh
---
apiVersion: content.hauler.cattle.io/v1alpha1
@@ -27,16 +26,16 @@ metadata:
spec:
images:
# images can be referenced shorthanded without a tag
- ref: hello-world
- name: hello-world
# or namespaced with a tag
- ref: rancher/cowsay:latest
- name: rancher/cowsay:latest
# or by their digest:
# - ref: registry@sha256:42043edfae481178f07aa077fa872fcc242e276d302f4ac2026d9d2eb65b955f
- name: registry@sha256:42043edfae481178f07aa077fa872fcc242e276d302f4ac2026d9d2eb65b955f
# or fully qualified from any OCI compliant registry
- ref: ghcr.io/fluxcd/flux-cli:v0.22.0
- name: ghcr.io/fluxcd/flux-cli:v0.22.0
---
apiVersion: content.hauler.cattle.io/v1alpha1

BIN
testdata/podinfo-6.0.3.tgz vendored Normal file

Binary file not shown.

Some files were not shown because too many files have changed in this diff.