mirror of
https://github.com/hauler-dev/hauler.git
synced 2026-02-14 09:59:50 +00:00
refactor to baseline on pluggable oci collection/distribution (#41)
Co-authored-by: Josh Wolf <josh@joshwolf.dev>
2
.gitignore
vendored
@@ -1,3 +1,4 @@
.DS_Store

# Vagrant
.vagrant
@@ -10,6 +11,7 @@
*.njsproj
*.sln
*.sw?
*.dir-locals.el

# old, ad-hoc ignores
artifacts
@@ -9,8 +9,8 @@ builds:
- arm64
env:
- CGO_ENABLED=0
flags:
- -tags=containers_image_openpgp containers_image_ostree
release:
extra_files:
- glob: ./pkg.tar.zst
# flags:
#   - -tags=containers_image_openpgp containers_image_ostree
#release:
#  extra_files:
#    - glob: ./pkg.tar.zst
3
Makefile
@@ -12,6 +12,9 @@ all: fmt vet install test

build:
	mkdir bin;\
	$(GO_BUILD_ENV) go build -o bin ./cmd/...;\

build-all: fmt vet
	goreleaser build --rm-dist --snapshot

install:
	$(GO_BUILD_ENV) go install
71
README.md
@@ -1,68 +1,11 @@
# Hauler - Kubernetes Air Gap Migration
# Hauler: Airgap Assistant

__⚠️ WARNING: This is an experimental, work-in-progress project. _Everything_ is subject to change, and it is actively in development, so let us know what you think!__

Hauler is built to be a one-stop shop for simplifying the burden of working with Kubernetes in airgapped environments. Utility is split into a few commands intended to assist with increasingly complex airgapped use cases.
`hauler` is a command line tool that aims to simplify the pain points that exist around airgapped Kubernetes deployments.
It remains as unopinionated as possible, and does _not_ attempt to enforce a specific cluster type or application deployment model.
Instead, it focuses solely on simplifying the primary airgap pain points:
* artifact collection
* artifact distribution

__Portable self contained clusters__:

Within the `hauler package` subset of commands, `Packages` (name to be finalized) can be created, updated, and run.

A `Package` is a hauler-specific, configurable, self-contained, compressed archive (`*.tar.zst`) that contains all dependencies needed to 1) create a kubernetes cluster, and 2) deploy resources into the cluster.

```bash
# Build a minimal portable k8s cluster
hauler package build

# Build a package that deploys resources when deployed
hauler package build -p path/to/chart -p path/to/manifests -i extra/image:latest -i busybox:musl

# Build a package that deploys a cluster, oci registry, and sample app on boot
# Note the aliases introduced
hauler pkg b -p testdata/docker-registry -p testdata/rawmanifests
```

Hauler packages at their core stand on the shoulders of other technologies (`k3s`, `rke2`, and `fleet`), and as such, are designed to be extremely flexible.

Common use cases are to build turn-key, appliance-like clusters designed to boot on disconnected or low-powered devices, or portable "utility" clusters that can act as a stepping stone for further downstream deployable infrastructure. Since every `Package` is built as an entirely self-contained archive, disconnected environments are _always_ a first-class citizen.
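
Based on the `v1alpha1.Package` fields visible elsewhere in this commit (name, driver type/version, fleet version, paths, images), a config passed via the `-c ./pkg.yaml` flag might look roughly like the sketch below. Field names are inferred from the Go structs; `kind` and `apiVersion` are left unset in the code, so they are omitted here.

```yaml
# Hypothetical pkg.yaml, mirroring the v1alpha1.Package struct (illustrative only)
metadata:
  name: pkg
spec:
  driver:
    type: k3s              # default driver in the CLI flags
    version: v1.21.1+k3s1  # default --driver-version
  fleet:
    version: v0.3.5        # default --fleet-version
  paths:
    - testdata/docker-registry
  images:
    - busybox:musl
```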

__Image Relocation__:

For disconnected workloads that don't require a cluster to be created first, images can be efficiently packaged and relocated with `hauler relocate`.

Images are stored as a compressed archive of an `oci` layout, ensuring only the required de-duplicated image layers are packaged and transferred.

## Installation

Hauler is and will always be a statically compiled binary; we strongly believe a zero-dependency tool is key to reducing operational complexity in airgap environments.

Before GA, hauler can be downloaded from the releases page for every tagged release.

## Dev

A `Vagrant` file is provided as a testing ground. The boot scripts at `vagrant-scripts/*.sh` will be run on boot to ensure the dev environment is airgapped.

```bash
vagrant up

vagrant ssh
```

More info can be found in the [vagrant docs](VAGRANT.md).

## WIP Warnings

API stability (including as a code library and as a network endpoint) is NOT guaranteed before `v1` API definitions and a 1.0 release. The following recommendations are made regarding usage patterns of hauler:
- `alpha` (`v1alpha1`, `v1alpha2`, ...) API versions: use **_only_** through `haulerctl`
- `beta` (`v1beta1`, `v1beta2`, ...) API versions: use as an **_experimental_** library and/or API endpoint
- `stable` (`v1`, `v2`, ...) API versions: use as stable CLI tool, library, and/or API endpoint

### Build

```bash
# Current arch build
make build

# Multiarch dev build
goreleaser build --rm-dist --snapshot
```
`hauler` achieves this by leaning heavily on the [oci spec](https://github.com/opencontainers), and the vast ecosystem of tooling available for fetching and distributing oci content.
@@ -1,61 +0,0 @@
package app

import (
	"context"

	"github.com/rancherfederal/hauler/pkg/oci"
	"github.com/spf13/cobra"
)

var (
	copyLong = `hauler copies artifacts stored on a registry to local disk`

	copyExample = `
# Run Hauler
hauler copy localhost:5000/artifacts:latest
`
)

type copyOpts struct {
	*rootOpts
	dir       string
	sourceRef string
}

// NewCopyCommand creates a new sub command under
// hauler for copying files to local disk
func NewCopyCommand() *cobra.Command {
	opts := &copyOpts{
		rootOpts: &ro,
	}

	cmd := &cobra.Command{
		Use:     "copy",
		Short:   "Download artifacts from OCI registry to local disk",
		Long:    copyLong,
		Example: copyExample,
		Aliases: []string{"c", "cp"},
		Args:    cobra.MinimumNArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.sourceRef = args[0]
			return opts.Run(opts.sourceRef)
		},
	}

	f := cmd.Flags()
	f.StringVarP(&opts.dir, "dir", "d", ".", "Target directory for file copy")

	return cmd
}

// Run performs the operation.
func (o *copyOpts) Run(src string) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	if err := oci.Get(ctx, src, o.dir); err != nil {
		o.logger.Errorf("error copying artifact %s to local directory %s: %v", src, o.dir, err)
	}

	return nil
}
@@ -1,42 +0,0 @@
package app

import (
	"github.com/containerd/containerd/remotes"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/spf13/cobra"
)

type ociOpts struct {
	insecure  bool
	plainHTTP bool
}

const (
	haulerMediaType = "application/vnd.oci.image"
)

func NewOCICommand() *cobra.Command {
	opts := ociOpts{}

	cmd := &cobra.Command{
		Use:   "oci",
		Short: "oci stuff",
		RunE: func(cmd *cobra.Command, args []string) error {
			return cmd.Help()
		},
	}

	cmd.AddCommand(NewOCIPushCommand())
	cmd.AddCommand(NewOCIPullCommand())

	f := cmd.Flags()
	f.BoolVarP(&opts.insecure, "insecure", "", false, "Connect to registry without certs")
	f.BoolVarP(&opts.plainHTTP, "plain-http", "", false, "Connect to registry over plain http")

	return cmd
}

func (o *ociOpts) resolver() (remotes.Resolver, error) {
	resolver := docker.NewResolver(docker.ResolverOptions{PlainHTTP: true})
	return resolver, nil
}
@@ -1,67 +0,0 @@
package app

import (
	"context"

	"github.com/deislabs/oras/pkg/content"
	"github.com/deislabs/oras/pkg/oras"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)

type ociPullOpts struct {
	ociOpts

	sourceRef string
	outDir    string
}

func NewOCIPullCommand() *cobra.Command {
	opts := ociPullOpts{}

	cmd := &cobra.Command{
		Use:     "pull",
		Short:   "oci pull",
		Aliases: []string{"p"},
		Args:    cobra.MinimumNArgs(1),
		PreRunE: func(cmd *cobra.Command, args []string) error {
			return opts.PreRun()
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.sourceRef = args[0]
			return opts.Run()
		},
	}

	f := cmd.Flags()
	f.StringVarP(&opts.outDir, "out-dir", "o", ".", "output directory")

	return cmd
}

func (o *ociPullOpts) PreRun() error {
	return nil
}

func (o *ociPullOpts) Run() error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	store := content.NewFileStore(o.outDir)
	defer store.Close()

	allowedMediaTypes := []string{
		haulerMediaType,
	}

	resolver, err := o.resolver()
	if err != nil {
		return err
	}

	desc, _, err := oras.Pull(ctx, resolver, o.sourceRef, store, oras.WithAllowedMediaTypes(allowedMediaTypes))
	if err != nil {
		return err
	}

	logrus.Infof("pulled %s with digest: %s", o.sourceRef, desc.Digest)

	return nil
}
@@ -1,74 +0,0 @@
package app

import (
	"context"
	"os"

	"github.com/deislabs/oras/pkg/content"
	"github.com/deislabs/oras/pkg/oras"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)

type ociPushOpts struct {
	ociOpts

	targetRef string
	pathRef   string
}

func NewOCIPushCommand() *cobra.Command {
	opts := ociPushOpts{}

	cmd := &cobra.Command{
		Use:     "push",
		Short:   "oci push",
		Aliases: []string{"p"},
		Args:    cobra.MinimumNArgs(2),
		PreRunE: func(cmd *cobra.Command, args []string) error {
			return opts.PreRun()
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.pathRef = args[0]
			opts.targetRef = args[1]
			return opts.Run()
		},
	}

	return cmd
}

func (o *ociPushOpts) PreRun() error {
	return nil
}

func (o *ociPushOpts) Run() error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	data, err := os.ReadFile(o.pathRef)
	if err != nil {
		return err
	}

	resolver, err := o.resolver()
	if err != nil {
		return err
	}

	store := content.NewMemoryStore()

	contents := []ocispec.Descriptor{
		store.Add(o.pathRef, haulerMediaType, data),
	}

	desc, err := oras.Push(ctx, resolver, o.targetRef, store, contents)
	if err != nil {
		return err
	}

	logrus.Infof("pushed %s to %s with digest: %s", o.pathRef, o.targetRef, desc.Digest)

	return nil
}
@@ -1,25 +0,0 @@
package app

import "github.com/spf13/cobra"

type pkgOpts struct{}

func NewPkgCommand() *cobra.Command {
	opts := &pkgOpts{}
	//TODO
	_ = opts

	cmd := &cobra.Command{
		Use:     "pkg",
		Short:   "Interact with packages",
		Aliases: []string{"p", "package"},
		RunE: func(cmd *cobra.Command, args []string) error {
			return cmd.Help()
		},
	}

	cmd.AddCommand(NewPkgBuildCommand())
	cmd.AddCommand(NewPkgRunCommand())

	return cmd
}
@@ -1,202 +0,0 @@
package app

import (
	"context"
	"os"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/driver"
	"github.com/rancherfederal/hauler/pkg/packager"
	"github.com/spf13/cobra"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

type pkgBuildOpts struct {
	*rootOpts

	cfgFile string

	name          string
	dir           string
	driver        string
	driverVersion string

	fleetVersion string

	images []string
	paths  []string
}

func NewPkgBuildCommand() *cobra.Command {
	opts := pkgBuildOpts{
		rootOpts: &ro,
	}

	cmd := &cobra.Command{
		Use:   "build",
		Short: "Build a self contained compressed archive of manifests and images",
		Long: `
Compressed archives created with this command can be extracted and run anywhere the underlying 'driver' can be run.

Archives are built by collecting all the dependencies (images and manifests) required.

Examples:

# Build a package containing a helm chart with images autodetected from the generated helm chart
hauler package build -p path/to/helm/chart

# Build a package, sourcing from multiple manifest sources and additional images not autodetected
hauler pkg build -p path/to/raw/manifests -p path/to/kustomize -i busybox:latest -i busybox:musl

# Build a package using a different version of k3s
hauler p build -p path/to/chart --driver-version "v1.20.6+k3s1"

# Build a package from a config file (if ./pkg.yaml does not exist, one will be created)
hauler package build -c ./pkg.yaml
`,
		Aliases: []string{"b"},
		PreRunE: func(cmd *cobra.Command, args []string) error {
			return opts.PreRun()
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			return opts.Run()
		},
	}

	f := cmd.PersistentFlags()
	f.StringVarP(&opts.name, "name", "n", "pkg",
		"name of the pkg to create, will dictate the file name")
	f.StringVarP(&opts.cfgFile, "config", "c", "",
		"path to config file")
	f.StringVar(&opts.dir, "directory", "",
		"Working directory for building package, if empty, an ephemeral temporary directory will be used. Set this to persist package artifacts between builds.")
	f.StringVarP(&opts.driver, "driver", "d", "k3s",
		"")
	f.StringVar(&opts.driverVersion, "driver-version", "v1.21.1+k3s1",
		"")
	f.StringVar(&opts.fleetVersion, "fleet-version", "v0.3.5",
		"")
	f.StringSliceVarP(&opts.paths, "path", "p", []string{},
		"")
	f.StringSliceVarP(&opts.images, "image", "i", []string{},
		"")

	return cmd
}

func (o *pkgBuildOpts) PreRun() error {
	_, err := os.Stat(o.cfgFile)
	if os.IsNotExist(err) {
		if o.cfgFile == "" {
			return nil
		}

		o.logger.Warnf("Did not find an existing %s, creating one", o.cfgFile)
		p := o.toPackage()

		data, err := yaml.Marshal(p)
		if err != nil {
			return err
		}

		if err := os.WriteFile(o.cfgFile, data, 0644); err != nil {
			return err
		}
	} else if err != nil {
		return err
	}

	return nil
}

func (o *pkgBuildOpts) Run() error {
	o.logger.Infof("Building package")

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	var p v1alpha1.Package
	if o.cfgFile != "" {
		o.logger.Infof("Config file '%s' specified, attempting to load existing package config", o.cfgFile)
		cfgData, err := os.ReadFile(o.cfgFile)
		if err != nil {
			return err
		}

		if err := yaml.Unmarshal(cfgData, &p); err != nil {
			return err
		}

	} else {
		o.logger.Infof("No config file specified, strictly using cli arguments")
		p = o.toPackage()
	}

	var wdir string
	if o.dir != "" {
		if _, err := os.Stat(o.dir); err != nil {
			o.logger.Errorf("Failed to use specified working directory: %v", err)
			return err
		}

		wdir = o.dir
	} else {
		tmpdir, err := os.MkdirTemp("", "hauler")
		if err != nil {
			return err
		}
		defer os.RemoveAll(tmpdir)
		wdir = tmpdir
	}

	pkgr := packager.NewPackager(wdir, o.logger)

	d := driver.NewDriver(p.Spec.Driver)
	if _, bErr := pkgr.PackageBundles(ctx, p.Spec.Paths...); bErr != nil {
		return bErr
	}

	if iErr := pkgr.PackageImages(ctx, o.images...); iErr != nil {
		return iErr
	}

	if dErr := pkgr.PackageDriver(ctx, d); dErr != nil {
		return dErr
	}

	if fErr := pkgr.PackageFleet(ctx, p.Spec.Fleet); fErr != nil {
		return fErr
	}

	a := packager.NewArchiver()
	if aErr := pkgr.Archive(a, p, o.name); aErr != nil {
		return aErr
	}

	o.logger.Successf("Finished building package")
	return nil
}

func (o *pkgBuildOpts) toPackage() v1alpha1.Package {
	p := v1alpha1.Package{
		TypeMeta: metav1.TypeMeta{
			Kind:       "",
			APIVersion: "",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: o.name,
		},
		Spec: v1alpha1.PackageSpec{
			Fleet: v1alpha1.Fleet{
				Version: o.fleetVersion,
			},
			Driver: v1alpha1.Driver{
				Type:    o.driver,
				Version: o.driverVersion,
			},
			Paths:  o.paths,
			Images: o.images,
		},
	}
	return p
}
@@ -1,84 +0,0 @@
package app

import (
	"os"
	"testing"
)

func Test_pkgBuildOpts_Run(t *testing.T) {
	l, _ := setupCliLogger(os.Stdout, "debug")
	tro := rootOpts{l}

	type fields struct {
		rootOpts      *rootOpts
		cfgFile       string
		name          string
		driver        string
		driverVersion string
		fleetVersion  string
		images        []string
		paths         []string
	}
	tests := []struct {
		name    string
		fields  fields
		wantErr bool
	}{
		{
			name: "should package all types of local manifests",
			fields: fields{
				rootOpts:      &tro,
				cfgFile:       "pkg.yaml",
				name:          "k3s",
				driver:        "k3s",
				driverVersion: "v1.21.1+k3s1",
				fleetVersion:  "v0.3.5",
				images:        nil,
				paths: []string{
					"../../../testdata/docker-registry",
					"../../../testdata/rawmanifests",
				},
			},
			wantErr: false,
		},
		{
			name: "should package using fleet.yaml",
			fields: fields{
				rootOpts:      &tro,
				cfgFile:       "pkg.yaml",
				name:          "k3s",
				driver:        "k3s",
				driverVersion: "v1.21.1+k3s1",
				fleetVersion:  "v0.3.5",
				images:        nil,
				paths: []string{
					"../../../testdata/custom",
				},
			},
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			o := &pkgBuildOpts{
				rootOpts:      tt.fields.rootOpts,
				cfgFile:       tt.fields.cfgFile,
				name:          tt.fields.name,
				driver:        tt.fields.driver,
				driverVersion: tt.fields.driverVersion,
				fleetVersion:  tt.fields.fleetVersion,
				images:        tt.fields.images,
				paths:         tt.fields.paths,
			}

			if err := o.PreRun(); err != nil {
				t.Errorf("PreRun() error = %v", err)
			}
			defer os.Remove(o.cfgFile)

			if err := o.Run(); (err != nil) != tt.wantErr {
				t.Errorf("Run() error = %v, wantErr %v", err, tt.wantErr)
			}
		})
	}
}
@@ -1,91 +0,0 @@
package app

import (
	"context"
	"os"

	"github.com/rancherfederal/hauler/pkg/bootstrap"
	"github.com/rancherfederal/hauler/pkg/driver"
	"github.com/rancherfederal/hauler/pkg/packager"
	"github.com/spf13/cobra"
)

type pkgRunOpts struct {
	*rootOpts

	cfgFile string
}

func NewPkgRunCommand() *cobra.Command {
	opts := pkgRunOpts{
		rootOpts: &ro,
	}

	cmd := &cobra.Command{
		Use:   "run",
		Short: "Run a compressed archive",
		Long: `
Run a compressed archive created from a 'hauler package build'.

Examples:

# Run a package
hauler package run pkg.tar.zst
`,
		Aliases: []string{"r"},
		Args:    cobra.MinimumNArgs(1),
		PreRunE: func(cmd *cobra.Command, args []string) error {
			return opts.PreRun()
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			return opts.Run(args[0])
		},
	}

	return cmd
}

func (o *pkgRunOpts) PreRun() error {
	return nil
}

func (o *pkgRunOpts) Run(pkgPath string) error {
	o.logger.Infof("Running from '%s'", pkgPath)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	tmpdir, err := os.MkdirTemp("", "hauler")
	if err != nil {
		return err
	}
	o.logger.Debugf("Using temporary working directory: %s", tmpdir)

	a := packager.NewArchiver()

	if err := packager.Unpackage(a, pkgPath, tmpdir); err != nil {
		return err
	}
	o.logger.Debugf("Unpackaged %s", pkgPath)

	b, err := bootstrap.NewBooter(tmpdir, o.logger)
	if err != nil {
		return err
	}

	d := driver.NewDriver(b.Package.Spec.Driver)

	if preErr := b.PreBoot(ctx, d); preErr != nil {
		return preErr
	}

	if bErr := b.Boot(ctx, d); bErr != nil {
		return bErr
	}

	if postErr := b.PostBoot(ctx, d); postErr != nil {
		return postErr
	}

	o.logger.Successf("Access the cluster with '/opt/hauler/bin/kubectl'")
	return nil
}
@@ -1,33 +0,0 @@
package app

import (
	"github.com/spf13/cobra"
)

type relocateOpts struct {
	inputFile string
	*rootOpts
}

// NewRelocateCommand creates a new sub command under
// haulerctl for relocating images and artifacts
func NewRelocateCommand() *cobra.Command {
	opts := &relocateOpts{
		rootOpts: &ro,
	}

	cmd := &cobra.Command{
		Use:     "relocate",
		Short:   "relocate images or artifacts to a registry",
		Long:    "",
		Aliases: []string{"r"},
		RunE: func(cmd *cobra.Command, args []string) error {
			return cmd.Help()
		},
	}

	cmd.AddCommand(NewRelocateArtifactsCommand(opts))
	cmd.AddCommand(NewRelocateImagesCommand(opts))

	return cmd
}
@@ -1,56 +0,0 @@
package app

import (
	"context"

	"github.com/rancherfederal/hauler/pkg/oci"
	"github.com/spf13/cobra"
)

type relocateArtifactsOpts struct {
	*relocateOpts
	destRef string
}

var (
	relocateArtifactsLong = `hauler relocate artifacts processes an archive with files to be pushed to a registry`

	relocateArtifactsExample = `
# Run Hauler
hauler relocate artifacts artifacts.tar.zst localhost:5000/artifacts:latest
`
)

// NewRelocateArtifactsCommand creates a new sub command of relocate for artifacts
func NewRelocateArtifactsCommand(relocate *relocateOpts) *cobra.Command {
	opts := &relocateArtifactsOpts{
		relocateOpts: relocate,
	}

	cmd := &cobra.Command{
		Use:     "artifacts",
		Short:   "Use artifact from bundle artifacts to populate a target file server with the artifact's contents",
		Long:    relocateArtifactsLong,
		Example: relocateArtifactsExample,
		Args:    cobra.MinimumNArgs(2),
		Aliases: []string{"a", "art", "af"},
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.inputFile = args[0]
			opts.destRef = args[1]
			return opts.Run(opts.destRef, opts.inputFile)
		},
	}

	return cmd
}

func (o *relocateArtifactsOpts) Run(dst string, input string) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	if err := oci.Put(ctx, input, dst); err != nil {
		o.logger.Errorf("error pushing artifact to registry %s: %v", dst, err)
	}

	return nil
}
@@ -1,103 +0,0 @@
package app

import (
	"os"
	"path/filepath"
	"strings"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/rancherfederal/hauler/pkg/oci"
	"github.com/rancherfederal/hauler/pkg/packager"
	"github.com/spf13/cobra"
)

var (
	relocateImagesLong = `hauler relocate images processes a bundle provided by hauler package build and copies all of
the collected images to a registry`

	relocateImagesExample = `
# Run Hauler
hauler relocate images pkg.tar.zst localhost:5000
`
)

type relocateImagesOpts struct {
	*relocateOpts
	destRef string
}

// NewRelocateImagesCommand creates a new sub command of relocate for images
func NewRelocateImagesCommand(relocate *relocateOpts) *cobra.Command {
	opts := &relocateImagesOpts{
		relocateOpts: relocate,
	}

	cmd := &cobra.Command{
		Use:     "images",
		Short:   "Use artifact from bundle images to populate a target registry with the artifact's images",
		Long:    relocateImagesLong,
		Example: relocateImagesExample,
		Args:    cobra.MinimumNArgs(2),
		Aliases: []string{"i", "img", "imgs"},
		RunE: func(cmd *cobra.Command, args []string) error {
			opts.inputFile = args[0]
			opts.destRef = args[1]
			return opts.Run(opts.destRef, opts.inputFile)
		},
	}

	return cmd
}

func (o *relocateImagesOpts) Run(dst string, input string) error {
	tmpdir, err := os.MkdirTemp("", "hauler")
	if err != nil {
		return err
	}
	o.logger.Debugf("Using temporary working directory: %s", tmpdir)

	a := packager.NewArchiver()

	if err := packager.Unpackage(a, input, tmpdir); err != nil {
		o.logger.Errorf("error unpackaging input %s: %v", input, err)
	}
	o.logger.Debugf("Unpackaged %s", input)

	path := filepath.Join(tmpdir, "layout")

	ly, err := layout.FromPath(path)
	if err != nil {
		o.logger.Errorf("error creating OCI layout: %v", err)
	}

	for nm, hash := range oci.ListImages(ly) {
		n := strings.SplitN(nm, "/", 2)

		img, err := ly.Image(hash)
		if err != nil {
			o.logger.Errorf("error creating image from layout: %v", err)
		}

		o.logger.Infof("Copy %s to %s", n[1], dst)

		dstimg := dst + "/" + n[1]

		tag, err := name.ParseReference(dstimg)
		if err != nil {
			o.logger.Errorf("err parsing destination image %s: %v", dstimg, err)
		}

		if err := remote.Write(tag, img); err != nil {
			o.logger.Errorf("error writing image to destination registry %s: %v", dst, err)
		}
	}

	return nil
}
@@ -1,81 +0,0 @@
package app

import (
	"io"
	"os"
	"time"

	"github.com/rancherfederal/hauler/pkg/log"

	"github.com/spf13/cobra"
)

var (
	loglevel string
	timeout  time.Duration

	getLong = `hauler provides CLI-based air-gap migration assistance using k3s.

Choose your functionality and build a package when internet access is available,
then deploy the package into your air-gapped environment.
`

	getExample = `
hauler pkg build
hauler pkg run pkg.tar.zst

hauler relocate artifacts artifacts.tar.zst
hauler relocate images pkg.tar.zst localhost:5000

hauler copy localhost:5000/artifacts:latest
`
)

type rootOpts struct {
	logger log.Logger
}

var ro rootOpts

// NewRootCommand defines the root hauler command
func NewRootCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:          "hauler",
		Short:        "hauler provides CLI-based air-gap migration assistance",
		Long:         getLong,
		Example:      getExample,
		SilenceUsage: true,
		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
			l, err := setupCliLogger(os.Stdout, loglevel)
			if err != nil {
				return err
			}

			ro.logger = l
			return nil
		},
		RunE: func(cmd *cobra.Command, _ []string) error {
			return cmd.Help()
		},
	}

	cobra.OnInitialize()

	cmd.AddCommand(NewRelocateCommand())
	cmd.AddCommand(NewCopyCommand())
	cmd.AddCommand(NewPkgCommand())

	f := cmd.PersistentFlags()
	f.StringVarP(&loglevel, "loglevel", "l", "debug",
		"Log level (debug, info, warn, error, fatal, panic)")
	f.DurationVar(&timeout, "timeout", 1*time.Minute,
		"TODO: timeout for operations")

	return cmd
}

func setupCliLogger(out io.Writer, level string) (log.Logger, error) {
	l := log.NewLogger(out)

	return l, nil
}
85
cmd/hauler/cli/cli.go
Normal file
@@ -0,0 +1,85 @@
package cli

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/pkg/log"
	"github.com/rancherfederal/hauler/pkg/store"
)

type rootOpts struct {
	logLevel string
	dataDir  string
	cacheDir string
}

var ro = &rootOpts{}

func New() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "hauler",
		Short: "",
		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
			log.FromContext(cmd.Context()).SetLevel(ro.logLevel)
			return nil
		},
		RunE: func(cmd *cobra.Command, args []string) error {
			return cmd.Help()
		},
	}

	pf := cmd.PersistentFlags()
	pf.StringVarP(&ro.logLevel, "log-level", "l", "info", "")
	pf.StringVar(&ro.dataDir, "content-dir", "", "Location of where to create and store contents (defaults to ~/.local/hauler)")
	pf.StringVar(&ro.cacheDir, "cache", "", "Location of where to store cache data (defaults to XDG_CACHE_DIR/hauler)")

	// Add subcommands
	addGet(cmd)
	addSave(cmd)
	addLoad(cmd)
	addServe(cmd)
	addStore(cmd)

	return cmd
}

func (o *rootOpts) getStore(ctx context.Context) (*store.Store, error) {
	dir := o.dataDir

	if o.dataDir == "" {
		// Default to userspace
		home, err := os.UserHomeDir()
		if err != nil {
			return nil, err
		}

		abs, _ := filepath.Abs(filepath.Join(home, ".local/hauler/store"))
		if err := os.MkdirAll(abs, os.ModePerm); err != nil {
			return nil, err
		}

		dir = abs
	} else {
		// Make sure directory exists and we can write to it
		if f, err := os.Stat(o.dataDir); err != nil {
			return nil, err
		} else if !f.IsDir() {
			return nil, fmt.Errorf("%s is not a directory", o.dataDir)
		} // TODO: Add writeable check

		abs, err := filepath.Abs(o.dataDir)
		if err != nil {
			return nil, err
		}

		dir = abs
	}

	s := store.NewStore(ctx, dir)
	return s, nil
}
25 cmd/hauler/cli/get.go Normal file
@@ -0,0 +1,25 @@
package cli

import (
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/get"
)

func addGet(parent *cobra.Command) {
	o := &get.Opts{}

	cmd := &cobra.Command{
		Use:   "get",
		Short: "Get OCI content from a registry",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, arg []string) error {
			ctx := cmd.Context()

			return get.Cmd(ctx, o, arg[0])
		},
	}
	o.AddArgs(cmd)

	parent.AddCommand(cmd)
}
74 cmd/hauler/cli/get/get.go Normal file
@@ -0,0 +1,74 @@
package get

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"github.com/spf13/cobra"
	"oras.land/oras-go/pkg/content"
	"oras.land/oras-go/pkg/oras"

	"github.com/rancherfederal/hauler/pkg/log"
)

type Opts struct {
	DestinationDir string
}

func (o *Opts) AddArgs(cmd *cobra.Command) {
	f := cmd.Flags()

	f.StringVar(&o.DestinationDir, "dir", "", "Directory to save contents to (defaults to current directory)")
}

func Cmd(ctx context.Context, o *Opts, reference string) error {
	l := log.FromContext(ctx)
	l.Debugf("running command `hauler get`")

	cs := content.NewFileStore(o.DestinationDir)
	defer cs.Close()

	ref, err := name.ParseReference(reference)
	if err != nil {
		return err
	}

	resolver := docker.NewResolver(docker.ResolverOptions{})

	desc, err := remote.Get(ref)
	if err != nil {
		return err
	}

	l.Debugf("Getting content of media type: %s", desc.MediaType)
	switch desc.MediaType {
	case ocispec.MediaTypeImageManifest:
		desc, artifacts, err := oras.Pull(ctx, resolver, ref.Name(), cs, oras.WithPullBaseHandler())
		if err != nil {
			return err
		}

		// TODO: Better logging
		_ = desc
		_ = artifacts
		// l.Infof("Downloaded %d artifacts: %s", len(artifacts), content.ResolveName(desc))

	case images.MediaTypeDockerSchema2Manifest:
		img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
		if err != nil {
			return err
		}

		_ = img
	default:
		return fmt.Errorf("unknown media type: %s", desc.MediaType)
	}

	return nil
}
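The `get.Cmd` above dispatches on the manifest's media type: OCI manifests go through `oras`, Docker schema 2 manifests through go-containerregistry, and anything else is rejected. A minimal, dependency-free sketch of that dispatch, using the standard media-type strings that `ocispec.MediaTypeImageManifest` and `images.MediaTypeDockerSchema2Manifest` resolve to (the helper name `pullStrategy` is illustrative, not part of the diff):

```go
package main

import "fmt"

// Media type strings as defined by the OCI image-spec and the
// Docker distribution manifest v2 spec.
const (
	mediaTypeOCIManifest    = "application/vnd.oci.image.manifest.v1+json"
	mediaTypeDockerManifest = "application/vnd.docker.distribution.manifest.v2+json"
)

// pullStrategy mirrors the switch in get.Cmd: it names which client
// would handle a manifest of the given media type, or errors out.
func pullStrategy(mediaType string) (string, error) {
	switch mediaType {
	case mediaTypeOCIManifest:
		return "oras", nil
	case mediaTypeDockerManifest:
		return "go-containerregistry", nil
	default:
		return "", fmt.Errorf("unknown media type: %s", mediaType)
	}
}

func main() {
	s, _ := pullStrategy(mediaTypeOCIManifest)
	fmt.Println(s) // oras
	s, _ = pullStrategy(mediaTypeDockerManifest)
	fmt.Println(s) // go-containerregistry
}
```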
29 cmd/hauler/cli/load.go Normal file
@@ -0,0 +1,29 @@
package cli

import (
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/load"
)

func addLoad(parent *cobra.Command) {
	o := &load.Opts{}

	cmd := &cobra.Command{
		Use:   "load",
		Short: "Load archived content into hauler's store",
		Args:  cobra.MinimumNArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()

			s, err := ro.getStore(ctx)
			if err != nil {
				return err
			}

			return load.Cmd(ctx, o, s.DataDir, args...)
		},
	}

	parent.AddCommand(cmd)
}
32 cmd/hauler/cli/load/load.go Normal file
@@ -0,0 +1,32 @@
package load

import (
	"context"

	"github.com/mholt/archiver/v3"

	"github.com/rancherfederal/hauler/pkg/log"
)

type Opts struct{}

// Cmd unpacks one or more archives into hauler's store directory.
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func Cmd(ctx context.Context, o *Opts, dir string, archiveRefs ...string) error {
	l := log.FromContext(ctx)
	l.Debugf("running command `hauler load`")

	// TODO: Support more formats?
	a := archiver.NewTarZstd()
	a.OverwriteExisting = true

	for _, archiveRef := range archiveRefs {
		l.Infof("Loading content from %s to %s", archiveRef, dir)
		err := a.Unarchive(archiveRef, dir)
		if err != nil {
			return err
		}
	}

	return nil
}
30 cmd/hauler/cli/save.go Normal file
@@ -0,0 +1,30 @@
package cli

import (
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/save"
)

func addSave(parent *cobra.Command) {
	o := &save.Opts{}

	cmd := &cobra.Command{
		Use:   "save",
		Short: "Save hauler's store into a transportable compressed archive",
		Args:  cobra.ExactArgs(0),
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()

			s, err := ro.getStore(ctx)
			if err != nil {
				return err
			}

			return save.Cmd(ctx, o, o.FileName, s.DataDir)
		},
	}
	o.AddArgs(cmd)

	parent.AddCommand(cmd)
}
55 cmd/hauler/cli/save/save.go Normal file
@@ -0,0 +1,55 @@
package save

import (
	"context"
	"os"
	"path/filepath"

	"github.com/mholt/archiver/v3"
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/pkg/log"
)

type Opts struct {
	FileName string
}

func (o *Opts) AddArgs(cmd *cobra.Command) {
	f := cmd.Flags()

	f.StringVarP(&o.FileName, "filename", "f", "pkg.tar.zst", "Name of archive")
}

// Cmd archives hauler's store directory into a single compressed file.
// TODO: Just use mholt/archiver for now, even though we don't need most of it
func Cmd(ctx context.Context, o *Opts, outputFile string, dir string) error {
	l := log.FromContext(ctx)
	l.Debugf("running command `hauler save`")

	// TODO: Support more formats?
	a := archiver.NewTarZstd()
	a.OverwriteExisting = true

	absOutputfile, err := filepath.Abs(outputFile)
	if err != nil {
		return err
	}

	l.Infof("Saving data dir (%s) as compressed archive to %s", dir, absOutputfile)
	cwd, err := os.Getwd()
	if err != nil {
		return err
	}
	defer os.Chdir(cwd)
	if err := os.Chdir(dir); err != nil {
		return err
	}

	err = a.Archive([]string{"."}, absOutputfile)
	if err != nil {
		return err
	}

	return nil
}
29 cmd/hauler/cli/serve.go Normal file
@@ -0,0 +1,29 @@
package cli

import (
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/serve"
)

func addServe(parent *cobra.Command) {
	o := &serve.Opts{}

	cmd := &cobra.Command{
		Use:   "serve",
		Short: "Serve artifacts in hauler's embedded store",
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()

			s, err := ro.getStore(ctx)
			if err != nil {
				return err
			}

			return serve.Cmd(ctx, o, s.DataDir)
		},
	}
	o.AddFlags(cmd)

	parent.AddCommand(cmd)
}
58 cmd/hauler/cli/serve/serve.go Normal file
@@ -0,0 +1,58 @@
package serve

import (
	"context"
	"fmt"
	"net/http"

	"github.com/distribution/distribution/v3/configuration"
	"github.com/distribution/distribution/v3/registry"
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/pkg/log"
)

type Opts struct {
	Port       int
	configFile string
}

func (o *Opts) AddFlags(cmd *cobra.Command) {
	f := cmd.Flags()

	f.IntVarP(&o.Port, "port", "p", 5000, "Port to listen on")
}

// Cmd serves hauler's store directory as a read-only OCI distribution registry.
func Cmd(ctx context.Context, o *Opts, dir string) error {
	l := log.FromContext(ctx)
	l.Debugf("running command `hauler serve`")

	cfg := &configuration.Configuration{
		Version: "0.1",
		Storage: configuration.Storage{
			"cache":      configuration.Parameters{"blobdescriptor": "inmemory"},
			"filesystem": configuration.Parameters{"rootdirectory": dir},

			// TODO: Ensure this is toggleable via cli arg if necessary
			"maintenance": configuration.Parameters{"readonly.enabled": true},
		},
	}
	cfg.Log.Level = "info"
	cfg.HTTP.Addr = fmt.Sprintf(":%d", o.Port)
	cfg.HTTP.Headers = http.Header{
		"X-Content-Type-Options": []string{"nosniff"},
	}

	r, err := registry.NewRegistry(ctx, cfg)
	if err != nil {
		return err
	}

	l.Infof("Starting registry listening on :%d", o.Port)
	if err = r.ListenAndServe(); err != nil {
		return err
	}

	return nil
}
69 cmd/hauler/cli/store.go Normal file
@@ -0,0 +1,69 @@
package cli

import (
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/store"
)

func addStore(parent *cobra.Command) {
	cmd := &cobra.Command{
		Use:   "store",
		Short: "Interact with hauler's content store",
		RunE: func(cmd *cobra.Command, args []string) error {
			return cmd.Help()
		},
	}

	cmd.AddCommand(
		addStoreSync(),
		addStoreGet(),
	)

	parent.AddCommand(cmd)
}

func addStoreGet() *cobra.Command {
	o := &store.GetOpts{}

	cmd := &cobra.Command{
		Use:   "get",
		Short: "Get content from hauler's embedded content store",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()

			s, err := ro.getStore(ctx)
			if err != nil {
				return err
			}

			return store.GetCmd(ctx, o, s, args[0])
		},
	}
	o.AddArgs(cmd)

	return cmd
}

func addStoreSync() *cobra.Command {
	o := &store.SyncOpts{}

	cmd := &cobra.Command{
		Use:   "sync",
		Short: "Sync content to hauler's embedded content store",
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()

			s, err := ro.getStore(ctx)
			if err != nil {
				return err
			}

			return store.SyncCmd(ctx, o, s)
		},
	}
	o.AddFlags(cmd)

	return cmd
}
43 cmd/hauler/cli/store/get.go Normal file
@@ -0,0 +1,43 @@
package store

import (
	"context"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/spf13/cobra"

	"github.com/rancherfederal/hauler/cmd/hauler/cli/get"
	"github.com/rancherfederal/hauler/pkg/log"
	"github.com/rancherfederal/hauler/pkg/store"
)

type GetOpts struct {
	DestinationDir string
}

func (o *GetOpts) AddArgs(cmd *cobra.Command) {
	f := cmd.Flags()

	f.StringVar(&o.DestinationDir, "dir", "", "Directory to save contents to (defaults to current directory)")
}

func GetCmd(ctx context.Context, o *GetOpts, s *store.Store, reference string) error {
	l := log.FromContext(ctx)
	l.Debugf("running command `hauler store get`")

	s.Open()
	defer s.Close()

	ref, err := name.ParseReference(reference)
	if err != nil {
		return err
	}

	eref := s.RelocateReference(ref)

	gopts := &get.Opts{
		DestinationDir: o.DestinationDir,
	}

	return get.Cmd(ctx, gopts, eref.Name())
}
129 cmd/hauler/cli/store/sync.go Normal file
@@ -0,0 +1,129 @@
package store

import (
	"bufio"
	"context"
	"io"
	"os"

	"github.com/spf13/cobra"
	"k8s.io/apimachinery/pkg/util/yaml"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/content"
	"github.com/rancherfederal/hauler/pkg/content/chart"
	"github.com/rancherfederal/hauler/pkg/content/driver"
	"github.com/rancherfederal/hauler/pkg/content/file"
	"github.com/rancherfederal/hauler/pkg/content/image"
	"github.com/rancherfederal/hauler/pkg/log"
	"github.com/rancherfederal/hauler/pkg/store"
)

type SyncOpts struct {
	ContentFiles []string
}

func (o *SyncOpts) AddFlags(cmd *cobra.Command) {
	f := cmd.Flags()

	f.StringSliceVarP(&o.ContentFiles, "files", "f", []string{}, "Path to content files")
}

func SyncCmd(ctx context.Context, o *SyncOpts, s *store.Store) error {
	l := log.FromContext(ctx)
	l.Debugf("running cli command `hauler store sync`")

	s.Open()
	defer s.Close()

	for _, filename := range o.ContentFiles {
		l.Debugf("Syncing content file: '%s'", filename)
		fi, err := os.Open(filename)
		if err != nil {
			return err
		}

		reader := yaml.NewYAMLReader(bufio.NewReader(fi))

		var docs [][]byte
		for {
			raw, err := reader.Read()
			if err == io.EOF {
				break
			}
			if err != nil {
				return err
			}

			docs = append(docs, raw)
		}

		for _, doc := range docs {
			gvk, err := content.ValidateType(doc)
			if err != nil {
				return err
			}

			l.Infof("Syncing content from: '%s'", gvk.String())

			switch gvk.Kind {
			case v1alpha1.FilesContentKind:
				var cfg v1alpha1.Files
				if err := yaml.Unmarshal(doc, &cfg); err != nil {
					return err
				}

				for _, f := range cfg.Spec.Files {
					oci := file.NewFile(f)
					if err := s.Add(ctx, oci); err != nil {
						return err
					}
				}

			case v1alpha1.ImagesContentKind:
				var cfg v1alpha1.Images
				if err := yaml.Unmarshal(doc, &cfg); err != nil {
					return err
				}

				for _, i := range cfg.Spec.Images {
					oci := image.NewImage(i)

					if err := s.Add(ctx, oci); err != nil {
						return err
					}
				}

			case v1alpha1.ChartsContentKind:
				var cfg v1alpha1.Charts
				if err := yaml.Unmarshal(doc, &cfg); err != nil {
					return err
				}

				for _, c := range cfg.Spec.Charts {
					oci := chart.NewChart(c)
					if err := s.Add(ctx, oci); err != nil {
						return err
					}
				}

			case v1alpha1.DriverContentKind:
				var cfg v1alpha1.Driver
				if err := yaml.Unmarshal(doc, &cfg); err != nil {
					return err
				}

				oci, err := driver.NewK3s(cfg.Spec.Version)
				if err != nil {
					return err
				}

				if err := s.Add(ctx, oci); err != nil {
					return err
				}
			}
		}
	}

	return nil
}
@@ -1,15 +1,21 @@
package main

import (
	"log"
	"context"
	"os"

	"github.com/rancherfederal/hauler/cmd/hauler/app"
	"github.com/rancherfederal/hauler/cmd/hauler/cli"
	"github.com/rancherfederal/hauler/pkg/log"
)

func main() {
	root := app.NewRootCommand()
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	if err := root.Execute(); err != nil {
		log.Fatalln(err)
	logger := log.NewLogger(os.Stdout)
	ctx = logger.WithContext(ctx)

	if err := cli.New().ExecuteContext(ctx); err != nil {
		logger.Errorf("%v", err)
	}
}
306 go.mod
@@ -1,70 +1,276 @@
module github.com/rancherfederal/hauler
|
||||
|
||||
go 1.16
|
||||
go 1.17
|
||||
|
||||
require (
|
||||
cloud.google.com/go/storage v1.8.0 // indirect
|
||||
github.com/Microsoft/go-winio v0.5.0 // indirect
|
||||
github.com/containerd/containerd v1.5.0-beta.4
|
||||
github.com/deislabs/oras v0.11.1
|
||||
github.com/docker/docker v20.10.6+incompatible // indirect
|
||||
github.com/andybalholm/brotli v1.0.2 // indirect
|
||||
github.com/containerd/containerd v1.5.7
|
||||
github.com/distribution/distribution/v3 v3.0.0-20210926092439-1563384b69df
|
||||
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 // indirect
|
||||
github.com/google/go-containerregistry v0.5.1
|
||||
github.com/hashicorp/go-multierror v1.1.1 // indirect
|
||||
github.com/imdario/mergo v0.3.12
|
||||
github.com/klauspost/compress v1.13.0 // indirect
|
||||
github.com/google/go-containerregistry v0.6.0
|
||||
github.com/klauspost/compress v1.13.4 // indirect
|
||||
github.com/klauspost/pgzip v1.2.5 // indirect
|
||||
github.com/mattn/go-isatty v0.0.14 // indirect
|
||||
github.com/mattn/go-runewidth v0.0.13 // indirect
|
||||
github.com/mholt/archiver/v3 v3.5.0
|
||||
github.com/opencontainers/go-digest v1.0.0
|
||||
github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6
|
||||
github.com/otiai10/copy v1.6.0
|
||||
github.com/pterm/pterm v0.12.24
|
||||
github.com/rancher/fleet v0.3.5
|
||||
github.com/rancher/fleet v0.3.6
|
||||
github.com/rancher/fleet/pkg/apis v0.0.0
|
||||
github.com/rancher/wrangler v0.8.4
|
||||
github.com/rs/zerolog v1.24.0
|
||||
github.com/sirupsen/logrus v1.8.1
|
||||
github.com/spf13/afero v1.6.0
|
||||
github.com/spf13/cobra v1.1.3
|
||||
github.com/spf13/cobra v1.2.1
|
||||
github.com/ulikunitz/xz v0.5.10 // indirect
|
||||
github.com/xeipuuv/gojsonpointer v0.0.0-20190809123943-df4f5c81cb3b // indirect
|
||||
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5 // indirect
|
||||
golang.org/x/tools v0.1.3 // indirect
|
||||
google.golang.org/genproto v0.0.0-20210524171403-669157292da3 // indirect
|
||||
google.golang.org/grpc v1.38.0 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
|
||||
helm.sh/helm/v3 v3.5.1
|
||||
k8s.io/apimachinery v0.21.1
|
||||
k8s.io/cli-runtime v0.20.2
|
||||
go.etcd.io/bbolt v1.3.6
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519 // indirect
|
||||
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac // indirect
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
|
||||
helm.sh/helm/v3 v3.7.1
|
||||
k8s.io/api v0.22.1
|
||||
k8s.io/apimachinery v0.22.1
|
||||
k8s.io/cli-runtime v0.22.1
|
||||
k8s.io/client-go v11.0.1-0.20190816222228-6d55c1b1f1ca+incompatible
|
||||
oras.land/oras-go v0.4.0
|
||||
sigs.k8s.io/cli-utils v0.23.1
|
||||
sigs.k8s.io/controller-runtime v0.9.0
|
||||
sigs.k8s.io/yaml v1.2.0
|
||||
)
|
||||
|
||||
require (
|
||||
cloud.google.com/go v0.83.0 // indirect
|
||||
cloud.google.com/go/storage v1.10.0 // indirect
|
||||
github.com/Azure/azure-sdk-for-go v56.3.0+incompatible // indirect
|
||||
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 // indirect
|
||||
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
|
||||
github.com/Azure/go-autorest/autorest v0.11.20 // indirect
|
||||
github.com/Azure/go-autorest/autorest/adal v0.9.15 // indirect
|
||||
github.com/Azure/go-autorest/autorest/azure/auth v0.4.2 // indirect
|
||||
github.com/Azure/go-autorest/autorest/azure/cli v0.3.1 // indirect
|
||||
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
|
||||
github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
|
||||
github.com/Azure/go-autorest/autorest/validation v0.2.0 // indirect
|
||||
github.com/Azure/go-autorest/logger v0.2.1 // indirect
|
||||
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
|
||||
github.com/BurntSushi/toml v0.3.1 // indirect
|
||||
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd // indirect
|
||||
github.com/Masterminds/goutils v1.1.1 // indirect
|
||||
github.com/Masterminds/semver v1.5.0 // indirect
|
||||
github.com/Masterminds/semver/v3 v3.1.1 // indirect
|
||||
github.com/Masterminds/sprig/v3 v3.2.2 // indirect
|
||||
github.com/Masterminds/squirrel v1.5.0 // indirect
|
||||
github.com/PuerkitoBio/purell v1.1.1 // indirect
|
||||
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
|
||||
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d // indirect
|
||||
github.com/argoproj/argo-cd v1.8.7 // indirect
|
||||
github.com/argoproj/gitops-engine v0.3.3 // indirect
|
||||
github.com/argoproj/pkg v0.2.0 // indirect
|
||||
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect
|
||||
github.com/aws/aws-sdk-go v1.35.24 // indirect
|
||||
github.com/beorn7/perks v1.0.1 // indirect
|
||||
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
|
||||
github.com/blang/semver v3.5.1+incompatible // indirect
|
||||
github.com/bombsimon/logrusr v1.0.0 // indirect
|
||||
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 // indirect
|
||||
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd // indirect
|
||||
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b // indirect
|
||||
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 // indirect
|
||||
github.com/cespare/xxhash/v2 v2.1.1 // indirect
|
||||
github.com/chai2010/gettext-go v0.0.0-20170215093142-bf70f2a70fb1 // indirect
|
||||
github.com/cheggaaa/pb v1.0.27 // indirect
|
||||
github.com/containerd/stargz-snapshotter/estargz v0.7.0 // indirect
|
||||
github.com/cyphar/filepath-securejoin v0.2.2 // indirect
|
||||
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||
github.com/dimchansky/utfbom v1.1.0 // indirect
|
||||
github.com/docker/cli v20.10.9+incompatible // indirect
|
||||
github.com/docker/distribution v2.7.1+incompatible // indirect
|
||||
github.com/docker/docker v20.10.9+incompatible // indirect
|
||||
github.com/docker/docker-credential-helpers v0.6.4 // indirect
|
||||
github.com/docker/go-connections v0.4.0 // indirect
|
||||
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
|
||||
github.com/docker/go-metrics v0.0.1 // indirect
|
||||
github.com/docker/go-units v0.4.0 // indirect
|
||||
github.com/dsnet/compress v0.0.1 // indirect
|
||||
github.com/emicklei/go-restful v2.9.5+incompatible // indirect
|
||||
github.com/emirpasic/gods v1.12.0 // indirect
|
||||
github.com/evanphx/json-patch v4.11.0+incompatible // indirect
|
||||
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
|
||||
github.com/fatih/camelcase v1.0.0 // indirect
|
||||
github.com/fatih/color v1.9.0 // indirect
|
||||
github.com/felixge/httpsnoop v1.0.1 // indirect
|
||||
github.com/ghodss/yaml v1.0.0 // indirect
|
||||
github.com/go-errors/errors v1.0.1 // indirect
|
||||
github.com/go-logr/logr v0.4.0 // indirect
|
||||
github.com/go-openapi/jsonpointer v0.19.3 // indirect
|
||||
github.com/go-openapi/jsonreference v0.19.3 // indirect
|
||||
github.com/go-openapi/spec v0.19.5 // indirect
|
||||
github.com/go-openapi/swag v0.19.5 // indirect
|
||||
github.com/gobwas/glob v0.2.3 // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang-jwt/jwt/v4 v4.0.0 // indirect
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
|
||||
github.com/golang/protobuf v1.5.2 // indirect
|
||||
github.com/golang/snappy v0.0.3 // indirect
|
||||
github.com/gomodule/redigo v2.0.0+incompatible // indirect
|
||||
github.com/google/btree v1.0.0 // indirect
|
||||
github.com/google/go-cmp v0.5.6 // indirect
|
||||
github.com/google/gofuzz v1.1.0 // indirect
|
||||
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
|
||||
github.com/google/uuid v1.2.0 // indirect
|
||||
github.com/googleapis/gax-go/v2 v2.0.5 // indirect
|
||||
github.com/googleapis/gnostic v0.5.5 // indirect
|
||||
github.com/gorilla/handlers v1.5.1 // indirect
|
||||
github.com/gorilla/mux v1.8.0 // indirect
|
||||
github.com/gosuri/uitable v0.0.4 // indirect
|
||||
github.com/goware/prefixer v0.0.0-20160118172347-395022866408 // indirect
|
||||
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4 // indirect
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
|
||||
github.com/hashicorp/errwrap v1.0.0 // indirect
|
||||
github.com/hashicorp/go-cleanhttp v0.5.1 // indirect
|
||||
github.com/hashicorp/go-getter v1.4.1 // indirect
|
||||
github.com/hashicorp/go-multierror v1.1.0 // indirect
|
||||
github.com/hashicorp/go-retryablehttp v0.6.6 // indirect
|
||||
github.com/hashicorp/go-rootcerts v1.0.1 // indirect
|
||||
github.com/hashicorp/go-safetemp v1.0.0 // indirect
|
||||
github.com/hashicorp/go-sockaddr v1.0.2 // indirect
|
||||
github.com/hashicorp/go-version v1.2.0 // indirect
|
||||
github.com/hashicorp/golang-lru v0.5.4 // indirect
|
||||
github.com/hashicorp/hcl v1.0.0 // indirect
|
||||
github.com/hashicorp/vault/api v1.0.4 // indirect
|
||||
github.com/hashicorp/vault/sdk v0.1.13 // indirect
|
||||
github.com/howeyc/gopass v0.0.0-20170109162249-bf9dde6d0d2c // indirect
|
||||
github.com/huandu/xstrings v1.3.2 // indirect
|
||||
github.com/imdario/mergo v0.3.12 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.0.0 // indirect
|
||||
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
|
||||
github.com/jmespath/go-jmespath v0.4.0 // indirect
|
||||
github.com/jmoiron/sqlx v1.3.1 // indirect
|
||||
github.com/jonboulle/clockwork v0.1.0 // indirect
|
||||
github.com/json-iterator/go v1.1.11 // indirect
|
||||
github.com/jstemmer/go-junit-report v0.9.1 // indirect
|
||||
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
|
||||
github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd // indirect
|
||||
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
|
||||
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
|
||||
github.com/lib/pq v1.10.0 // indirect
|
||||
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
|
||||
github.com/mailru/easyjson v0.7.0 // indirect
|
||||
github.com/mattn/go-colorable v0.1.6 // indirect
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
|
||||
github.com/mitchellh/copystructure v1.1.1 // indirect
|
||||
github.com/mitchellh/go-homedir v1.1.0 // indirect
|
||||
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
|
||||
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
|
||||
github.com/mitchellh/mapstructure v1.4.1 // indirect
|
||||
github.com/mitchellh/reflectwalk v1.0.1 // indirect
|
||||
github.com/moby/locker v1.0.1 // indirect
|
||||
github.com/moby/spdystream v0.2.0 // indirect
|
||||
github.com/moby/term v0.0.0-20201216013528-df9cb8a40635 // indirect
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||
github.com/modern-go/reflect2 v1.0.1 // indirect
|
||||
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
|
||||
github.com/morikuni/aec v1.0.0 // indirect
|
||||
github.com/mozilla-services/yaml v0.0.0-20191106225358-5c216288813c // indirect
|
||||
github.com/nwaples/rardecode v1.1.0 // indirect
|
||||
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
|
||||
github.com/pierrec/lz4 v2.0.5+incompatible // indirect
|
||||
github.com/pierrec/lz4/v4 v4.0.3 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||
github.com/prometheus/client_golang v1.11.0 // indirect
|
||||
github.com/prometheus/client_model v0.2.0 // indirect
|
||||
github.com/prometheus/common v0.26.0 // indirect
|
||||
github.com/prometheus/procfs v0.6.0 // indirect
|
||||
	github.com/rancher/lasso v0.0.0-20210616224652-fc3ebd901c08 // indirect
	github.com/rivo/uniseg v0.2.0 // indirect
	github.com/robfig/cron v1.1.0 // indirect
	github.com/rubenv/sql-migrate v0.0.0-20210614095031-55d5740dbbcc // indirect
	github.com/russross/blackfriday v1.5.2 // indirect
	github.com/ryanuber/go-glob v1.0.0 // indirect
	github.com/sergi/go-diff v1.1.0 // indirect
	github.com/shopspring/decimal v1.2.0 // indirect
	github.com/spf13/cast v1.3.1 // indirect
	github.com/spf13/pflag v1.0.5 // indirect
	github.com/src-d/gcfg v1.4.0 // indirect
	github.com/stretchr/testify v1.7.0 // indirect
	github.com/xanzy/ssh-agent v0.2.1 // indirect
	github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
	github.com/xeipuuv/gojsonschema v1.2.0 // indirect
	github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
	github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca // indirect
	github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 // indirect
	github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 // indirect
	github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f // indirect
	go.mozilla.org/gopgagent v0.0.0-20170926210634-4d7ea76ff71a // indirect
	go.mozilla.org/sops/v3 v3.6.1 // indirect
	go.opencensus.io v0.23.0 // indirect
	go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
	golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
	golang.org/x/mod v0.4.2 // indirect
	golang.org/x/net v0.0.0-20210525063256-abc453219eb5 // indirect
	golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c // indirect
	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
	golang.org/x/text v0.3.6 // indirect
	golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba // indirect
	golang.org/x/tools v0.1.5 // indirect
	golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
	google.golang.org/api v0.47.0 // indirect
	google.golang.org/appengine v1.6.7 // indirect
	google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c // indirect
	google.golang.org/grpc v1.38.0 // indirect
	google.golang.org/protobuf v1.26.0 // indirect
	gopkg.in/gorp.v1 v1.7.2 // indirect
	gopkg.in/inf.v0 v0.9.1 // indirect
	gopkg.in/ini.v1 v1.62.0 // indirect
	gopkg.in/square/go-jose.v2 v2.5.1 // indirect
	gopkg.in/src-d/go-billy.v4 v4.3.2 // indirect
	gopkg.in/src-d/go-git.v4 v4.13.1 // indirect
	gopkg.in/urfave/cli.v1 v1.20.0 // indirect
	gopkg.in/warnings.v0 v0.1.2 // indirect
	gopkg.in/yaml.v2 v2.4.0 // indirect
	gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
	k8s.io/apiextensions-apiserver v0.22.1 // indirect
	k8s.io/apiserver v0.22.1 // indirect
	k8s.io/component-base v0.21.3 // indirect
	k8s.io/component-helpers v0.21.3 // indirect
	k8s.io/klog/v2 v2.9.0 // indirect
	k8s.io/kube-aggregator v0.20.4 // indirect
	k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7 // indirect
	k8s.io/kubectl v0.22.1 // indirect
	k8s.io/kubernetes v1.21.3 // indirect
	k8s.io/utils v0.0.0-20210527160623-6fdb442a123b // indirect
	sigs.k8s.io/kustomize/api v0.8.8 // indirect
	sigs.k8s.io/kustomize/kyaml v0.10.17 // indirect
	sigs.k8s.io/structured-merge-diff/v4 v4.1.2 // indirect
	sigs.k8s.io/yaml v1.2.0 // indirect
)

replace (
	github.com/docker/docker => github.com/moby/moby v20.10.8+incompatible
	github.com/rancher/fleet/pkg/apis v0.0.0 => github.com/rancher/fleet/pkg/apis v0.0.0-20210604212701-3a76c78716ab
	helm.sh/helm/v3 => github.com/rancher/helm/v3 v3.3.3-fleet1
	k8s.io/api => k8s.io/api v0.20.2
	k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.20.2 // indirect
	k8s.io/apimachinery => k8s.io/apimachinery v0.20.2 // indirect
	k8s.io/apiserver => k8s.io/apiserver v0.20.2
	k8s.io/cli-runtime => k8s.io/cli-runtime v0.20.2
	k8s.io/client-go => github.com/rancher/client-go v0.20.0-fleet1
	k8s.io/cloud-provider => k8s.io/cloud-provider v0.20.2
	k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.20.2
	k8s.io/code-generator => k8s.io/code-generator v0.20.2
	k8s.io/component-base => k8s.io/component-base v0.20.2
	k8s.io/component-helpers => k8s.io/component-helpers v0.20.2
	k8s.io/controller-manager => k8s.io/controller-manager v0.20.2
	k8s.io/cri-api => k8s.io/cri-api v0.20.2
	k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.20.2
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.20.2
	k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.20.2
	k8s.io/kube-proxy => k8s.io/kube-proxy v0.20.2
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.20.2
	k8s.io/kubectl => k8s.io/kubectl v0.20.2
	k8s.io/kubelet => k8s.io/kubelet v0.20.2
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.20.2
	k8s.io/metrics => k8s.io/metrics v0.20.2
	k8s.io/mount-utils => k8s.io/mount-utils v0.20.2
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.20.2
	k8s.io/api => k8s.io/api v0.21.3
	k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.21.3 // indirect
	k8s.io/apimachinery => k8s.io/apimachinery v0.21.3 // indirect
	k8s.io/apiserver => k8s.io/apiserver v0.21.3
	k8s.io/cli-runtime => k8s.io/cli-runtime v0.21.3
	k8s.io/client-go => github.com/rancher/client-go v0.21.3-fleet1
	k8s.io/cloud-provider => k8s.io/cloud-provider v0.21.3
	k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.21.3
	k8s.io/code-generator => k8s.io/code-generator v0.21.3
	k8s.io/component-base => k8s.io/component-base v0.21.3
	k8s.io/component-helpers => k8s.io/component-helpers v0.21.3
	k8s.io/controller-manager => k8s.io/controller-manager v0.21.3
	k8s.io/cri-api => k8s.io/cri-api v0.21.3
	k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.21.3
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.21.3
	k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.21.3
	k8s.io/kube-proxy => k8s.io/kube-proxy v0.21.3
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.21.3
	k8s.io/kubectl => k8s.io/kubectl v0.21.3
	k8s.io/kubelet => k8s.io/kubelet v0.21.3
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.21.3
	k8s.io/metrics => k8s.io/metrics v0.21.3
	k8s.io/mount-utils => k8s.io/mount-utils v0.21.3
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.21.3
)

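The go.mod hunk above leans entirely on `replace` directives to pin the Rancher forks and k8s.io versions. A minimal sketch of what such a directive does (hypothetical module path, illustrative only):

```
module example.com/demo

require k8s.io/client-go v0.21.3

// Every import of k8s.io/client-go in this module's build now resolves to
// the Rancher fork, mirroring the pin used in the replace block above.
replace k8s.io/client-go => github.com/rancher/client-go v0.21.3-fleet1
```

Note that `replace` only affects the main module's own build; consumers of a library do not inherit its replace directives.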
28  pkg/apis/hauler.cattle.io/v1alpha1/chart.go  Normal file
@@ -0,0 +1,28 @@
package v1alpha1

import (
	"database/sql"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const ChartsContentKind = "Charts"

type Charts struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ChartSpec `json:"spec,omitempty"`
}

type ChartSpec struct {
	Charts []Chart `json:"charts,omitempty"`
}

type Chart struct {
	Name    string `json:"name"`
	RepoURL string `json:"repoURL"`
	Version string `json:"version"`

	bleh sql.ColumnType
}
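For orientation, the `Charts` type above corresponds to a manifest shaped like this sketch (apiVersion follows the `content.hauler.cattle.io`/`v1alpha1` group registered in `groupversion_info.go` later in this diff; the chart values themselves are hypothetical):

```yaml
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Charts
metadata:
  name: example-charts
spec:
  charts:
    - name: fleet               # hypothetical example chart
      repoURL: https://rancher.github.io/fleet-helm-charts
      version: 0.3.5
```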
@@ -1,91 +1,21 @@
package v1alpha1

import (
	"sigs.k8s.io/cli-utils/pkg/object"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type Drive interface {
	Images() ([]string, error)
	BinURL() string
const (
	DriverContentKind = "Driver"
)

	LibPath() string
	EtcPath() string
	Config() (*map[string]interface{}, error)
	SystemObjects() (objs []object.ObjMetadata)
type Driver struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec DriverSpec `json:"spec"`
}

//Driver
type Driver struct {
type DriverSpec struct {
	Type    string `json:"type"`
	Version string `json:"version"`
}

////TODO: Don't hardcode this
//func (k k3s) BinURL() string {
//	return "https://github.com/k3s-io/k3s/releases/download/v1.21.1%2Bk3s1/k3s"
//}
//
//func (k k3s) PackageImages() ([]string, error) {
//	//TODO: Replace this with a query to images.txt on release page
//	return []string{
//		"docker.io/rancher/coredns-coredns:1.8.3",
//		"docker.io/rancher/klipper-helm:v0.5.0-build20210505",
//		"docker.io/rancher/klipper-lb:v0.2.0",
//		"docker.io/rancher/library-busybox:1.32.1",
//		"docker.io/rancher/library-traefik:2.4.8",
//		"docker.io/rancher/local-path-provisioner:v0.0.19",
//		"docker.io/rancher/metrics-server:v0.3.6",
//		"docker.io/rancher/pause:3.1",
//	}, nil
//}
//
//func (k k3s) Config() (*map[string]interface{}, error) {
//	// TODO: This should be typed
//	c := make(map[string]interface{})
//	c["write-kubeconfig-mode"] = "0644"
//
//	//TODO: Add uid or something to ensure this works for multi-node setups
//	c["node-name"] = "hauler"
//
//	return &c, nil
//}
//
//func (k k3s) SystemObjects() (objs []object.ObjMetadata) {
//	//TODO: Make sure this matches up with specified config disables
//	for _, dep := range []string{"coredns", "local-path-provisioner", "metrics-server"} {
//		objMeta, _ := object.CreateObjMetadata("kube-system", dep, schema.GroupKind{Kind: "Deployment", Group: "apps"})
//		objs = append(objs, objMeta)
//	}
//	return objs
//}
//
//func (k k3s) LibPath() string { return "/var/lib/rancher/k3s" }
//func (k k3s) EtcPath() string { return "/etc/rancher/k3s" }
//
////TODO: Implement rke2 as a driver
//type rke2 struct{}
//
//func (r rke2) PackageImages() ([]string, error) { return []string{}, nil }
//func (r rke2) BinURL() string { return "" }
//func (r rke2) LibPath() string { return "" }
//func (r rke2) EtcPath() string { return "" }
//func (r rke2) Config() (*map[string]interface{}, error) { return nil, nil }
//func (r rke2) SystemObjects() (objs []object.ObjMetadata) { return objs }
//
////NewDriver will return the appropriate driver given a kind, defaults to k3s
//func NewDriver(kind string) Drive {
//	var d Drive
//	switch kind {
//	case "rke2":
//		//TODO
//		d = rke2{}
//
//	default:
//		d = k3s{
//			dataDir: "/var/lib/rancher/k3s",
//			etcDir:  "/etc/rancher/k3s",
//		}
//	}
//
//	return d
//}

23  pkg/apis/hauler.cattle.io/v1alpha1/file.go  Normal file
@@ -0,0 +1,23 @@
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const FilesContentKind = "Files"

type Files struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec FileSpec `json:"spec,omitempty"`
}

type FileSpec struct {
	Files []File `json:"files,omitempty"`
}

type File struct {
	Ref  string `json:"ref"`
	Name string `json:"name,omitempty"`
}
@@ -1,32 +0,0 @@
package v1alpha1

import (
	"fmt"
	"strings"
)

//Fleet is used as the deployment engine for all things Hauler
type Fleet struct {
	//Version of fleet to package and use in deployment
	Version string `json:"version"`
}

//TODO: These should be identified from the chart version
func (f Fleet) Images() ([]string, error) {
	return []string{
		fmt.Sprintf("rancher/gitjob:v0.1.15"),
		fmt.Sprintf("rancher/fleet:%s", f.Version),
		fmt.Sprintf("rancher/fleet-agent:%s", f.Version),
	}, nil
}

func (f Fleet) CRDChart() string {
	return fmt.Sprintf("https://github.com/rancher/fleet/releases/download/%s/fleet-crd-%s.tgz", f.Version, f.VLess())
}
func (f Fleet) Chart() string {
	return fmt.Sprintf("https://github.com/rancher/fleet/releases/download/%s/fleet-%s.tgz", f.Version, f.VLess())
}

func (f Fleet) VLess() string {
	return strings.ReplaceAll(f.Version, "v", "")
}
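The deleted Fleet helpers above build release URLs from a `v`-prefixed version. A standalone sketch of the same logic (names lowercased here so it compiles on its own; behavior copied from the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// vless mirrors Fleet.VLess above: strip "v" so the chart filename
// matches fleet's release asset naming.
func vless(version string) string { return strings.ReplaceAll(version, "v", "") }

// chartURL mirrors Fleet.Chart above: tag keeps the "v", filename drops it.
func chartURL(version string) string {
	return fmt.Sprintf("https://github.com/rancher/fleet/releases/download/%s/fleet-%s.tgz",
		version, vless(version))
}

func main() {
	fmt.Println(chartURL("v0.3.5"))
	// → https://github.com/rancher/fleet/releases/download/v0.3.5/fleet-0.3.5.tgz
}
```

One caveat worth noting: `strings.ReplaceAll` strips every `v`, not just the prefix, so a hypothetical pre-release like `v0.4.0-dev` would come out as `0.4.0-de`; `strings.TrimPrefix(version, "v")` would be the safer choice.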
16  pkg/apis/hauler.cattle.io/v1alpha1/groupversion_info.go  Normal file
@@ -0,0 +1,16 @@
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

const (
	ContentGroup = "content.hauler.cattle.io"
)

var (
	GroupVersion = schema.GroupVersion{Group: ContentGroup, Version: "v1alpha1"}

	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
)
22  pkg/apis/hauler.cattle.io/v1alpha1/image.go  Normal file
@@ -0,0 +1,22 @@
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const ImagesContentKind = "Images"

type Images struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ImageSpec `json:"spec,omitempty"`
}

type ImageSpec struct {
	Images []Image `json:"images,omitempty"`
}

type Image struct {
	Ref string `json:"ref"`
}
@@ -1,53 +0,0 @@
package v1alpha1

import (
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

const (
	BundlesDir = "bundles"
	LayoutDir  = "layout"
	BinDir     = "bin"
	ChartDir   = "charts"

	PackageFile = "package.json"
)

type Package struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec PackageSpec `json:"spec"`
}

type PackageSpec struct {
	Fleet Fleet `json:"fleet"`

	Driver Driver `json:"driver"`

	// Paths is the list of directories relative to the working directory contains all resources to be bundled.
	// path globbing is supported, for example [ "charts/*" ] will match all folders as a subdirectory of charts/
	// If empty, "/" is the default
	Paths []string `json:"paths,omitempty"`

	Images []string `json:"images,omitempty"`
}

//LoadPackageFromDir will load an existing package from a directory on disk, it fails if no PackageFile is found in dir
func LoadPackageFromDir(path string) (Package, error) {
	data, err := os.ReadFile(filepath.Join(path, PackageFile))
	if err != nil {
		return Package{}, err
	}

	var p Package
	if err := yaml.Unmarshal(data, &p); err != nil {
		return Package{}, err
	}

	return p, nil
}
@@ -1,174 +0,0 @@
package bootstrap

import (
	"bytes"
	"context"
	"fmt"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
	"github.com/otiai10/copy"
	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/driver"
	"github.com/rancherfederal/hauler/pkg/fs"
	"github.com/rancherfederal/hauler/pkg/log"
	"helm.sh/helm/v3/pkg/chart/loader"
	"io"
	"os"
	"path/filepath"
)

type Booter interface {
	Init() error
	PreBoot(context.Context) error
	Boot(context.Context, driver.Driver) error
	PostBoot(context.Context, driver.Driver) error
}

type booter struct {
	Package v1alpha1.Package
	fs      fs.PkgFs

	logger log.Logger
}

//NewBooter will build a new booter given a path to a directory containing a hauler package.json
func NewBooter(pkgPath string, logger log.Logger) (*booter, error) {
	pkg, err := v1alpha1.LoadPackageFromDir(pkgPath)
	if err != nil {
		return nil, err
	}

	fsys := fs.NewPkgFS(pkgPath)

	return &booter{
		Package: pkg,
		fs:      fsys,
		logger:  logger,
	}, nil
}

func (b booter) PreBoot(ctx context.Context, d driver.Driver) error {
	b.logger.Infof("Beginning pre boot")

	//TODO: Feel like there's a better way to do all this dir creation

	if err := os.MkdirAll(d.DataPath(), os.ModePerm); err != nil {
		return err
	}

	//TODO: Don't hardcode this
	binPath := filepath.Join("/opt/hauler/bin")
	if err := b.move(b.fs.Bin(), binPath, os.ModePerm); err != nil {
		return err
	}

	bundlesPath := d.DataPath("server/manifests/hauler")
	if err := b.move(b.fs.Bundle(), bundlesPath, 0700); err != nil {
		return err
	}

	chartsPath := d.DataPath("server/static/charts/hauler")
	if err := b.move(b.fs.Chart(), chartsPath, 0700); err != nil {
		return err
	}

	//Images are slightly different b/c we convert before move as well
	//TODO: refactor this better
	if err := b.moveImages(d); err != nil {
		return err
	}

	b.logger.Debugf("Writing %s config", d.Name())
	if err := d.WriteConfig(); err != nil {
		return err
	}

	b.logger.Successf("Completed pre boot")
	return nil
}

func (b booter) Boot(ctx context.Context, d driver.Driver) error {
	b.logger.Infof("Beginning boot")

	var stdoutBuf, stderrBuf bytes.Buffer
	out := io.MultiWriter(os.Stdout, &stdoutBuf, &stderrBuf)

	err := d.Start(out)
	if err != nil {
		return err
	}

	b.logger.Infof("Waiting for driver core components to provision...")
	waitErr := waitForDriver(ctx, d)
	if waitErr != nil {
		return err
	}

	b.logger.Successf("Completed boot")
	return nil
}

func (b booter) PostBoot(ctx context.Context, d driver.Driver) error {
	b.logger.Infof("Beginning post boot")

	cf := NewBootConfig("fleet-system", d.KubeConfigPath())

	fleetCrdChartPath := b.fs.Chart().Path(fmt.Sprintf("fleet-crd-%s.tgz", b.Package.Spec.Fleet.VLess()))
	fleetCrdChart, err := loader.Load(fleetCrdChartPath)
	if err != nil {
		return err
	}

	b.logger.Infof("Installing fleet crds")
	fleetCrdRelease, fleetCrdErr := installChart(cf, fleetCrdChart, "fleet-crd", nil, b.logger)
	if fleetCrdErr != nil {
		return fleetCrdErr
	}

	b.logger.Infof("Installed '%s' to namespace '%s'", fleetCrdRelease.Name, fleetCrdRelease.Namespace)

	fleetChartPath := b.fs.Chart().Path(fmt.Sprintf("fleet-%s.tgz", b.Package.Spec.Fleet.VLess()))
	fleetChart, err := loader.Load(fleetChartPath)
	if err != nil {
		return err
	}

	b.logger.Infof("Installing fleet")
	fleetRelease, fleetErr := installChart(cf, fleetChart, "fleet", nil, b.logger)
	if fleetErr != nil {
		return fleetErr
	}

	b.logger.Infof("Installed '%s' to namespace '%s'", fleetRelease.Name, fleetRelease.Namespace)

	b.logger.Successf("Completed post boot")
	return nil
}

//TODO: Move* will actually just copy. This is more expensive, but is much safer/easier at handling deep merges, should this change?
func (b booter) move(fsys fs.PkgFs, path string, mode os.FileMode) error {
	if err := os.MkdirAll(path, mode); err != nil {
		return err
	}

	err := copy.Copy(fsys.Path(), path)
	if !os.IsNotExist(err) && err != nil {
		return err
	}

	return nil
}

func (b booter) moveImages(d driver.Driver) error {
	//NOTE: archives are not recursively searched, this _must_ be at the images dir
	path := d.DataPath("agent/images")
	if err := os.MkdirAll(path, 0700); err != nil {
		return err
	}

	refs, err := b.fs.MapLayout()
	if err != nil {
		return err
	}

	return tarball.MultiRefWriteToFile(filepath.Join(path, "hauler.tar"), refs)
}
@@ -1,29 +0,0 @@
package bootstrap

import (
	"k8s.io/cli-runtime/pkg/genericclioptions"
)

type BootSettings struct {
	config     *genericclioptions.ConfigFlags
	Namespace  string
	KubeConfig string
}

func NewBootConfig(ns, kubepath string) *BootSettings {
	env := &BootSettings{
		Namespace:  ns,
		KubeConfig: kubepath,
	}

	env.config = &genericclioptions.ConfigFlags{
		Namespace:  &env.Namespace,
		KubeConfig: &env.KubeConfig,
	}
	return env
}

// RESTClientGetter gets the kubeconfig from BootSettings
func (s *BootSettings) RESTClientGetter() genericclioptions.RESTClientGetter {
	return s.config
}
@@ -1,20 +0,0 @@
package bootstrap

import (
	"testing"
)

func TestBootSettings(t *testing.T) {

	ns := "test"
	kpath := "somepath"

	settings := NewBootConfig(ns, kpath)

	if settings.Namespace != ns {
		t.Errorf("expected namespace %q, got %q", ns, settings.Namespace)
	}
	if settings.KubeConfig != kpath {
		t.Errorf("expected kube-config %q, got %q", kpath, settings.KubeConfig)
	}
}
@@ -1,63 +0,0 @@
package bootstrap

import (
	"context"
	"errors"
	"github.com/rancherfederal/hauler/pkg/driver"
	"github.com/rancherfederal/hauler/pkg/kube"
	"github.com/rancherfederal/hauler/pkg/log"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/release"
	"os"
	"time"
)

func waitForDriver(ctx context.Context, d driver.Driver) error {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Minute)
	defer cancel()

	//TODO: This is a janky way of waiting for file to exist
	for {
		_, err := os.Stat(d.KubeConfigPath())
		if err == nil {
			break
		}

		if ctx.Err() == context.DeadlineExceeded {
			return errors.New("timed out waiting for driver to provision")
		}

		time.Sleep(1 * time.Second)
	}

	cfg, err := kube.NewKubeConfig()
	if err != nil {
		return err
	}

	sc, err := kube.NewStatusChecker(cfg, 5*time.Second, 5*time.Minute)
	if err != nil {
		return err
	}

	return sc.WaitForCondition(d.SystemObjects()...)
}

//TODO: This is likely way too fleet specific
func installChart(cf *BootSettings, chart *chart.Chart, releaseName string, vals map[string]interface{}, logger log.Logger) (*release.Release, error) {
	actionConfig := new(action.Configuration)
	if err := actionConfig.Init(cf.RESTClientGetter(), cf.Namespace, os.Getenv("HELM_DRIVER"), logger.Debugf); err != nil {
		return nil, err
	}

	client := action.NewInstall(actionConfig)
	client.ReleaseName = releaseName
	client.CreateNamespace = true
	client.Wait = true

	//TODO: Do this better
	client.Namespace = cf.Namespace

	return client.Run(chart, vals)
}
140  pkg/content/chart/chart.go  Normal file
@@ -0,0 +1,140 @@
package chart

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"os"
	"strings"

	"github.com/containerd/containerd/remotes"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/google/go-containerregistry/pkg/name"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/cli"
	"k8s.io/client-go/util/jsonpath"
	"oras.land/oras-go/pkg/content"
	"oras.land/oras-go/pkg/oras"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/log"
)

const (
	// OCIScheme is the URL scheme for OCI-based requests
	OCIScheme = "oci"

	// CredentialsFileBasename is the filename for auth credentials file
	CredentialsFileBasename = "config.json"

	// ConfigMediaType is the reserved media type for the Helm chart manifest config
	ConfigMediaType = "application/vnd.cncf.helm.config.v1+json"

	// ChartLayerMediaType is the reserved media type for Helm chart package content
	ChartLayerMediaType = "application/vnd.cncf.helm.chart.content.v1.tar+gzip"

	// ProvLayerMediaType is the reserved media type for Helm chart provenance files
	ProvLayerMediaType = "application/vnd.cncf.helm.chart.provenance.v1.prov"
)

type Chart struct {
	cfg v1alpha1.Chart

	resolver remotes.Resolver
}

var defaultKnownImagePaths = []string{
	// Deployments & DaemonSets
	"{.spec.template.spec.initContainers[*].image}",
	"{.spec.template.spec.containers[*].image}",

	// Pods
	"{.spec.initContainers[*].image}",
	"{.spec.containers[*].image}",
}

func NewChart(cfg v1alpha1.Chart) Chart {
	return Chart{
		cfg: cfg,

		// TODO:
		resolver: docker.NewResolver(docker.ResolverOptions{}),
	}
}

func (c Chart) Copy(ctx context.Context, registry string) error {
	var (
		s = content.NewMemoryStore()
		l = log.FromContext(ctx)

		contentDescriptors []ocispec.Descriptor
	)

	chartdata, err := fetch(c.cfg.Name, c.cfg.RepoURL, c.cfg.Version)
	if err != nil {
		return err
	}

	ch, err := loader.LoadArchive(bytes.NewBuffer(chartdata))
	if err != nil {
		return err
	}

	chartDescriptor := s.Add("", ChartLayerMediaType, chartdata)
	contentDescriptors = append(contentDescriptors, chartDescriptor)

	configData, _ := json.Marshal(ch.Metadata)
	configDescriptor := s.Add("", ConfigMediaType, configData)

	// TODO: Clean this up
	ref, err := name.ParseReference(fmt.Sprintf("hauler/%s:%s", c.cfg.Name, c.cfg.Version), name.WithDefaultRegistry(registry))
	if err != nil {
		return err
	}

	l.Infof("Copying chart to: '%s'", ref.Name())
	pushedDesc, err := oras.Push(ctx, c.resolver, ref.Name(), s, contentDescriptors,
		oras.WithConfig(configDescriptor), oras.WithNameValidation(nil))
	if err != nil {
		return err
	}

	l.Debugf("Copied with descriptor: '%s'", pushedDesc.Digest.String())
	return nil
}

func fetch(name, repo, version string) ([]byte, error) {
	cpo := action.ChartPathOptions{
		RepoURL: repo,
		Version: version,
	}

	cp, err := cpo.LocateChart(name, cli.New())
	if err != nil {
		return nil, err
	}

	data, err := os.ReadFile(cp)
	if err != nil {
		return nil, err
	}

	return data, nil
}

func parseJSONPath(data interface{}, parser *jsonpath.JSONPath, template string) ([]string, error) {
	buf := new(bytes.Buffer)
	if err := parser.Parse(template); err != nil {
		return nil, err
	}

	if err := parser.Execute(buf, data); err != nil {
		return nil, err
	}

	f := func(s rune) bool { return s == ' ' }
	r := strings.FieldsFunc(buf.String(), f)
	return r, nil
}
95  pkg/content/chart/dependents.go  Normal file
@@ -0,0 +1,95 @@
package chart

import (
	"bytes"
	"encoding/json"
	"io"

	"github.com/rancher/wrangler/pkg/yaml"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/chartutil"
	"helm.sh/helm/v3/pkg/kube/fake"
	"helm.sh/helm/v3/pkg/storage"
	"helm.sh/helm/v3/pkg/storage/driver"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/util/jsonpath"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)

// ImagesInChart will render a chart and identify all dependent images from it
func ImagesInChart(c *chart.Chart) (v1alpha1.Images, error) {
	objs, err := template(c)
	if err != nil {
		return v1alpha1.Images{}, err
	}

	var imageRefs []string
	for _, o := range objs {
		d, err := o.(*unstructured.Unstructured).MarshalJSON()
		if err != nil {
			// TODO: Should we actually capture these errors?
			continue
		}

		var obj interface{}
		if err := json.Unmarshal(d, &obj); err != nil {
			continue
		}

		j := jsonpath.New("")
		j.AllowMissingKeys(true)

		for _, p := range defaultKnownImagePaths {
			r, err := parseJSONPath(obj, j, p)
			if err != nil {
				continue
			}

			imageRefs = append(imageRefs, r...)
		}
	}

	ims := v1alpha1.Images{
		Spec: v1alpha1.ImageSpec{
			Images: []v1alpha1.Image{},
		},
	}

	for _, ref := range imageRefs {
		ims.Spec.Images = append(ims.Spec.Images, v1alpha1.Image{Ref: ref})
	}
	return ims, nil
}

func template(c *chart.Chart) ([]runtime.Object, error) {
	s := storage.Init(driver.NewMemory())

	templateCfg := &action.Configuration{
		RESTClientGetter: nil,
		Releases:         s,
		KubeClient:       &fake.PrintingKubeClient{Out: io.Discard},
		Capabilities:     chartutil.DefaultCapabilities,
		Log:              func(format string, v ...interface{}) {},
	}

	// TODO: Do we need values if we're claiming this is best effort image detection?
	// Justification being: if users are relying on us to get images from their values, they could just add images to the []ImagesInChart spec of the Store api
	vals := make(map[string]interface{})

	client := action.NewInstall(templateCfg)
	client.ReleaseName = "dry"
	client.DryRun = true
	client.Replace = true
	client.ClientOnly = true
	client.IncludeCRDs = true

	release, err := client.Run(c, vals)
	if err != nil {
		return nil, err
	}

	return yaml.ToObjects(bytes.NewBufferString(release.Manifest))
}
116  pkg/content/content.go  Normal file
@@ -0,0 +1,116 @@
package content

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/yaml"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)

type Oci interface {
	// Copy relocates content to an OCI compliant registry given a name.Reference
	Copy(ctx context.Context, registry string) error
}

func ValidateType(data []byte) (metav1.TypeMeta, error) {
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		return metav1.TypeMeta{}, err
	}

	if tm.GroupVersionKind().GroupVersion() != v1alpha1.GroupVersion {
		return metav1.TypeMeta{}, fmt.Errorf("%s is not a registered content type", tm.GroupVersionKind().String())
	}

	return tm, nil
}

// // NewFromBytes returns a new Oci object from content bytes
// func NewFromBytes(data []byte) (Oci, error) {
// 	var tm metav1.TypeMeta
// 	if err := yaml.Unmarshal(data, &tm); err != nil {
// 		return nil, err
// 	}
//
// 	if tm.GroupVersionKind().GroupVersion() != v1alpha1.GroupVersion {
// 		return nil, fmt.Errorf("%s is not an understood content type", tm.GroupVersionKind().String())
// 	}
//
// 	var oci Oci
//
// 	switch tm.Kind {
// 	case v1alpha1.FilesContentKind:
// 		var cfg v1alpha1.Files
// 		err := yaml.Unmarshal(data, &cfg)
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		oci = file.New(cfg.Spec.Files[0].Name, cfg.Spec.Files[0].Ref)
//
// 	case v1alpha1.ImagesContentKind:
// 		var cfg v1alpha1.Images
// 		err := yaml.Unmarshal(data, &cfg)
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		oci, err = image.New(cfg.Spec.Images[0].Ref)
//
// 	case v1alpha1.ChartsContentKind:
// 		var cfg v1alpha1.Charts
// 		err := yaml.Unmarshal(data, &cfg)
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		oci = chart.New(cfg.Spec.Charts[0].Name, cfg.Spec.Charts[0].RepoURL, cfg.Spec.Charts[0].Version)
//
// 	case v1alpha1.DriverContentKind:
// 		var cfg v1alpha1.Driver
// 		err := yaml.Unmarshal(data, &cfg)
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		return nil, fmt.Errorf("%s is still a wip", tm.GroupVersionKind().String())
//
// 	default:
// 		return nil, fmt.Errorf("%s is not an understood content type", tm.GroupVersionKind().String())
// 	}
//
// 	return oci, nil
// }
//
// // NewFromFile is a helper function around NewFromBytes to load new content given a filename
// func NewFromFile(filename string) ([]Oci, error) {
// 	fi, err := os.Open(filename)
// 	if err != nil {
// 		return nil, err
// 	}
//
// 	reader := yaml.NewYAMLReader(bufio.NewReader(fi))
//
// 	var contents []Oci
// 	for {
// 		raw, err := reader.Read()
// 		if err == io.EOF {
// 			break
// 		}
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		o, err := NewFromBytes(raw)
// 		if err != nil {
// 			return nil, err
// 		}
//
// 		contents = append(contents, o)
// 	}
//
// 	return contents, err
// }
128
pkg/content/driver/dependencies.go
Normal file
@@ -0,0 +1,128 @@
package driver

import (
	"bufio"
	"fmt"
	"net/http"
	"net/url"
	"path"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
)

type dependencies struct {
	files  v1alpha1.Files
	images v1alpha1.Images
}

// TODO: support multi-arch with variadic options
func newDependencies(kind string, version string) (dependencies, error) {
	files := v1alpha1.Files{
		TypeMeta: metav1.TypeMeta{
			APIVersion: v1alpha1.GroupVersion.String(),
			Kind:       v1alpha1.FilesContentKind,
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: kind,
		},
		Spec: v1alpha1.FileSpec{
			Files: []v1alpha1.File{},
		},
	}

	images := v1alpha1.Images{
		TypeMeta: metav1.TypeMeta{
			APIVersion: v1alpha1.GroupVersion.String(),
			Kind:       v1alpha1.ImagesContentKind,
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: kind,
		},
		Spec: v1alpha1.ImageSpec{
			Images: []v1alpha1.Image{},
		},
	}

	var imgs, fls []string = nil, nil

	switch kind {
	case "rke2":
		releaseUrl := "https://github.com/rancher/rke2/releases/download"
		u, _ := url.Parse(releaseUrl)
		u.Path = path.Join(u.Path, version, "rke2-images-all.linux-amd64.txt")

		rke2Images, err := parseImageList(u.String())
		if err != nil {
			return dependencies{}, err
		}

		imgs = rke2Images

		fls = []string{
			path.Join(releaseUrl, version, "rke2.linux-amd64"),
		}

	case "k3s":
		r := release("https://github.com/k3s-io/k3s/releases/download")

		k3sImages, err := parseImageList(r.Join(version, "k3s-images.txt"))
		if err != nil {
			return dependencies{}, err
		}

		imgs = k3sImages

		fls = []string{
			r.Join(version, "k3s"),
		}

	default:
		return dependencies{}, fmt.Errorf("%s is not a valid driver kind", kind)
	}

	for _, fi := range fls {
		f := v1alpha1.File{
			Ref: fi,
		}

		files.Spec.Files = append(files.Spec.Files, f)
	}

	for _, i := range imgs {
		img := v1alpha1.Image{Ref: i}
		images.Spec.Images = append(images.Spec.Images, img)
	}

	return dependencies{
		files:  files,
		images: images,
	}, nil
}

// parseImageList is a helper function to fetch and parse k3s/rke2 release image lists
func parseImageList(ref string) ([]string, error) {
	resp, err := http.Get(ref)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var imgs []string
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		imgs = append(imgs, scanner.Text())
	}

	return imgs, nil
}

type release string

func (r release) Join(component ...string) string {
	u, _ := url.Parse(string(r))
	complete := []string{u.Path}
	u.Path = path.Join(append(complete, component...)...)
	return u.String()
}
53
pkg/content/driver/k3s.go
Normal file
@@ -0,0 +1,53 @@
package driver

import (
	"context"

	"github.com/rancherfederal/hauler/pkg/content/file"
	"github.com/rancherfederal/hauler/pkg/content/image"
)

type K3s struct {
	Files  []file.File
	Images []image.Image
}

func NewK3s(version string) (*K3s, error) {
	bom, err := newDependencies("k3s", version)
	if err != nil {
		return nil, err
	}

	var files []file.File
	for _, f := range bom.files.Spec.Files {
		fi := file.NewFile(f)
		files = append(files, fi)
	}

	var images []image.Image
	for _, i := range bom.images.Spec.Images {
		img := image.NewImage(i)
		images = append(images, img)
	}

	return &K3s{
		Files:  files,
		Images: images,
	}, nil
}

func (k *K3s) Copy(ctx context.Context, registry string) error {
	for _, f := range k.Files {
		if err := f.Copy(ctx, registry); err != nil {
			return err
		}
	}

	for _, i := range k.Images {
		if err := i.Copy(ctx, registry); err != nil {
			return err
		}
	}

	return nil
}
41
pkg/content/driver/k3s_test.go
Normal file
@@ -0,0 +1,41 @@
package driver

import (
	"reflect"
	"testing"
)

func Test_newBom(t *testing.T) {
	type args struct {
		kind    string
		version string
	}
	tests := []struct {
		name    string
		args    args
		want    dependencies
		wantErr bool
	}{
		{
			name: "bleh",
			args: args{
				kind:    "k3s",
				version: "v1.22.2+k3s2",
			},
			want:    dependencies{},
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := newDependencies(tt.args.kind, tt.args.version)
			if (err != nil) != tt.wantErr {
				t.Errorf("newDependencies() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("newDependencies() got = %v, want %v", got, tt.want)
			}
		})
	}
}
11
pkg/content/driver/rke2.go
Normal file
@@ -0,0 +1,11 @@
package driver

import (
	"github.com/rancherfederal/hauler/pkg/content/file"
	"github.com/rancherfederal/hauler/pkg/content/image"
)

type Rke2 struct {
	Files  []file.File
	Images []image.Image
}
182
pkg/content/file/file.go
Normal file
@@ -0,0 +1,182 @@
package file

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"path"
	"path/filepath"

	"github.com/containerd/containerd/remotes/docker"
	"github.com/google/go-containerregistry/pkg/name"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	orascontent "oras.land/oras-go/pkg/content"
	"oras.land/oras-go/pkg/oras"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/log"
)

const (
	LayerMediaType = "application/vnd.hauler.cattle.io-artifact"
)

type File struct {
	name string
	raw  string

	cfg v1alpha1.File

	content getter
}

func NewFile(cfg v1alpha1.File) File {
	u, err := url.Parse(cfg.Ref)
	if err != nil {
		return File{content: local(cfg.Ref)}
	}

	var g getter
	switch u.Scheme {
	case "http", "https":
		g = https{u}

	default:
		g = local(cfg.Ref)
	}

	return File{
		cfg:     cfg,
		content: g,
	}
}

func (f *File) Ref(opts ...name.Option) (name.Reference, error) {
	cname := f.content.name()
	if f.name != "" {
		cname = f.name
	}

	if cname == "" {
		return nil, fmt.Errorf("cannot identify name from %s", f.raw)
	}

	return name.ParseReference(cname, opts...)
}

func (f *File) Repo() string {
	cname := f.content.name()
	if f.name != "" {
		cname = f.name
	}
	return path.Join("hauler", cname)
}

func (f File) Copy(ctx context.Context, registry string) error {
	l := log.FromContext(ctx)

	resolver := docker.NewResolver(docker.ResolverOptions{})

	// TODO: Should use a hybrid store that can mock out filenames
	fs := orascontent.NewMemoryStore()
	data, err := f.content.load()
	if err != nil {
		return err
	}

	cname := f.content.name()
	if f.name != "" {
		cname = f.name
	}

	desc := fs.Add(cname, f.content.mediaType(), data)

	ref, err := name.ParseReference(path.Join("hauler", cname), name.WithDefaultRegistry(registry))
	if err != nil {
		return err
	}

	l.Infof("Copying file to: %s", ref.Name())
	pushedDesc, err := oras.Push(ctx, resolver, ref.Name(), fs, []ocispec.Descriptor{desc})
	if err != nil {
		return err
	}

	l.Debugf("Copied with descriptor: %s", pushedDesc.Digest.String())
	return nil
}

type getter interface {
	load() ([]byte, error)
	name() string
	mediaType() string
}

type local string

func (f local) load() ([]byte, error) {
	fi, err := os.Stat(string(f))
	if err != nil {
		return nil, err
	}

	var data []byte
	if fi.IsDir() {
		data = []byte("")
	} else {
		data, err = os.ReadFile(string(f))
		if err != nil {
			return nil, err
		}
	}

	return data, nil
}

func (f local) name() string {
	return filepath.Base(string(f))
}

func (f local) mediaType() string {
	return "some-media-type"
}

type https struct {
	url *url.URL
}

// TODO: Support auth
func (f https) load() ([]byte, error) {
	resp, err := http.Get(f.url.String())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	return data, nil
}

func (f https) name() string {
	resp, err := http.Get(f.url.String())
	if err != nil {
		return ""
	}

	switch resp.Header {

	default:
		return path.Base(f.url.String())
	}
}

func (f https) mediaType() string {
	return "some-remote-media-type"
}
63
pkg/content/file/file_test.go
Normal file
@@ -0,0 +1,63 @@
package file

// func TestNewFile(t *testing.T) {
// 	ctx := context.Background()
//
// 	tmpdir, err := os.MkdirTemp("", "hauler")
// 	if err != nil {
// 		t.Error(err)
// 	}
// 	defer os.Remove(tmpdir)
//
// 	// Make some temp files
// 	f, err := os.CreateTemp(tmpdir, "tmp")
// 	f.Write([]byte("content"))
// 	defer f.Close()
//
// 	c, err := cache.NewBoltDB(tmpdir, "cache")
// 	if err != nil {
// 		t.Fatal(err)
// 	}
// 	_ = c
//
// 	s := rstore.NewStore(ctx, tmpdir)
// 	s.Start()
// 	defer s.Stop()
//
// 	type args struct {
// 		cfg  v1alpha1.File
// 		root string
// 	}
// 	tests := []struct {
// 		name    string
// 		args    args
// 		want    *File
// 		wantErr bool
// 	}{
// 		{
// 			name: "should work",
// 			args: args{
// 				root: tmpdir,
// 				cfg: v1alpha1.File{
// 					Name: "myfile",
// 				},
// 			},
// 			want:    nil,
// 			wantErr: false,
// 		},
// 	}
// 	for _, tt := range tests {
// 		t.Run(tt.name, func(t *testing.T) {
// 			got := New(tt.args.cfg, tt.args.root)
//
// 			ref, _ := name.ParseReference(path.Join("hauler", tt.args.cfg.Name))
// 			if err := s.Add(ctx, got, ref); err != nil {
// 				t.Error(err)
// 			}
//
// 			if !reflect.DeepEqual(got, tt.want) {
// 				t.Errorf("New() got = %v, want %v", got, tt.want)
// 			}
// 		})
// 	}
// }
58
pkg/content/image/image.go
Normal file
@@ -0,0 +1,58 @@
package image

import (
	"context"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/log"
	"github.com/rancherfederal/hauler/pkg/store"
)

type Image struct {
	cfg v1alpha1.Image
}

func NewImage(cfg v1alpha1.Image) Image {
	return Image{
		cfg: cfg,
	}
}

func (i Image) Ref(opts ...name.Option) (name.Reference, error) {
	return name.ParseReference(i.cfg.Ref, opts...)
}

// Repo returns the repository component of the image
func (i Image) Repo() string {
	ref, _ := name.ParseReference(i.cfg.Ref)
	return ref.Context().RepositoryStr()
}

func (i Image) Copy(ctx context.Context, registry string) error {
	l := log.FromContext(ctx)

	srcRef, err := name.ParseReference(i.cfg.Ref)
	if err != nil {
		return err
	}

	img, err := remote.Image(srcRef)
	if err != nil {
		return err
	}

	dstRef, err := store.RelocateReference(srcRef, registry)
	if err != nil {
		return err
	}

	l.Infof("Copying image to: '%s'", dstRef.Name())
	if err := remote.Write(dstRef, img, remote.WithContext(ctx)); err != nil {
		return err
	}

	return nil
}
@@ -1,49 +0,0 @@
package driver

import (
	"context"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"io"
	"sigs.k8s.io/cli-utils/pkg/object"
)

type Driver interface {
	Name() string

	//TODO: Really want this to just return a usable client
	KubeConfigPath() string

	Images(ctx context.Context) (map[name.Reference]v1.Image, error)

	Binary() (io.ReadCloser, error)

	SystemObjects() []object.ObjMetadata

	Start(io.Writer) error

	DataPath(...string) string

	WriteConfig() error
}

//NewDriver will return a new concrete Driver type given a kind
func NewDriver(driver v1alpha1.Driver) (d Driver) {
	switch driver.Type {
	case "rke2":
		// TODO
	default:
		d = K3s{
			Version: driver.Version,
			Config: K3sConfig{
				DataDir:        "/var/lib/rancher/k3s",
				KubeConfig:     "/etc/rancher/k3s/k3s.yaml",
				KubeConfigMode: "0644",
				Disable:        nil,
			},
		}
	}

	return
}
@@ -1,872 +0,0 @@
#!/bin/sh
set -e
set -o noglob

# Usage:
#   curl ... | ENV_VAR=... sh -
#       or
#   ENV_VAR=... ./install.sh
#
# Example:
#   Installing a server without traefik:
#     curl ... | INSTALL_K3S_EXEC="--disable=traefik" sh -
#   Installing an agent to point at a server:
#     curl ... | K3S_TOKEN=xxx K3S_URL=https://server-url:6443 sh -
#
# Environment variables:
#   - K3S_*
#     Environment variables which begin with K3S_ will be preserved for the
#     systemd service to use. Setting K3S_URL without explicitly setting
#     a systemd exec command will default the command to "agent", and we
#     enforce that K3S_TOKEN or K3S_CLUSTER_SECRET is also set.
#
#   - INSTALL_K3S_SKIP_DOWNLOAD
#     If set to true will not download k3s hash or binary.
#
#   - INSTALL_K3S_FORCE_RESTART
#     If set to true will always restart the K3s service
#
#   - INSTALL_K3S_SYMLINK
#     If set to 'skip' will not create symlinks, 'force' will overwrite,
#     default will symlink if command does not exist in path.
#
#   - INSTALL_K3S_SKIP_ENABLE
#     If set to true will not enable or start k3s service.
#
#   - INSTALL_K3S_SKIP_START
#     If set to true will not start k3s service.
#
#   - INSTALL_K3S_VERSION
#     Version of k3s to download from github. Will attempt to download from the
#     stable channel if not specified.
#
#   - INSTALL_K3S_COMMIT
#     Commit of k3s to download from temporary cloud storage.
#     * (for developer & QA use)
#
#   - INSTALL_K3S_BIN_DIR
#     Directory to install k3s binary, links, and uninstall script to, or use
#     /usr/local/bin as the default
#
#   - INSTALL_K3S_BIN_DIR_READ_ONLY
#     If set to true will not write files to INSTALL_K3S_BIN_DIR, forces
#     setting INSTALL_K3S_SKIP_DOWNLOAD=true
#
#   - INSTALL_K3S_SYSTEMD_DIR
#     Directory to install systemd service and environment files to, or use
#     /etc/systemd/system as the default
#
#   - INSTALL_K3S_EXEC or script arguments
#     Command with flags to use for launching k3s in the systemd service, if
#     the command is not specified will default to "agent" if K3S_URL is set
#     or "server" if not. The final systemd command resolves to a combination
#     of EXEC and script args ($@).
#
#     The following commands result in the same behavior:
#       curl ... | INSTALL_K3S_EXEC="--disable=traefik" sh -s -
#       curl ... | INSTALL_K3S_EXEC="server --disable=traefik" sh -s -
#       curl ... | INSTALL_K3S_EXEC="server" sh -s - --disable=traefik
#       curl ... | sh -s - server --disable=traefik
#       curl ... | sh -s - --disable=traefik
#
#   - INSTALL_K3S_NAME
#     Name of systemd service to create, will default from the k3s exec command
#     if not specified. If specified the name will be prefixed with 'k3s-'.
#
#   - INSTALL_K3S_TYPE
#     Type of systemd service to create, will default from the k3s exec command
#     if not specified.
#
#   - INSTALL_K3S_SELINUX_WARN
#     If set to true will continue if k3s-selinux policy is not found.
#
#   - INSTALL_K3S_SKIP_SELINUX_RPM
#     If set to true will skip automatic installation of the k3s RPM.
#
#   - INSTALL_K3S_CHANNEL_URL
#     Channel URL for fetching k3s download URL.
#     Defaults to 'https://update.k3s.io/v1-release/channels'.
#
#   - INSTALL_K3S_CHANNEL
#     Channel to use for fetching k3s download URL.
#     Defaults to 'stable'.
GITHUB_URL=https://github.com/k3s-io/k3s/releases
STORAGE_URL=https://storage.googleapis.com/k3s-ci-builds
DOWNLOADER=

# --- helper functions for logs ---
info()
{
    echo '[INFO] ' "$@"
}
warn()
{
    echo '[WARN] ' "$@" >&2
}
fatal()
{
    echo '[ERROR] ' "$@" >&2
    exit 1
}

# --- fatal if no systemd or openrc ---
verify_system() {
    if [ -x /sbin/openrc-run ]; then
        HAS_OPENRC=true
        return
    fi
    if [ -d /run/systemd ]; then
        HAS_SYSTEMD=true
        return
    fi
    fatal 'Can not find systemd or openrc to use as a process supervisor for k3s'
}

# --- add quotes to command arguments ---
quote() {
    for arg in "$@"; do
        printf '%s\n' "$arg" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/'/"
    done
}

# --- add indentation and trailing slash to quoted args ---
quote_indent() {
    printf ' \\\n'
    for arg in "$@"; do
        printf '\t%s \\\n' "$(quote "$arg")"
    done
}

# --- escape most punctuation characters, except quotes, forward slash, and space ---
escape() {
    printf '%s' "$@" | sed -e 's/\([][!#$%&()*;<=>?\_`{|}]\)/\\\1/g;'
}

# --- escape double quotes ---
escape_dq() {
    printf '%s' "$@" | sed -e 's/"/\\"/g'
}

# --- ensures $K3S_URL is empty or begins with https://, exiting fatally otherwise ---
verify_k3s_url() {
    case "${K3S_URL}" in
        "")
            ;;
        https://*)
            ;;
        *)
            fatal "Only https:// URLs are supported for K3S_URL (have ${K3S_URL})"
            ;;
    esac
}

# --- define needed environment variables ---
setup_env() {
    # --- use command args if passed or create default ---
    case "$1" in
        # --- if we only have flags discover if command should be server or agent ---
        (-*|"")
            if [ -z "${K3S_URL}" ]; then
                CMD_K3S=server
            else
                if [ -z "${K3S_TOKEN}" ] && [ -z "${K3S_TOKEN_FILE}" ] && [ -z "${K3S_CLUSTER_SECRET}" ]; then
                    fatal "Defaulted k3s exec command to 'agent' because K3S_URL is defined, but K3S_TOKEN, K3S_TOKEN_FILE or K3S_CLUSTER_SECRET is not defined."
                fi
                CMD_K3S=agent
            fi
        ;;
        # --- command is provided ---
        (*)
            CMD_K3S=$1
            shift
        ;;
    esac

    verify_k3s_url

    CMD_K3S_EXEC="${CMD_K3S}$(quote_indent "$@")"

    # --- use systemd name if defined or create default ---
    if [ -n "${INSTALL_K3S_NAME}" ]; then
        SYSTEM_NAME=k3s-${INSTALL_K3S_NAME}
    else
        if [ "${CMD_K3S}" = server ]; then
            SYSTEM_NAME=k3s
        else
            SYSTEM_NAME=k3s-${CMD_K3S}
        fi
    fi

    # --- check for invalid characters in system name ---
    valid_chars=$(printf '%s' "${SYSTEM_NAME}" | sed -e 's/[][!#$%&()*;<=>?\_`{|}/[:space:]]/^/g;' )
    if [ "${SYSTEM_NAME}" != "${valid_chars}" ]; then
        invalid_chars=$(printf '%s' "${valid_chars}" | sed -e 's/[^^]/ /g')
        fatal "Invalid characters for system name:
            ${SYSTEM_NAME}
            ${invalid_chars}"
    fi

    # --- use sudo if we are not already root ---
    SUDO=sudo
    if [ $(id -u) -eq 0 ]; then
        SUDO=
    fi

    # --- use systemd type if defined or create default ---
    if [ -n "${INSTALL_K3S_TYPE}" ]; then
        SYSTEMD_TYPE=${INSTALL_K3S_TYPE}
    else
        if [ "${CMD_K3S}" = server ]; then
            SYSTEMD_TYPE=notify
        else
            SYSTEMD_TYPE=exec
        fi
    fi

    # --- use binary install directory if defined or create default ---
    if [ -n "${INSTALL_K3S_BIN_DIR}" ]; then
        BIN_DIR=${INSTALL_K3S_BIN_DIR}
    else
        # --- use /usr/local/bin if root can write to it, otherwise use /opt/bin if it exists
        BIN_DIR=/usr/local/bin
        if ! $SUDO sh -c "touch ${BIN_DIR}/k3s-ro-test && rm -rf ${BIN_DIR}/k3s-ro-test"; then
            if [ -d /opt/bin ]; then
                BIN_DIR=/opt/bin
            fi
        fi
    fi

    # --- use systemd directory if defined or create default ---
    if [ -n "${INSTALL_K3S_SYSTEMD_DIR}" ]; then
        SYSTEMD_DIR="${INSTALL_K3S_SYSTEMD_DIR}"
    else
        SYSTEMD_DIR=/etc/systemd/system
    fi

    # --- set related files from system name ---
    SERVICE_K3S=${SYSTEM_NAME}.service
    UNINSTALL_K3S_SH=${UNINSTALL_K3S_SH:-${BIN_DIR}/${SYSTEM_NAME}-uninstall.sh}
    KILLALL_K3S_SH=${KILLALL_K3S_SH:-${BIN_DIR}/k3s-killall.sh}

    # --- use service or environment location depending on systemd/openrc ---
    if [ "${HAS_SYSTEMD}" = true ]; then
        FILE_K3S_SERVICE=${SYSTEMD_DIR}/${SERVICE_K3S}
        FILE_K3S_ENV=${SYSTEMD_DIR}/${SERVICE_K3S}.env
    elif [ "${HAS_OPENRC}" = true ]; then
        $SUDO mkdir -p /etc/rancher/k3s
        FILE_K3S_SERVICE=/etc/init.d/${SYSTEM_NAME}
        FILE_K3S_ENV=/etc/rancher/k3s/${SYSTEM_NAME}.env
    fi

    # --- get hash of config & exec for currently installed k3s ---
    PRE_INSTALL_HASHES=$(get_installed_hashes)

    # --- if bin directory is read only skip download ---
    if [ "${INSTALL_K3S_BIN_DIR_READ_ONLY}" = true ]; then
        INSTALL_K3S_SKIP_DOWNLOAD=true
    fi

    # --- setup channel values
    INSTALL_K3S_CHANNEL_URL=${INSTALL_K3S_CHANNEL_URL:-'https://update.k3s.io/v1-release/channels'}
    INSTALL_K3S_CHANNEL=${INSTALL_K3S_CHANNEL:-'stable'}
}
# --- check if skip download environment variable set ---
|
||||
can_skip_download() {
|
||||
if [ "${INSTALL_K3S_SKIP_DOWNLOAD}" != true ]; then
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# --- verify an executable k3s binary is installed ---
|
||||
verify_k3s_is_executable() {
|
||||
if [ ! -x ${BIN_DIR}/k3s ]; then
|
||||
fatal "Executable k3s binary not found at ${BIN_DIR}/k3s"
|
||||
fi
|
||||
}
|
||||
|
||||
# --- set arch and suffix, fatal if architecture not supported ---
|
||||
setup_verify_arch() {
|
||||
if [ -z "$ARCH" ]; then
|
||||
ARCH=$(uname -m)
|
||||
fi
|
||||
case $ARCH in
|
||||
amd64)
|
||||
ARCH=amd64
|
||||
SUFFIX=
|
||||
;;
|
||||
x86_64)
|
||||
ARCH=amd64
|
||||
SUFFIX=
|
||||
;;
|
||||
arm64)
|
||||
ARCH=arm64
|
||||
SUFFIX=-${ARCH}
|
||||
;;
|
||||
aarch64)
|
||||
ARCH=arm64
|
||||
SUFFIX=-${ARCH}
|
||||
;;
|
||||
arm*)
|
||||
ARCH=arm
|
||||
SUFFIX=-${ARCH}hf
|
||||
;;
|
||||
*)
|
||||
fatal "Unsupported architecture $ARCH"
|
||||
esac
|
||||
}
|
||||
|
||||
# --- verify existence of network downloader executable ---
|
||||
verify_downloader() {
|
||||
# Return failure if it doesn't exist or is no executable
|
||||
[ -x "$(command -v $1)" ] || return 1
|
||||
|
||||
# Set verified executable as our downloader program and return success
|
||||
DOWNLOADER=$1
|
||||
return 0
|
||||
}
|
||||
|
||||
# --- create temporary directory and cleanup when done ---
|
||||
setup_tmp() {
|
||||
TMP_DIR=$(mktemp -d -t k3s-install.XXXXXXXXXX)
|
||||
TMP_HASH=${TMP_DIR}/k3s.hash
|
||||
TMP_BIN=${TMP_DIR}/k3s.bin
|
||||
cleanup() {
|
||||
code=$?
|
||||
set +e
|
||||
trap - EXIT
|
||||
rm -rf ${TMP_DIR}
|
||||
exit $code
|
||||
}
|
||||
trap cleanup INT EXIT
|
||||
}
|
||||
|
||||
# --- use desired k3s version if defined or find version from channel ---
|
||||
get_release_version() {
|
||||
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
|
||||
VERSION_K3S="commit ${INSTALL_K3S_COMMIT}"
|
||||
elif [ -n "${INSTALL_K3S_VERSION}" ]; then
|
||||
VERSION_K3S=${INSTALL_K3S_VERSION}
|
||||
else
|
||||
info "Finding release for channel ${INSTALL_K3S_CHANNEL}"
|
||||
version_url="${INSTALL_K3S_CHANNEL_URL}/${INSTALL_K3S_CHANNEL}"
|
||||
case $DOWNLOADER in
|
||||
curl)
|
||||
VERSION_K3S=$(curl -w '%{url_effective}' -L -s -S ${version_url} -o /dev/null | sed -e 's|.*/||')
|
||||
;;
|
||||
wget)
|
||||
VERSION_K3S=$(wget -SqO /dev/null ${version_url} 2>&1 | grep -i Location | sed -e 's|.*/||')
|
||||
;;
|
||||
*)
|
||||
fatal "Incorrect downloader executable '$DOWNLOADER'"
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
info "Using ${VERSION_K3S} as release"
|
||||
}
|
||||
|
||||
# --- download from github url ---
|
||||
download() {
|
||||
[ $# -eq 2 ] || fatal 'download needs exactly 2 arguments'
|
||||
|
||||
case $DOWNLOADER in
|
||||
curl)
|
||||
curl -o $1 -sfL $2
|
||||
;;
|
||||
wget)
|
||||
wget -qO $1 $2
|
||||
;;
|
||||
*)
|
||||
fatal "Incorrect executable '$DOWNLOADER'"
|
||||
;;
|
||||
esac
|
||||
|
||||
# Abort if download command failed
|
||||
[ $? -eq 0 ] || fatal 'Download failed'
|
||||
}
|
||||
|
||||
# --- download hash from github url ---
|
||||
download_hash() {
|
||||
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
|
||||
HASH_URL=${STORAGE_URL}/k3s${SUFFIX}-${INSTALL_K3S_COMMIT}.sha256sum
|
||||
else
|
||||
HASH_URL=${GITHUB_URL}/download/${VERSION_K3S}/sha256sum-${ARCH}.txt
|
||||
fi
|
||||
info "Downloading hash ${HASH_URL}"
|
||||
download ${TMP_HASH} ${HASH_URL}
|
||||
HASH_EXPECTED=$(grep " k3s${SUFFIX}$" ${TMP_HASH})
|
||||
HASH_EXPECTED=${HASH_EXPECTED%%[[:blank:]]*}
|
||||
}
|
||||
|
||||
# --- check hash against installed version ---
|
||||
installed_hash_matches() {
|
||||
if [ -x ${BIN_DIR}/k3s ]; then
|
||||
HASH_INSTALLED=$(sha256sum ${BIN_DIR}/k3s)
|
||||
HASH_INSTALLED=${HASH_INSTALLED%%[[:blank:]]*}
|
||||
if [ "${HASH_EXPECTED}" = "${HASH_INSTALLED}" ]; then
|
||||
return
|
||||
fi
|
||||
fi
|
||||
return 1
|
||||
}
|
||||
|
||||
# --- download binary from github url ---
|
||||
download_binary() {
|
||||
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
|
||||
BIN_URL=${STORAGE_URL}/k3s${SUFFIX}-${INSTALL_K3S_COMMIT}
|
||||
else
|
||||
BIN_URL=${GITHUB_URL}/download/${VERSION_K3S}/k3s${SUFFIX}
|
||||
fi
|
||||
info "Downloading binary ${BIN_URL}"
|
||||
download ${TMP_BIN} ${BIN_URL}
|
||||
}
|
||||
|
||||
# --- verify downloaded binary hash ---
|
||||
verify_binary() {
|
||||
info "Verifying binary download"
|
||||
HASH_BIN=$(sha256sum ${TMP_BIN})
|
||||
HASH_BIN=${HASH_BIN%%[[:blank:]]*}
|
||||
if [ "${HASH_EXPECTED}" != "${HASH_BIN}" ]; then
|
||||
fatal "Download sha256 does not match ${HASH_EXPECTED}, got ${HASH_BIN}"
|
||||
fi
|
||||
}
|
||||
|
||||
# --- setup permissions and move binary to system directory ---
|
||||
setup_binary() {
|
||||
chmod 755 ${TMP_BIN}
|
||||
info "Installing k3s to ${BIN_DIR}/k3s"
|
||||
$SUDO chown root:root ${TMP_BIN}
|
||||
$SUDO mv -f ${TMP_BIN} ${BIN_DIR}/k3s
|
||||
}

# --- setup selinux policy ---
setup_selinux() {
    case ${INSTALL_K3S_CHANNEL} in
    *testing)
        rpm_channel=testing
        ;;
    *latest)
        rpm_channel=latest
        ;;
    *)
        rpm_channel=stable
        ;;
    esac

    rpm_site="rpm.rancher.io"
    if [ "${rpm_channel}" = "testing" ]; then
        rpm_site="rpm-testing.rancher.io"
    fi

    policy_hint="please install:
    yum install -y container-selinux selinux-policy-base
    yum install -y https://${rpm_site}/k3s/${rpm_channel}/common/centos/7/noarch/k3s-selinux-0.2-1.el7_8.noarch.rpm
"
    policy_error=fatal
    if [ "$INSTALL_K3S_SELINUX_WARN" = true ] || grep -q 'ID=flatcar' /etc/os-release; then
        policy_error=warn
    fi

    if [ "$INSTALL_K3S_SKIP_SELINUX_RPM" = true ] || can_skip_download; then
        info "Skipping installation of SELinux RPM"
    else
        install_selinux_rpm ${rpm_site} ${rpm_channel}
    fi

    if ! $SUDO chcon -u system_u -r object_r -t container_runtime_exec_t ${BIN_DIR}/k3s >/dev/null 2>&1; then
        if $SUDO grep '^\s*SELINUX=enforcing' /etc/selinux/config >/dev/null 2>&1; then
            $policy_error "Failed to apply container_runtime_exec_t to ${BIN_DIR}/k3s, ${policy_hint}"
        fi
    else
        if [ ! -f /usr/share/selinux/packages/k3s.pp ]; then
            $policy_error "Failed to find the k3s-selinux policy, ${policy_hint}"
        fi
    fi
}

# --- if on an el7/el8 system, install k3s-selinux ---
install_selinux_rpm() {
    if [ -r /etc/redhat-release ] || [ -r /etc/centos-release ] || [ -r /etc/oracle-release ]; then
        dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
        maj_ver=$(echo "$dist_version" | sed -E -e "s/^([0-9]+)\.?[0-9]*$/\1/")
        set +o noglob
        $SUDO rm -f /etc/yum.repos.d/rancher-k3s-common*.repo
        set -o noglob
        if [ -r /etc/redhat-release ]; then
            case ${maj_ver} in
            7)
                $SUDO yum -y install yum-utils
                $SUDO yum-config-manager --enable rhel-7-server-extras-rpms
                ;;
            8)
                :
                ;;
            *)
                return
                ;;
            esac
        fi
        $SUDO tee /etc/yum.repos.d/rancher-k3s-common.repo >/dev/null << EOF
[rancher-k3s-common-${2}]
name=Rancher K3s Common (${2})
baseurl=https://${1}/k3s/${2}/common/centos/${maj_ver}/noarch
enabled=1
gpgcheck=1
gpgkey=https://${1}/public.key
EOF
        $SUDO yum -y install "k3s-selinux"
    fi
    return
}

# --- download and verify k3s ---
download_and_verify() {
    if can_skip_download; then
        info 'Skipping k3s download and verify'
        verify_k3s_is_executable
        return
    fi

    setup_verify_arch
    verify_downloader curl || verify_downloader wget || fatal 'Can not find curl or wget for downloading files'
    setup_tmp
    get_release_version
    download_hash

    if installed_hash_matches; then
        info 'Skipping binary download, installed k3s matches hash'
        return
    fi

    download_binary
    verify_binary
    setup_binary
}

# --- add additional utility links ---
create_symlinks() {
    [ "${INSTALL_K3S_BIN_DIR_READ_ONLY}" = true ] && return
    [ "${INSTALL_K3S_SYMLINK}" = skip ] && return

    for cmd in kubectl crictl ctr; do
        if [ ! -e ${BIN_DIR}/${cmd} ] || [ "${INSTALL_K3S_SYMLINK}" = force ]; then
            which_cmd=$(command -v ${cmd} 2>/dev/null || true)
            if [ -z "${which_cmd}" ] || [ "${INSTALL_K3S_SYMLINK}" = force ]; then
                info "Creating ${BIN_DIR}/${cmd} symlink to k3s"
                $SUDO ln -sf k3s ${BIN_DIR}/${cmd}
            else
                info "Skipping ${BIN_DIR}/${cmd} symlink to k3s, command exists in PATH at ${which_cmd}"
            fi
        else
            info "Skipping ${BIN_DIR}/${cmd} symlink to k3s, already exists"
        fi
    done
}

# --- create killall script ---
create_killall() {
    [ "${INSTALL_K3S_BIN_DIR_READ_ONLY}" = true ] && return
    info "Creating killall script ${KILLALL_K3S_SH}"
    $SUDO tee ${KILLALL_K3S_SH} >/dev/null << \EOF
#!/bin/sh
[ $(id -u) -eq 0 ] || exec sudo $0 $@

for bin in /var/lib/rancher/k3s/data/**/bin/; do
    [ -d $bin ] && export PATH=$PATH:$bin:$bin/aux
done

set -x

for service in /etc/systemd/system/k3s*.service; do
    [ -s $service ] && systemctl stop $(basename $service)
done

for service in /etc/init.d/k3s*; do
    [ -x $service ] && $service stop
done

pschildren() {
    ps -e -o ppid= -o pid= | \
    sed -e 's/^\s*//g; s/\s\s*/\t/g;' | \
    grep -w "^$1" | \
    cut -f2
}

pstree() {
    for pid in $@; do
        echo $pid
        for child in $(pschildren $pid); do
            pstree $child
        done
    done
}

killtree() {
    kill -9 $(
        { set +x; } 2>/dev/null;
        pstree $@;
        set -x;
    ) 2>/dev/null
}

getshims() {
    ps -e -o pid= -o args= | sed -e 's/^ *//; s/\s\s*/\t/;' | grep -w 'k3s/data/[^/]*/bin/containerd-shim' | cut -f1
}

killtree $({ set +x; } 2>/dev/null; getshims; set -x)

do_unmount_and_remove() {
    awk -v path="$1" '$2 ~ ("^" path) { print $2 }' /proc/self/mounts | sort -r | xargs -r -t -n 1 sh -c 'umount "$0" && rm -rf "$0"'
}

do_unmount_and_remove '/run/k3s'
do_unmount_and_remove '/var/lib/rancher/k3s'
do_unmount_and_remove '/var/lib/kubelet/pods'
do_unmount_and_remove '/var/lib/kubelet/plugins'
do_unmount_and_remove '/run/netns/cni-'

# Remove CNI namespaces
ip netns show 2>/dev/null | grep cni- | xargs -r -t -n 1 ip netns delete

# Delete network interface(s) that match 'master cni0'
ip link show 2>/dev/null | grep 'master cni0' | while read ignore iface ignore; do
    iface=${iface%%@*}
    [ -z "$iface" ] || ip link delete $iface
done
ip link delete cni0
ip link delete flannel.1
rm -rf /var/lib/cni/
iptables-save | grep -v KUBE- | grep -v CNI- | iptables-restore
EOF
    $SUDO chmod 755 ${KILLALL_K3S_SH}
    $SUDO chown root:root ${KILLALL_K3S_SH}
}

# --- create uninstall script ---
create_uninstall() {
    [ "${INSTALL_K3S_BIN_DIR_READ_ONLY}" = true ] && return
    info "Creating uninstall script ${UNINSTALL_K3S_SH}"
    $SUDO tee ${UNINSTALL_K3S_SH} >/dev/null << EOF
#!/bin/sh
set -x
[ \$(id -u) -eq 0 ] || exec sudo \$0 \$@

${KILLALL_K3S_SH}

if command -v systemctl; then
    systemctl disable ${SYSTEM_NAME}
    systemctl reset-failed ${SYSTEM_NAME}
    systemctl daemon-reload
fi
if command -v rc-update; then
    rc-update delete ${SYSTEM_NAME} default
fi

rm -f ${FILE_K3S_SERVICE}
rm -f ${FILE_K3S_ENV}

remove_uninstall() {
    rm -f ${UNINSTALL_K3S_SH}
}
trap remove_uninstall EXIT

if (ls ${SYSTEMD_DIR}/k3s*.service || ls /etc/init.d/k3s*) >/dev/null 2>&1; then
    set +x; echo 'Additional k3s services installed, skipping uninstall of k3s'; set -x
    exit
fi

for cmd in kubectl crictl ctr; do
    if [ -L ${BIN_DIR}/\$cmd ]; then
        rm -f ${BIN_DIR}/\$cmd
    fi
done

rm -rf /etc/rancher/k3s
rm -rf /run/k3s
rm -rf /run/flannel
rm -rf /var/lib/rancher/k3s
rm -rf /var/lib/kubelet
rm -f ${BIN_DIR}/k3s
rm -f ${KILLALL_K3S_SH}

if type yum >/dev/null 2>&1; then
    yum remove -y k3s-selinux
    rm -f /etc/yum.repos.d/rancher-k3s-common*.repo
fi
EOF
    $SUDO chmod 755 ${UNINSTALL_K3S_SH}
    $SUDO chown root:root ${UNINSTALL_K3S_SH}
}

# --- disable current service if loaded ---
systemd_disable() {
    $SUDO systemctl disable ${SYSTEM_NAME} >/dev/null 2>&1 || true
    $SUDO rm -f /etc/systemd/system/${SERVICE_K3S} || true
    $SUDO rm -f /etc/systemd/system/${SERVICE_K3S}.env || true
}

# --- capture current env and create file containing k3s_ variables ---
create_env_file() {
    info "env: Creating environment file ${FILE_K3S_ENV}"
    $SUDO touch ${FILE_K3S_ENV}
    $SUDO chmod 0600 ${FILE_K3S_ENV}
    env | grep '^K3S_' | $SUDO tee ${FILE_K3S_ENV} >/dev/null
    env | grep -Ei '^(NO|HTTP|HTTPS)_PROXY' | $SUDO tee -a ${FILE_K3S_ENV} >/dev/null
}

# --- write systemd service file ---
create_systemd_service_file() {
    info "systemd: Creating service file ${FILE_K3S_SERVICE}"
    $SUDO tee ${FILE_K3S_SERVICE} >/dev/null << EOF
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=${SYSTEMD_TYPE}
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-${FILE_K3S_ENV}
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=${BIN_DIR}/k3s \\
    ${CMD_K3S_EXEC}

EOF
}

# --- write openrc service file ---
create_openrc_service_file() {
    LOG_FILE=/var/log/${SYSTEM_NAME}.log

    info "openrc: Creating service file ${FILE_K3S_SERVICE}"
    $SUDO tee ${FILE_K3S_SERVICE} >/dev/null << EOF
#!/sbin/openrc-run

depend() {
    after network-online
    want cgroups
}

start_pre() {
    rm -f /tmp/k3s.*
}

supervisor=supervise-daemon
name=${SYSTEM_NAME}
command="${BIN_DIR}/k3s"
command_args="$(escape_dq "${CMD_K3S_EXEC}")
    >>${LOG_FILE} 2>&1"

output_log=${LOG_FILE}
error_log=${LOG_FILE}

pidfile="/var/run/${SYSTEM_NAME}.pid"
respawn_delay=5
respawn_max=0

set -o allexport
if [ -f /etc/environment ]; then source /etc/environment; fi
if [ -f ${FILE_K3S_ENV} ]; then source ${FILE_K3S_ENV}; fi
set +o allexport
EOF
    $SUDO chmod 0755 ${FILE_K3S_SERVICE}

    $SUDO tee /etc/logrotate.d/${SYSTEM_NAME} >/dev/null << EOF
${LOG_FILE} {
	missingok
	notifempty
	copytruncate
}
EOF
}

# --- write systemd or openrc service file ---
create_service_file() {
    [ "${HAS_SYSTEMD}" = true ] && create_systemd_service_file
    [ "${HAS_OPENRC}" = true ] && create_openrc_service_file
    return 0
}

# --- get hashes of the current k3s bin and service files
get_installed_hashes() {
    $SUDO sha256sum ${BIN_DIR}/k3s ${FILE_K3S_SERVICE} ${FILE_K3S_ENV} 2>&1 || true
}

# --- enable and start systemd service ---
systemd_enable() {
    info "systemd: Enabling ${SYSTEM_NAME} unit"
    $SUDO systemctl enable ${FILE_K3S_SERVICE} >/dev/null
    $SUDO systemctl daemon-reload >/dev/null
}

systemd_start() {
    info "systemd: Starting ${SYSTEM_NAME}"
    $SUDO systemctl restart ${SYSTEM_NAME}
}

# --- enable and start openrc service ---
openrc_enable() {
    info "openrc: Enabling ${SYSTEM_NAME} service for default runlevel"
    $SUDO rc-update add ${SYSTEM_NAME} default >/dev/null
}

openrc_start() {
    info "openrc: Starting ${SYSTEM_NAME}"
    $SUDO ${FILE_K3S_SERVICE} restart
}

# --- startup systemd or openrc service ---
service_enable_and_start() {
    [ "${INSTALL_K3S_SKIP_ENABLE}" = true ] && return

    [ "${HAS_SYSTEMD}" = true ] && systemd_enable
    [ "${HAS_OPENRC}" = true ] && openrc_enable

    [ "${INSTALL_K3S_SKIP_START}" = true ] && return

    POST_INSTALL_HASHES=$(get_installed_hashes)
    if [ "${PRE_INSTALL_HASHES}" = "${POST_INSTALL_HASHES}" ] && [ "${INSTALL_K3S_FORCE_RESTART}" != true ]; then
        info 'No change detected so skipping service start'
        return
    fi

    [ "${HAS_SYSTEMD}" = true ] && systemd_start
    [ "${HAS_OPENRC}" = true ] && openrc_start
    return 0
}

# --- re-evaluate args to include env command ---
eval set -- $(escape "${INSTALL_K3S_EXEC}") $(quote "$@")

# --- run the install process ---
{
    verify_system
    setup_env "$@"
    download_and_verify
    setup_selinux
    create_symlinks
    create_killall
    create_uninstall
    systemd_disable
    create_env_file
    create_service_file
    service_enable_and_start
}
@@ -1,507 +0,0 @@
#!/bin/sh

set -e

if [ "${DEBUG}" = 1 ]; then
    set -x
fi

# Usage:
#   curl ... | ENV_VAR=... sh -
#       or
#   ENV_VAR=... ./install.sh
#

# Environment variables:
#
# - INSTALL_RKE2_CHANNEL
#   Channel to use for fetching rke2 download URL.
#   Defaults to 'stable'.
#
# - INSTALL_RKE2_METHOD
#   The installation method to use.
#   Default is "rpm" on RPM-based systems, "tar" on all others.
#
# - INSTALL_RKE2_TYPE
#   Type of rke2 service. Can be either "server" or "agent".
#   Default is "server".
#
# - INSTALL_RKE2_EXEC
#   This is an alias for INSTALL_RKE2_TYPE, included for compatibility with K3s.
#   If both are set, INSTALL_RKE2_TYPE is preferred.
#
# - INSTALL_RKE2_VERSION
#   Version of rke2 to download from github.
#
# - INSTALL_RKE2_RPM_RELEASE_VERSION
#   Version of the rke2 RPM release to install.
#   Format would be like "1.el7" or "2.el8"
#
# - INSTALL_RKE2_TAR_PREFIX
#   Installation prefix when using the tar installation method.
#   Default is /usr/local, unless /usr/local is read-only or has a dedicated mount point,
#   in which case /opt/rke2 is used instead.
#
# - INSTALL_RKE2_COMMIT
#   Commit of RKE2 to download from temporary cloud storage.
#   If set, this forces INSTALL_RKE2_METHOD=tar.
#   * (for developer & QA use)
#
# - INSTALL_RKE2_AGENT_IMAGES_DIR
#   Installation path for airgap images when installing from CI commit.
#   Default is /var/lib/rancher/rke2/agent/images
#
# - INSTALL_RKE2_ARTIFACT_PATH
#   If set, the install script will use the local path for sourcing the rke2.linux-$SUFFIX and sha256sum-$ARCH.txt files
#   rather than downloading the files from the internet.
#   Default is not set.
#
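# Example invocations (values are illustrative, not defaults; the hosted
# script at get.rke2.io is the usual remote entry point):
#
#   curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=stable INSTALL_RKE2_TYPE=agent sh -
#
#   INSTALL_RKE2_ARTIFACT_PATH=/tmp/rke2-artifacts INSTALL_RKE2_METHOD=tar ./install.sh
#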

# info logs the given argument at info log level.
info() {
    echo "[INFO] " "$@"
}

# warn logs the given argument at warn log level.
warn() {
    echo "[WARN] " "$@" >&2
}

# fatal logs the given argument at fatal log level.
fatal() {
    echo "[ERROR] " "$@" >&2
    if [ -n "${SUFFIX}" ]; then
        echo "[ALT] Please visit 'https://github.com/rancher/rke2/releases' directly and download the latest rke2.${SUFFIX}.tar.gz" >&2
    fi
    exit 1
}

# check_target_mountpoint returns success if the target directory is on a dedicated mount point
check_target_mountpoint() {
    mountpoint -q "${INSTALL_RKE2_TAR_PREFIX}"
}

# check_target_ro returns success if the target directory is read-only
check_target_ro() {
    touch "${INSTALL_RKE2_TAR_PREFIX}"/.rke2-ro-test && rm -rf "${INSTALL_RKE2_TAR_PREFIX}"/.rke2-ro-test
    test $? -ne 0
}

# setup_env defines needed environment variables.
setup_env() {
    STORAGE_URL="https://storage.googleapis.com/rke2-ci-builds"
    INSTALL_RKE2_GITHUB_URL="https://github.com/rancher/rke2"
    DEFAULT_TAR_PREFIX="/usr/local"
    # --- bail if we are not root ---
    if [ ! $(id -u) -eq 0 ]; then
        fatal "You need to be root to perform this install"
    fi

    # --- make sure install channel has a value
    if [ -z "${INSTALL_RKE2_CHANNEL}" ]; then
        INSTALL_RKE2_CHANNEL="stable"
    fi

    # --- make sure install type has a value
    if [ -z "${INSTALL_RKE2_TYPE}" ]; then
        INSTALL_RKE2_TYPE="${INSTALL_RKE2_EXEC:-server}"
    fi

    # --- use yum install method if available by default
    if [ -z "${INSTALL_RKE2_ARTIFACT_PATH}" ] && [ -z "${INSTALL_RKE2_COMMIT}" ] && [ -z "${INSTALL_RKE2_METHOD}" ] && command -v yum >/dev/null 2>&1; then
        INSTALL_RKE2_METHOD="yum"
    fi

    # --- install tarball to /usr/local by default, except if /usr/local is on a separate partition or is read-only
    # --- in which case we go into /opt/rke2.
    if [ -z "${INSTALL_RKE2_TAR_PREFIX}" ]; then
        INSTALL_RKE2_TAR_PREFIX=${DEFAULT_TAR_PREFIX}
        if check_target_mountpoint || check_target_ro; then
            INSTALL_RKE2_TAR_PREFIX="/opt/rke2"
            warn "${DEFAULT_TAR_PREFIX} is read-only or a mount point; installing to ${INSTALL_RKE2_TAR_PREFIX}"
        fi
    fi

    if [ -z "${INSTALL_RKE2_AGENT_IMAGES_DIR}" ]; then
        INSTALL_RKE2_AGENT_IMAGES_DIR="/var/lib/rancher/rke2/agent/images"
    fi
}

# check_method_conflict will exit with an error if the user attempts to install
# via tar method on a host with existing RPMs.
check_method_conflict() {
    case ${INSTALL_RKE2_METHOD} in
    yum | rpm | dnf)
        return
        ;;
    *)
        if rpm -q rke2-common >/dev/null 2>&1; then
            fatal "Cannot perform ${INSTALL_RKE2_METHOD:-tar} install on host with existing RKE2 RPMs - please run rke2-uninstall.sh first"
        fi
        ;;
    esac
}

# setup_arch sets arch and suffix;
# fatal if architecture not supported.
setup_arch() {
    case ${ARCH:=$(uname -m)} in
    amd64)
        ARCH=amd64
        SUFFIX=$(uname -s | tr '[:upper:]' '[:lower:]')-${ARCH}
        ;;
    x86_64)
        ARCH=amd64
        SUFFIX=$(uname -s | tr '[:upper:]' '[:lower:]')-${ARCH}
        ;;
    *)
        fatal "unsupported architecture ${ARCH}"
        ;;
    esac
}

# verify_downloader verifies existence of
# network downloader executable.
verify_downloader() {
    cmd="$(command -v "${1}")"
    if [ -z "${cmd}" ]; then
        return 1
    fi
    if [ ! -x "${cmd}" ]; then
        return 1
    fi

    # Set verified executable as our downloader program and return success
    DOWNLOADER=${cmd}
    return 0
}

# setup_tmp creates a temporary directory
# and cleans up when done.
setup_tmp() {
    TMP_DIR=$(mktemp -d -t rke2-install.XXXXXXXXXX)
    TMP_CHECKSUMS=${TMP_DIR}/rke2.checksums
    TMP_TARBALL=${TMP_DIR}/rke2.tarball
    TMP_AIRGAP_CHECKSUMS=${TMP_DIR}/rke2-images.checksums
    TMP_AIRGAP_TARBALL=${TMP_DIR}/rke2-images.tarball
    cleanup() {
        code=$?
        set +e
        trap - EXIT
        rm -rf "${TMP_DIR}"
        exit $code
    }
    trap cleanup INT EXIT
}

# --- use desired rke2 version if defined or find version from channel ---
get_release_version() {
    if [ -n "${INSTALL_RKE2_COMMIT}" ]; then
        version="commit ${INSTALL_RKE2_COMMIT}"
    elif [ -n "${INSTALL_RKE2_VERSION}" ]; then
        version=${INSTALL_RKE2_VERSION}
    else
        info "finding release for channel ${INSTALL_RKE2_CHANNEL}"
        INSTALL_RKE2_CHANNEL_URL=${INSTALL_RKE2_CHANNEL_URL:-'https://update.rke2.io/v1-release/channels'}
        version_url="${INSTALL_RKE2_CHANNEL_URL}/${INSTALL_RKE2_CHANNEL}"
        case ${DOWNLOADER} in
        *curl)
            version=$(${DOWNLOADER} -w "%{url_effective}" -L -s -S "${version_url}" -o /dev/null | sed -e 's|.*/||')
            ;;
        *wget)
            version=$(${DOWNLOADER} -SqO /dev/null "${version_url}" 2>&1 | grep -i Location | sed -e 's|.*/||')
            ;;
        *)
            fatal "Unsupported downloader executable '${DOWNLOADER}'"
            ;;
        esac
        INSTALL_RKE2_VERSION="${version}"
    fi
}

# check_download performs a HEAD request to see if a file exists at a given url
check_download() {
    case ${DOWNLOADER} in
    *curl)
        curl -o "/dev/null" -fsLI -X HEAD "$1"
        ;;
    *wget)
        wget -q --spider "$1"
        ;;
    *)
        fatal "downloader executable not supported: '${DOWNLOADER}'"
        ;;
    esac
}

# download downloads a file from a url using either curl or wget
download() {
    if [ $# -ne 2 ]; then
        fatal "download needs exactly 2 arguments"
    fi

    case ${DOWNLOADER} in
    *curl)
        curl -o "$1" -fsSL "$2"
        ;;
    *wget)
        wget -qO "$1" "$2"
        ;;
    *)
        fatal "downloader executable not supported: '${DOWNLOADER}'"
        ;;
    esac

    # Abort if download command failed
    if [ $? -ne 0 ]; then
        fatal "download failed"
    fi
}

# download_checksums downloads hash from github url.
download_checksums() {
    if [ -n "${INSTALL_RKE2_COMMIT}" ]; then
        CHECKSUMS_URL=${STORAGE_URL}/rke2.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.gz.sha256sum
    else
        CHECKSUMS_URL=${INSTALL_RKE2_GITHUB_URL}/releases/download/${INSTALL_RKE2_VERSION}/sha256sum-${ARCH}.txt
    fi
    info "downloading checksums at ${CHECKSUMS_URL}"
    download "${TMP_CHECKSUMS}" "${CHECKSUMS_URL}"
    CHECKSUM_EXPECTED=$(grep "rke2.${SUFFIX}.tar.gz" "${TMP_CHECKSUMS}" | awk '{print $1}')
}

# download_tarball downloads binary from github url.
download_tarball() {
    if [ -n "${INSTALL_RKE2_COMMIT}" ]; then
        TARBALL_URL=${STORAGE_URL}/rke2.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.gz
    else
        TARBALL_URL=${INSTALL_RKE2_GITHUB_URL}/releases/download/${INSTALL_RKE2_VERSION}/rke2.${SUFFIX}.tar.gz
    fi
    info "downloading tarball at ${TARBALL_URL}"
    download "${TMP_TARBALL}" "${TARBALL_URL}"
}

# stage_local_checksums stages the local checksum hash for validation.
stage_local_checksums() {
    info "staging local checksums from ${INSTALL_RKE2_ARTIFACT_PATH}/sha256sum-${ARCH}.txt"
    cp -f "${INSTALL_RKE2_ARTIFACT_PATH}/sha256sum-${ARCH}.txt" "${TMP_CHECKSUMS}"
    CHECKSUM_EXPECTED=$(grep "rke2.${SUFFIX}.tar.gz" "${TMP_CHECKSUMS}" | awk '{print $1}')
    if [ -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.zst" ]; then
        AIRGAP_CHECKSUM_EXPECTED=$(grep "rke2-images.${SUFFIX}.tar.zst" "${TMP_CHECKSUMS}" | awk '{print $1}')
    elif [ -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.gz" ]; then
        AIRGAP_CHECKSUM_EXPECTED=$(grep "rke2-images.${SUFFIX}.tar.gz" "${TMP_CHECKSUMS}" | awk '{print $1}')
    fi
}

# stage_local_tarball stages the local tarball.
stage_local_tarball() {
    info "staging tarball from ${INSTALL_RKE2_ARTIFACT_PATH}/rke2.${SUFFIX}.tar.gz"
    cp -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2.${SUFFIX}.tar.gz" "${TMP_TARBALL}"
}

# stage_local_airgap_tarball stages the local airgap image tarball.
stage_local_airgap_tarball() {
    if [ -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.zst" ]; then
        info "staging zst airgap image tarball from ${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.zst"
        cp -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.zst" "${TMP_AIRGAP_TARBALL}"
        AIRGAP_TARBALL_FORMAT=zst
    elif [ -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.gz" ]; then
        info "staging gzip airgap image tarball from ${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.gz"
        cp -f "${INSTALL_RKE2_ARTIFACT_PATH}/rke2-images.${SUFFIX}.tar.gz" "${TMP_AIRGAP_TARBALL}"
        AIRGAP_TARBALL_FORMAT=gz
    fi
}

# verify_tarball verifies the downloaded installer checksum.
verify_tarball() {
    info "verifying tarball"
    CHECKSUM_ACTUAL=$(sha256sum "${TMP_TARBALL}" | awk '{print $1}')
    if [ "${CHECKSUM_EXPECTED}" != "${CHECKSUM_ACTUAL}" ]; then
        fatal "download sha256 does not match ${CHECKSUM_EXPECTED}, got ${CHECKSUM_ACTUAL}"
    fi
}

# unpack_tarball extracts the tarball, correcting paths and moving systemd units as necessary
unpack_tarball() {
    info "unpacking tarball file to ${INSTALL_RKE2_TAR_PREFIX}"
    mkdir -p ${INSTALL_RKE2_TAR_PREFIX}
    tar xzf "${TMP_TARBALL}" -C "${INSTALL_RKE2_TAR_PREFIX}"
    if [ "${INSTALL_RKE2_TAR_PREFIX}" != "${DEFAULT_TAR_PREFIX}" ]; then
        info "updating tarball contents to reflect install path"
        sed -i "s|${DEFAULT_TAR_PREFIX}|${INSTALL_RKE2_TAR_PREFIX}|" ${INSTALL_RKE2_TAR_PREFIX}/lib/systemd/system/rke2-*.service ${INSTALL_RKE2_TAR_PREFIX}/bin/rke2-uninstall.sh
        info "moving systemd units to /etc/systemd/system"
        mv -f ${INSTALL_RKE2_TAR_PREFIX}/lib/systemd/system/rke2-*.service /etc/systemd/system/
        info "install complete; you may want to run: export PATH=\$PATH:${INSTALL_RKE2_TAR_PREFIX}/bin"
    fi
}

# download_airgap_checksums downloads the checksum file for the airgap image tarball
# and prepares the checksum value for later validation.
download_airgap_checksums() {
    if [ -z "${INSTALL_RKE2_COMMIT}" ]; then
        return
    fi
    AIRGAP_CHECKSUMS_URL=${STORAGE_URL}/rke2-images.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.zst.sha256sum
    # try for zst first; if that fails use gz for older release branches
    if ! check_download "${AIRGAP_CHECKSUMS_URL}"; then
        AIRGAP_CHECKSUMS_URL=${STORAGE_URL}/rke2-images.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.gz.sha256sum
    fi
    info "downloading airgap checksums at ${AIRGAP_CHECKSUMS_URL}"
    download "${TMP_AIRGAP_CHECKSUMS}" "${AIRGAP_CHECKSUMS_URL}"
    AIRGAP_CHECKSUM_EXPECTED=$(grep "rke2-images.${SUFFIX}.tar" "${TMP_AIRGAP_CHECKSUMS}" | awk '{print $1}')
}

# download_airgap_tarball downloads the airgap image tarball.
download_airgap_tarball() {
    if [ -z "${INSTALL_RKE2_COMMIT}" ]; then
        return
    fi
    AIRGAP_TARBALL_URL=${STORAGE_URL}/rke2-images.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.zst
    # try for zst first; if that fails use gz for older release branches
    if ! check_download "${AIRGAP_TARBALL_URL}"; then
        AIRGAP_TARBALL_URL=${STORAGE_URL}/rke2-images.${SUFFIX}-${INSTALL_RKE2_COMMIT}.tar.gz
    fi
    info "downloading airgap tarball at ${AIRGAP_TARBALL_URL}"
    download "${TMP_AIRGAP_TARBALL}" "${AIRGAP_TARBALL_URL}"
}

# verify_airgap_tarball compares the airgap image tarball checksum to the value
# calculated by CI when the file was uploaded.
verify_airgap_tarball() {
    if [ -z "${AIRGAP_CHECKSUM_EXPECTED}" ]; then
        return
    fi
    info "verifying airgap tarball"
    AIRGAP_CHECKSUM_ACTUAL=$(sha256sum "${TMP_AIRGAP_TARBALL}" | awk '{print $1}')
    if [ "${AIRGAP_CHECKSUM_EXPECTED}" != "${AIRGAP_CHECKSUM_ACTUAL}" ]; then
        fatal "download sha256 does not match ${AIRGAP_CHECKSUM_EXPECTED}, got ${AIRGAP_CHECKSUM_ACTUAL}"
    fi
}

# install_airgap_tarball moves the airgap image tarball into place.
install_airgap_tarball() {
    if [ -z "${AIRGAP_CHECKSUM_EXPECTED}" ]; then
        return
    fi
    mkdir -p "${INSTALL_RKE2_AGENT_IMAGES_DIR}"
    # releases that provide zst artifacts can read from the compressed archive; older releases
    # that produce only gzip artifacts need to have the tarball decompressed ahead of time
    if grep -qF '.tar.zst' "${TMP_AIRGAP_CHECKSUMS}" || [ "${AIRGAP_TARBALL_FORMAT}" = "zst" ]; then
        info "installing airgap tarball to ${INSTALL_RKE2_AGENT_IMAGES_DIR}"
        mv -f "${TMP_AIRGAP_TARBALL}" "${INSTALL_RKE2_AGENT_IMAGES_DIR}/rke2-images.${SUFFIX}.tar.zst"
    else
        info "decompressing airgap tarball to ${INSTALL_RKE2_AGENT_IMAGES_DIR}"
        gzip -dc "${TMP_AIRGAP_TARBALL}" > "${INSTALL_RKE2_AGENT_IMAGES_DIR}/rke2-images.${SUFFIX}.tar"
    fi
}

# do_install_rpm builds a yum repo config from the channel and version to be installed,
# and calls yum to install the required packages.
do_install_rpm() {
    maj_ver="7"
    if [ -r /etc/redhat-release ] || [ -r /etc/centos-release ] || [ -r /etc/oracle-release ]; then
        dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
        maj_ver=$(echo "$dist_version" | sed -E -e "s/^([0-9]+)\.?[0-9]*$/\1/")
        case ${maj_ver} in
        7|8)
            :
            ;;
        *) # In certain cases, like installing on Fedora, maj_ver will end up being something that is not 7 or 8
            maj_ver="7"
            ;;
        esac
    fi
    case "${INSTALL_RKE2_CHANNEL}" in
    v*.*)
        # We are operating with a version-based channel, so we should parse our version out
        rke2_majmin=$(echo "${INSTALL_RKE2_CHANNEL}" | sed -E -e "s/^v([0-9]+\.[0-9]+).*/\1/")
        rke2_rpm_channel=$(echo "${INSTALL_RKE2_CHANNEL}" | sed -E -e "s/^v[0-9]+\.[0-9]+-(.*)/\1/")
        # If our regex fails to capture a "sane" channel out of the specified channel, fall back to `stable`
        if [ "${rke2_rpm_channel}" = "${INSTALL_RKE2_CHANNEL}" ]; then
            info "using stable RPM repositories"
            rke2_rpm_channel="stable"
        fi
        ;;
    *)
        get_release_version
        rke2_majmin=$(echo "${INSTALL_RKE2_VERSION}" | sed -E -e "s/^v([0-9]+\.[0-9]+).*/\1/")
        rke2_rpm_channel=${1}
        ;;
    esac
    info "using ${rke2_majmin} series from channel ${rke2_rpm_channel}"
    rpm_site="rpm.rancher.io"
    if [ "${rke2_rpm_channel}" = "testing" ]; then
        rpm_site="rpm-${rke2_rpm_channel}.rancher.io"
    fi
    rm -f /etc/yum.repos.d/rancher-rke2*.repo
    cat <<-EOF >"/etc/yum.repos.d/rancher-rke2.repo"
[rancher-rke2-common-${rke2_rpm_channel}]
name=Rancher RKE2 Common (${1})
baseurl=https://${rpm_site}/rke2/${rke2_rpm_channel}/common/centos/${maj_ver}/noarch
enabled=1
gpgcheck=1
gpgkey=https://${rpm_site}/public.key
[rancher-rke2-${rke2_majmin}-${rke2_rpm_channel}]
name=Rancher RKE2 ${rke2_majmin} (${1})
baseurl=https://${rpm_site}/rke2/${rke2_rpm_channel}/${rke2_majmin}/centos/${maj_ver}/x86_64
enabled=1
gpgcheck=1
gpgkey=https://${rpm_site}/public.key
EOF
    if [ -z "${INSTALL_RKE2_VERSION}" ]; then
        yum -y install "rke2-${INSTALL_RKE2_TYPE}"
    else
        rke2_rpm_version=$(echo "${INSTALL_RKE2_VERSION}" | sed -E -e "s/[\+-]/~/g" | sed -E -e "s/v(.*)/\1/")
        if [ -n "${INSTALL_RKE2_RPM_RELEASE_VERSION}" ]; then
            yum -y install "rke2-${INSTALL_RKE2_TYPE}-${rke2_rpm_version}-${INSTALL_RKE2_RPM_RELEASE_VERSION}"
        else
            yum -y install "rke2-${INSTALL_RKE2_TYPE}-${rke2_rpm_version}"
        fi
    fi
}

do_install_tar() {
    setup_tmp

    if [ -n "${INSTALL_RKE2_ARTIFACT_PATH}" ]; then
        stage_local_checksums
        stage_local_airgap_tarball
        stage_local_tarball
    else
        get_release_version
        info "using ${INSTALL_RKE2_VERSION:-commit $INSTALL_RKE2_COMMIT} as release"
        download_airgap_checksums
        download_airgap_tarball
        download_checksums
        download_tarball
    fi

    verify_airgap_tarball
    install_airgap_tarball
    verify_tarball
    unpack_tarball
    systemctl daemon-reload
}

do_install() {
    setup_env
    check_method_conflict
    setup_arch
    if [ -z "${INSTALL_RKE2_ARTIFACT_PATH}" ]; then
        verify_downloader curl || verify_downloader wget || fatal "can not find curl or wget for downloading files"
    fi

    case ${INSTALL_RKE2_METHOD} in
    yum | rpm | dnf)
        do_install_rpm "${INSTALL_RKE2_CHANNEL}"
        ;;
    *)
        do_install_tar "${INSTALL_RKE2_CHANNEL}"
        ;;
    esac
}

do_install
exit 0
@@ -1,173 +0,0 @@
package driver

import (
	"bufio"
	"context"
	_ "embed"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/imdario/mergo"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/cli-utils/pkg/object"
	"sigs.k8s.io/yaml"

	"github.com/rancherfederal/hauler/pkg/packager/images"
)

const (
	k3sReleaseUrl = "https://github.com/k3s-io/k3s/releases/download"
)

//go:embed embed/k3s-init.sh
var k3sInit string

type K3s struct {
	Version string

	Config K3sConfig
}

// TODO: Would be nice if these just pointed to k3s/pkg/cli/cmds
type K3sConfig struct {
	DataDir        string `json:"data-dir,omitempty"`
	KubeConfig     string `json:"write-kubeconfig,omitempty"`
	KubeConfigMode string `json:"write-kubeconfig-mode,omitempty"`

	Disable []string `json:"disable,omitempty"`
}

// NewK3s returns a new k3s driver
func NewK3s() K3s {
	// TODO: Allow for configuration overrides
	return K3s{
		Config: K3sConfig{
			DataDir:        "/var/lib/rancher/k3s",
			KubeConfig:     "/etc/rancher/k3s/k3s.yaml",
			KubeConfigMode: "0644",
			Disable:        []string{},
		},
	}
}

func (k K3s) Name() string { return "k3s" }

func (k K3s) KubeConfigPath() string { return k.Config.KubeConfig }

func (k K3s) DataPath(elem ...string) string {
	base := []string{k.Config.DataDir}
	return filepath.Join(append(base, elem...)...)
}

func (k K3s) WriteConfig() error {
	kCfgPath := filepath.Dir(k.Config.KubeConfig)
	if err := os.MkdirAll(kCfgPath, os.ModePerm); err != nil {
		return err
	}

	data, err := yaml.Marshal(k.Config)
	if err != nil {
		return err
	}

	c := make(map[string]interface{})
	if err := yaml.Unmarshal(data, &c); err != nil {
		return err
	}

	var uc map[string]interface{}
	path := filepath.Join(kCfgPath, "config.yaml")
	if data, err := os.ReadFile(path); err == nil {
		if err := yaml.Unmarshal(data, &uc); err != nil {
			return err
		}
	}

	// Merge with user defined configs taking precedence
	if err := mergo.Merge(&c, uc); err != nil {
		return err
	}

	mergedData, err := yaml.Marshal(&c)
	if err != nil {
		return err
	}

	return os.WriteFile(path, mergedData, 0644)
}

func (k K3s) Images(ctx context.Context) (map[name.Reference]v1.Image, error) {
	imgs, err := k.listImages()
	if err != nil {
		return nil, err
	}
	return images.ResolveRemoteRefs(imgs...)
}

func (k K3s) Binary() (io.ReadCloser, error) {
	u, err := url.Parse(fmt.Sprintf("%s/%s/%s", k3sReleaseUrl, k.Version, k.Name()))
	if err != nil {
		return nil, err
	}

	resp, err := http.Get(u.String())
	if err != nil || resp.StatusCode != 200 {
		return nil, fmt.Errorf("failed to return executable for k3s %s from %s", k.Version, u.String())
	}
	return resp.Body, nil
}

// SystemObjects returns a slice of object.ObjMetadata required for the driver to be functional and accept new resources.
// hauler's bootstrapping sequence will always wait for SystemObjects to be in a Ready status before proceeding.
func (k K3s) SystemObjects() (objs []object.ObjMetadata) {
	for _, dep := range []string{"coredns"} {
		objMeta, _ := object.CreateObjMetadata("kube-system", dep, schema.GroupKind{Kind: "Deployment", Group: "apps"})
		objs = append(objs, objMeta)
	}
	return objs
}

func (k K3s) Start(out io.Writer) error {
	if err := os.WriteFile("/opt/hauler/bin/k3s-init.sh", []byte(k3sInit), 0755); err != nil {
		return err
	}

	cmd := exec.Command("/bin/sh", "/opt/hauler/bin/k3s-init.sh")

	cmd.Env = append(os.Environ(), []string{
		"INSTALL_K3S_SKIP_DOWNLOAD=true",
		"INSTALL_K3S_SELINUX_WARN=true",
		"INSTALL_K3S_SKIP_SELINUX_RPM=true",
		"INSTALL_K3S_BIN_DIR=/opt/hauler/bin",

		// TODO: Provide a real dryrun option
		//"INSTALL_K3S_SKIP_START=true",
	}...)

	cmd.Stdout = out
	return cmd.Run()
}

func (k K3s) listImages() ([]string, error) {
	u, err := url.Parse(fmt.Sprintf("%s/%s/k3s-images.txt", k3sReleaseUrl, k.Version))
	if err != nil {
		return nil, err
	}

	resp, err := http.Get(u.String())
	if err != nil || resp.StatusCode != 200 {
		return nil, fmt.Errorf("failed to return images for k3s %s from %s", k.Version, u.String())
	}
	defer resp.Body.Close()

	var imgs []string
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		imgs = append(imgs, scanner.Text())
	}

	return imgs, nil
}
211
pkg/fs/fs.go
@@ -1,211 +0,0 @@
package fs

import (
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/spf13/afero"
	"helm.sh/helm/v3/pkg/cli"
	"helm.sh/helm/v3/pkg/downloader"
	"helm.sh/helm/v3/pkg/getter"
	"k8s.io/apimachinery/pkg/util/json"

	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/packager/images"
)

type PkgFs struct {
	FS   *afero.BasePathFs
	root string
}

func NewPkgFS(dir string) PkgFs {
	var p PkgFs
	p.FS = afero.NewBasePathFs(afero.NewOsFs(), dir).(*afero.BasePathFs)

	// TODO: absolutely no way this'll bite us in the butt later...
	abs, _ := filepath.Abs(dir)
	p.root = abs
	return p
}

func (p PkgFs) Path(elem ...string) string {
	complete := []string{p.root}
	return filepath.Join(append(complete, elem...)...)
}

func (p PkgFs) Bundle() PkgFs {
	return PkgFs{
		FS:   afero.NewBasePathFs(p.FS, v1alpha1.BundlesDir).(*afero.BasePathFs),
		root: p.Path(v1alpha1.BundlesDir),
	}
}

func (p PkgFs) Image() PkgFs {
	return PkgFs{
		FS:   afero.NewBasePathFs(p.FS, v1alpha1.LayoutDir).(*afero.BasePathFs),
		root: p.Path(v1alpha1.LayoutDir),
	}
}

func (p PkgFs) Bin() PkgFs {
	return PkgFs{
		FS:   afero.NewBasePathFs(p.FS, v1alpha1.BinDir).(*afero.BasePathFs),
		root: p.Path(v1alpha1.BinDir),
	}
}

func (p PkgFs) Chart() PkgFs {
	return PkgFs{
		FS:   afero.NewBasePathFs(p.FS, v1alpha1.ChartDir).(*afero.BasePathFs),
		root: p.Path(v1alpha1.ChartDir),
	}
}

// AddBundle will add a bundle to a package and all images that are autodetected from it
func (p PkgFs) AddBundle(b *fleetapi.Bundle) (map[name.Reference]v1.Image, error) {
	if err := p.mkdirIfNotExists(v1alpha1.BundlesDir, os.ModePerm); err != nil {
		return nil, err
	}

	data, err := json.Marshal(b)
	if err != nil {
		return nil, err
	}

	if err := p.Bundle().WriteFile(fmt.Sprintf("%s.json", b.Name), data, 0644); err != nil {
		return nil, err
	}

	imgs, err := images.ImageMapFromBundle(b)
	if err != nil {
		return nil, err
	}

	return imgs, nil
}

func (p PkgFs) AddBin(r io.Reader, name string) error {
	if err := p.mkdirIfNotExists(v1alpha1.BinDir, os.ModePerm); err != nil {
		return err
	}

	f, err := p.Bin().FS.OpenFile(name, os.O_WRONLY|os.O_CREATE, 0755)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, r)
	return err
}

// AddImage will add an image to the pkgfs in OCI layout format
// TODO: Extra work is done to ensure this is unique within the index.json
func (p PkgFs) AddImage(ref name.Reference, img v1.Image) error {
	if err := p.mkdirIfNotExists(v1alpha1.LayoutDir, os.ModePerm); err != nil {
		return err
	}

	annotations := make(map[string]string)
	annotations[ocispec.AnnotationRefName] = ref.Name()

	lp, err := p.layout()
	if err != nil {
		return err
	}

	// TODO: Change to ReplaceImage
	return lp.AppendImage(img, layout.WithAnnotations(annotations))
}

// TODO: Not very robust
// For ref: https://github.com/helm/helm/blob/bf486a25cdc12017c7dac74d1582a8a16acd37ea/pkg/action/pull.go#L75
func (p PkgFs) AddChart(ref string, version string) error {
	if err := p.mkdirIfNotExists(v1alpha1.ChartDir, os.ModePerm); err != nil {
		return err
	}

	d := downloader.ChartDownloader{
		Out:     nil,
		Verify:  downloader.VerifyNever,
		Getters: getter.All(cli.New()), // TODO: Probably shouldn't do this...
		Options: []getter.Option{
			getter.WithInsecureSkipVerifyTLS(true),
		},
	}

	_, _, err := d.DownloadTo(ref, version, p.Chart().Path())
	return err
}

func (p PkgFs) layout() (layout.Path, error) {
	path := p.Image().Path(".")
	lp, err := layout.FromPath(path)
	if os.IsNotExist(err) {
		lp, err = layout.Write(path, empty.Index)
	}

	return lp, err
}

// WriteFile is a helper method to write a file within the PkgFs
func (p PkgFs) WriteFile(name string, data []byte, perm os.FileMode) error {
	f, err := p.FS.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return err
	}
	_, err = f.Write(data)
	if err1 := f.Close(); err1 != nil && err == nil {
		err = err1
	}
	return err
}

func (p PkgFs) MapLayout() (map[name.Reference]v1.Image, error) {
	imgRefs := make(map[name.Reference]v1.Image)

	// TODO: Factor this out to a Store interface
	lp, err := p.layout()
	if err != nil {
		return nil, err
	}

	ii, _ := lp.ImageIndex()
	im, _ := ii.IndexManifest()

	for _, m := range im.Manifests {
		ref, err := name.ParseReference(m.Annotations[ocispec.AnnotationRefName])
		if err != nil {
			return nil, err
		}

		img, err := lp.Image(m.Digest)
		if err != nil {
			return nil, err
		}

		imgRefs[ref] = img
	}

	return imgRefs, err
}

// TODO: Is this actually faster than just os.MkdirAll?
func (p PkgFs) mkdirIfNotExists(dir string, perm os.FileMode) error {
	_, err := os.Stat(p.Path(dir))
	if os.IsNotExist(err) {
		if mkdirErr := p.FS.MkdirAll(dir, perm); mkdirErr != nil {
			return mkdirErr
		}
	}

	return nil
}
6
pkg/helmtemplater/docs.go
Normal file
@@ -0,0 +1,6 @@
// Package helmtemplater
//
// This is a near-complete copy of fleet's helmdeployer package, repurposed to work without fleet's helm fork.
// The modifications include:
//   - removing the need for fleet's helm fork by dropping the custom "ForceAdopt" field
//   - removing the majority of the uninstall/install/upgrade helm logic, since hauler only uses fleet's templating engine
package helmtemplater
244
pkg/helmtemplater/helm.go
Normal file
@@ -0,0 +1,244 @@
package helmtemplater

import (
	"time"

	"github.com/rancher/fleet/modules/agent/pkg/deployer"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/rancher/fleet/pkg/manifest"
	"github.com/rancher/fleet/pkg/render"
	corecontrollers "github.com/rancher/wrangler/pkg/generated/controllers/core/v1"
	"github.com/rancher/wrangler/pkg/name"
	"github.com/sirupsen/logrus"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chart"
	"helm.sh/helm/v3/pkg/chart/loader"
	"helm.sh/helm/v3/pkg/kube"
	"helm.sh/helm/v3/pkg/release"
	"k8s.io/cli-runtime/pkg/genericclioptions"
)

type helm struct {
	agentNamespace      string
	serviceAccountCache corecontrollers.ServiceAccountCache
	configmapCache      corecontrollers.ConfigMapCache
	secretCache         corecontrollers.SecretCache
	getter              genericclioptions.RESTClientGetter
	globalCfg           action.Configuration
	useGlobalCfg        bool
	template            bool
	defaultNamespace    string
	labelPrefix         string
}

func (h *helm) Deploy(bundleID string, manifest *manifest.Manifest, options fleetapi.BundleDeploymentOptions) (*deployer.Resources, error) {
	if options.Helm == nil {
		options.Helm = &fleetapi.HelmOptions{}
	}
	if options.Kustomize == nil {
		options.Kustomize = &fleetapi.KustomizeOptions{}
	}

	tar, err := render.ToChart(bundleID, manifest, options)
	if err != nil {
		return nil, err
	}

	chart, err := loader.LoadArchive(tar)
	if err != nil {
		return nil, err
	}

	if chart.Metadata.Annotations == nil {
		chart.Metadata.Annotations = map[string]string{}
	}
	chart.Metadata.Annotations[ServiceAccountNameAnnotation] = options.ServiceAccount
	chart.Metadata.Annotations[BundleIDAnnotation] = bundleID
	chart.Metadata.Annotations[AgentNamespaceAnnotation] = h.agentNamespace
	if manifest.Commit != "" {
		chart.Metadata.Annotations[CommitAnnotation] = manifest.Commit
	}

	if resources, err := h.install(bundleID, manifest, chart, options, true); err != nil {
		return nil, err
	} else if h.template {
		return releaseToResources(resources)
	}

	release, err := h.install(bundleID, manifest, chart, options, false)
	if err != nil {
		return nil, err
	}

	return releaseToResources(release)
}

func (h *helm) install(bundleID string, manifest *manifest.Manifest, chart *chart.Chart, options fleetapi.BundleDeploymentOptions, dryRun bool) (*release.Release, error) {
	timeout, defaultNamespace, releaseName := h.getOpts(bundleID, options)

	values, err := h.getValues(options, defaultNamespace)
	if err != nil {
		return nil, err
	}

	cfg, err := h.getCfg(defaultNamespace, options.ServiceAccount)
	if err != nil {
		return nil, err
	}

	pr := &postRender{
		labelPrefix: h.labelPrefix,
		bundleID:    bundleID,
		manifest:    manifest,
		opts:        options,
		chart:       chart,
	}

	if !h.useGlobalCfg {
		mapper, err := cfg.RESTClientGetter.ToRESTMapper()
		if err != nil {
			return nil, err
		}
		pr.mapper = mapper
	}

	u := action.NewInstall(&cfg)
	u.ClientOnly = h.template || dryRun

	// NOTE: All this copy pasta for this :|
	// u.ForceAdopt = options.Helm.TakeOwnership

	u.Replace = true
	u.ReleaseName = releaseName
	u.CreateNamespace = true
	u.Namespace = defaultNamespace
	u.Timeout = timeout
	u.DryRun = dryRun
	u.PostRenderer = pr
	if u.Timeout > 0 {
		u.Wait = true
	}
	if !dryRun {
		logrus.Infof("Helm: Installing %s", bundleID)
	}
	return u.Run(chart, values)
}

func (h *helm) getCfg(namespace, serviceAccountName string) (action.Configuration, error) {
	var (
		cfg    action.Configuration
		getter = h.getter
	)

	if h.useGlobalCfg {
		return h.globalCfg, nil
	}

	serviceAccountNamespace, serviceAccountName, err := h.getServiceAccount(serviceAccountName)
	if err != nil {
		return cfg, err
	}

	if serviceAccountName != "" {
		getter, err = newImpersonatingGetter(serviceAccountNamespace, serviceAccountName, h.getter)
		if err != nil {
			return cfg, err
		}
	}

	kClient := kube.New(getter)
	kClient.Namespace = namespace

	err = cfg.Init(getter, namespace, "secrets", logrus.Infof)
	cfg.Releases.MaxHistory = 5
	cfg.KubeClient = kClient

	return cfg, err
}

func (h *helm) getOpts(bundleID string, options fleetapi.BundleDeploymentOptions) (time.Duration, string, string) {
	if options.Helm == nil {
		options.Helm = &fleetapi.HelmOptions{}
	}

	var timeout time.Duration
	if options.Helm.TimeoutSeconds > 0 {
		timeout = time.Second * time.Duration(options.Helm.TimeoutSeconds)
	}

	if options.TargetNamespace != "" {
		options.DefaultNamespace = options.TargetNamespace
	}

	if options.DefaultNamespace == "" {
		options.DefaultNamespace = h.defaultNamespace
	}

	// releaseName has a limit of 53 in helm https://github.com/helm/helm/blob/main/pkg/action/install.go#L58
	releaseName := name.Limit(bundleID, 53)
	if options.Helm != nil && options.Helm.ReleaseName != "" {
		releaseName = options.Helm.ReleaseName
	}

	return timeout, options.DefaultNamespace, releaseName
}

func (h *helm) getValues(options fleetapi.BundleDeploymentOptions, defaultNamespace string) (map[string]interface{}, error) {
	if options.Helm == nil {
		return nil, nil
	}

	var values map[string]interface{}
	if options.Helm.Values != nil {
		values = options.Helm.Values.Data
	}

	// do not run this when using template
	if !h.template {
		for _, valuesFrom := range options.Helm.ValuesFrom {
			var tempValues map[string]interface{}
			if valuesFrom.SecretKeyRef != nil {
				name := valuesFrom.SecretKeyRef.Name
				namespace := valuesFrom.SecretKeyRef.Namespace
				if namespace == "" {
					namespace = defaultNamespace
				}
				key := valuesFrom.SecretKeyRef.Key
				if key == "" {
					key = DefaultKey
				}
				secret, err := h.secretCache.Get(namespace, name)
				if err != nil {
					return nil, err
				}
				tempValues, err = processValuesFromObject(name, namespace, key, secret, nil)
				if err != nil {
					return nil, err
				}
			} else if valuesFrom.ConfigMapKeyRef != nil {
				name := valuesFrom.ConfigMapKeyRef.Name
				namespace := valuesFrom.ConfigMapKeyRef.Namespace
				if namespace == "" {
					namespace = defaultNamespace
				}
				key := valuesFrom.ConfigMapKeyRef.Key
				if key == "" {
					key = DefaultKey
				}
				configMap, err := h.configmapCache.Get(namespace, name)
				if err != nil {
					return nil, err
				}
				tempValues, err = processValuesFromObject(name, namespace, key, nil, configMap)
				if err != nil {
					return nil, err
				}
			}
			if tempValues != nil {
				values = mergeValues(values, tempValues)
			}
		}
	}

	return values, nil
}
82
pkg/helmtemplater/impersonate.go
Normal file
@@ -0,0 +1,82 @@
package helmtemplater

import (
	"fmt"

	"github.com/rancher/wrangler/pkg/ratelimit"
	apierror "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/cli-runtime/pkg/genericclioptions"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func (h *helm) getServiceAccount(name string) (string, string, error) {
	currentName := name
	if currentName == "" {
		currentName = DefaultServiceAccount
	}
	_, err := h.serviceAccountCache.Get(h.agentNamespace, currentName)
	if apierror.IsNotFound(err) && name == "" {
		// if we can't find the service account, but none was asked for, don't use any
		return "", "", nil
	} else if err != nil {
		return "", "", fmt.Errorf("looking up service account %s/%s: %w", h.agentNamespace, currentName, err)
	}
	return h.agentNamespace, currentName, nil
}

type impersonatingGetter struct {
	genericclioptions.RESTClientGetter

	config     clientcmd.ClientConfig
	restConfig *rest.Config
}

func newImpersonatingGetter(namespace, name string, getter genericclioptions.RESTClientGetter) (genericclioptions.RESTClientGetter, error) {
	config := clientcmd.NewDefaultClientConfig(impersonationConfig(namespace, name), &clientcmd.ConfigOverrides{})

	restConfig, err := config.ClientConfig()
	if err != nil {
		return nil, err
	}
	restConfig.RateLimiter = ratelimit.None

	return &impersonatingGetter{
		RESTClientGetter: getter,
		config:           config,
		restConfig:       restConfig,
	}, nil
}

func (i *impersonatingGetter) ToRESTConfig() (*rest.Config, error) {
	return i.restConfig, nil
}

func (i *impersonatingGetter) ToRawKubeConfigLoader() clientcmd.ClientConfig {
	return i.config
}

func impersonationConfig(namespace, name string) clientcmdapi.Config {
	return clientcmdapi.Config{
		Clusters: map[string]*clientcmdapi.Cluster{
			"cluster": {
				Server:               "https://kubernetes.default",
				CertificateAuthority: "/run/secrets/kubernetes.io/serviceaccount/ca.crt",
			},
		},
		AuthInfos: map[string]*clientcmdapi.AuthInfo{
			"user": {
				TokenFile:   "/run/secrets/kubernetes.io/serviceaccount/token",
				Impersonate: fmt.Sprintf("system:serviceaccount:%s:%s", namespace, name),
			},
		},
		Contexts: map[string]*clientcmdapi.Context{
			"default": {
				Cluster:  "cluster",
				AuthInfo: "user",
			},
		},
		CurrentContext: "default",
	}
}
89
pkg/helmtemplater/postrenderer.go
Normal file
@@ -0,0 +1,89 @@
package helmtemplater

import (
	"bytes"
	"fmt"

	"github.com/rancher/fleet/modules/agent/pkg/deployer"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/rancher/fleet/pkg/kustomize"
	"github.com/rancher/fleet/pkg/manifest"
	"github.com/rancher/fleet/pkg/rawyaml"
	"github.com/rancher/wrangler/pkg/apply"
	"github.com/rancher/wrangler/pkg/yaml"
	"helm.sh/helm/v3/pkg/chart"
	"k8s.io/apimachinery/pkg/api/meta"
)

type postRender struct {
	labelPrefix string
	bundleID    string
	manifest    *manifest.Manifest
	chart       *chart.Chart
	mapper      meta.RESTMapper
	opts        fleetapi.BundleDeploymentOptions
}

func (p *postRender) Run(renderedManifests *bytes.Buffer) (modifiedManifests *bytes.Buffer, err error) {
	data := renderedManifests.Bytes()

	objs, err := yaml.ToObjects(bytes.NewBuffer(data))
	if err != nil {
		return nil, err
	}

	if len(objs) == 0 {
		data = nil
	}

	newObjs, processed, err := kustomize.Process(p.manifest, data, p.opts.Kustomize.Dir)
	if err != nil {
		return nil, err
	}
	if processed {
		objs = newObjs
	}

	yamlObjs, err := rawyaml.ToObjects(p.chart)
	if err != nil {
		return nil, err
	}
	objs = append(objs, yamlObjs...)

	labels, annotations, err := apply.GetLabelsAndAnnotations(p.GetSetID(), nil)
	if err != nil {
		return nil, err
	}

	for _, obj := range objs {
		m, err := meta.Accessor(obj)
		if err != nil {
			return nil, err
		}
		m.SetLabels(mergeMaps(m.GetLabels(), labels))
		m.SetAnnotations(mergeMaps(m.GetAnnotations(), annotations))

		if p.opts.TargetNamespace != "" {
			if p.mapper != nil {
				gvk := obj.GetObjectKind().GroupVersionKind()
				mapping, err := p.mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
				if err != nil {
					return nil, err
				}
				if mapping.Scope.Name() == meta.RESTScopeNameRoot {
					apiVersion, kind := gvk.ToAPIVersionAndKind()
					return nil, fmt.Errorf("invalid cluster scoped object [name=%s kind=%v apiVersion=%s] found, consider using \"defaultNamespace\", not \"namespace\" in fleet.yaml", m.GetName(),
						kind, apiVersion)
				}
			}
			m.SetNamespace(p.opts.TargetNamespace)
		}
	}

	data, err = yaml.ToBytes(objs)
	return bytes.NewBuffer(data), err
}

func (p *postRender) GetSetID() string {
	return deployer.GetSetID(p.bundleID, p.labelPrefix)
}
133
pkg/helmtemplater/templater.go
Normal file
@@ -0,0 +1,133 @@
package helmtemplater

import (
	"bytes"
	"errors"
	"fmt"
	"io/ioutil"

	"github.com/rancher/fleet/modules/agent/pkg/deployer"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/rancher/fleet/pkg/manifest"
	"github.com/rancher/wrangler/pkg/yaml"
	"github.com/sirupsen/logrus"
	"helm.sh/helm/v3/pkg/action"
	"helm.sh/helm/v3/pkg/chartutil"
	"helm.sh/helm/v3/pkg/kube/fake"
	"helm.sh/helm/v3/pkg/release"
	"helm.sh/helm/v3/pkg/storage"
	"helm.sh/helm/v3/pkg/storage/driver"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

const (
	BundleIDAnnotation           = "fleet.cattle.io/bundle-id"
	CommitAnnotation             = "fleet.cattle.io/commit"
	AgentNamespaceAnnotation     = "fleet.cattle.io/agent-namespace"
	ServiceAccountNameAnnotation = "fleet.cattle.io/service-account"
	DefaultServiceAccount        = "fleet-default"
)

var (
	ErrNoRelease = errors.New("failed to find release")
	DefaultKey   = "values.yaml"
)

func Template(bundleID string, manifest *manifest.Manifest, options fleetapi.BundleDeploymentOptions) ([]runtime.Object, error) {
	h := &helm{
		globalCfg:    action.Configuration{},
		useGlobalCfg: true,
		template:     true,
	}

	mem := driver.NewMemory()
	mem.SetNamespace("default")

	h.globalCfg.Capabilities = chartutil.DefaultCapabilities
	h.globalCfg.KubeClient = &fake.PrintingKubeClient{Out: ioutil.Discard}
	h.globalCfg.Log = logrus.Infof
	h.globalCfg.Releases = storage.Init(mem)

	resources, err := h.Deploy(bundleID, manifest, options)
	if err != nil {
		return nil, err
	}

	return resources.Objects, nil
}

func releaseToResources(release *release.Release) (*deployer.Resources, error) {
	var err error
	resources := &deployer.Resources{
		DefaultNamespace: release.Namespace,
		ID:               fmt.Sprintf("%s/%s:%d", release.Namespace, release.Name, release.Version),
	}

	resources.Objects, err = yaml.ToObjects(bytes.NewBufferString(release.Manifest))
	return resources, err
}

func processValuesFromObject(name, namespace, key string, secret *corev1.Secret, configMap *corev1.ConfigMap) (map[string]interface{}, error) {
	var m map[string]interface{}
	if secret != nil {
		values, ok := secret.Data[key]
		if !ok {
			return nil, fmt.Errorf("key %s is missing from secret %s/%s, can't use it in valuesFrom", key, namespace, name)
		}
		if err := yaml.Unmarshal(values, &m); err != nil {
			return nil, err
		}
	} else if configMap != nil {
		values, ok := configMap.Data[key]
		if !ok {
			return nil, fmt.Errorf("key %s is missing from configmap %s/%s, can't use it in valuesFrom", key, namespace, name)
		}
		if err := yaml.Unmarshal([]byte(values), &m); err != nil {
			return nil, err
		}
	}
	return m, nil
}

func mergeMaps(base, other map[string]string) map[string]string {
	result := map[string]string{}
	for k, v := range base {
		result[k] = v
	}
	for k, v := range other {
		result[k] = v
	}
	return result
}

// mergeValues merges source and destination map, preferring values
// from the source values. This is slightly adapted from:
// https://github.com/helm/helm/blob/2332b480c9cb70a0d8a85247992d6155fbe82416/cmd/helm/install.go#L359
func mergeValues(dest, src map[string]interface{}) map[string]interface{} {
	for k, v := range src {
		// If the key doesn't exist already, then just set the key to that value
		if _, exists := dest[k]; !exists {
			dest[k] = v
			continue
		}
		nextMap, ok := v.(map[string]interface{})
		// If it isn't another map, overwrite the value
		if !ok {
			dest[k] = v
			continue
		}
		// Edge case: If the key exists in the destination, but isn't a map
		destMap, isMap := dest[k].(map[string]interface{})
		// If the source map has a map for this key, prefer it
		if !isMap {
			dest[k] = v
			continue
		}
		// If we got to this point, it is a map in both, so merge them
		dest[k] = mergeValues(destMap, nextMap)
	}
	return dest
}
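The recursive precedence rule in `mergeValues` is the part of the templater most worth a concrete check: source values win at every level, but sibling keys in nested maps survive. A standalone sketch of those semantics (the helper is reproduced here outside its package, and the sample values are invented for illustration):

```go
package main

import "fmt"

// mergeValues merges src into dest, preferring src values; when both sides
// hold a map for the same key, the maps are merged recursively instead of
// replaced wholesale (same behavior as the helper above).
func mergeValues(dest, src map[string]interface{}) map[string]interface{} {
	for k, v := range src {
		if nextMap, ok := v.(map[string]interface{}); ok {
			if destMap, ok := dest[k].(map[string]interface{}); ok {
				dest[k] = mergeValues(destMap, nextMap)
				continue
			}
		}
		dest[k] = v
	}
	return dest
}

func main() {
	base := map[string]interface{}{
		"replicas": 1,
		"image":    map[string]interface{}{"tag": "v1", "pullPolicy": "IfNotPresent"},
	}
	override := map[string]interface{}{
		"replicas": 3,
		"image":    map[string]interface{}{"tag": "v2"},
	}
	merged := mergeValues(base, override)
	fmt.Println(merged["replicas"])                                     // 3
	fmt.Println(merged["image"].(map[string]interface{})["tag"])        // v2
	fmt.Println(merged["image"].(map[string]interface{})["pullPolicy"]) // IfNotPresent
}
```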
@@ -1,39 +0,0 @@
package kube

import (
	"path/filepath"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func NewKubeConfig() (*rest.Config, error) {
	loadingRules := &clientcmd.ClientConfigLoadingRules{
		Precedence: []string{
			filepath.Join("/etc/rancher/k3s/k3s.yaml"),
			filepath.Join("/etc/rancher/rke2/rke2.yaml"),
		},
		WarnIfAllMissing: true,
	}

	cfgOverrides := &clientcmd.ConfigOverrides{}

	kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, cfgOverrides)

	return kubeConfig.ClientConfig()
}

// NewClient returns a fresh kube client
func NewClient() (client.Client, error) {
	cfg, err := NewKubeConfig()
	if err != nil {
		return nil, err
	}

	scheme := runtime.NewScheme()

	return client.New(cfg, client.Options{
		Scheme: scheme,
	})
}
@@ -1,92 +0,0 @@
package kube

import (
	"context"
	"errors"
	"fmt"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/aggregator"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/collector"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/event"
	"sigs.k8s.io/cli-utils/pkg/kstatus/status"
	"sigs.k8s.io/cli-utils/pkg/object"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
	"strings"
	"time"
)

type StatusChecker struct {
	poller *polling.StatusPoller
	client client.Client

	interval time.Duration
	timeout  time.Duration
}

func NewStatusChecker(kubeConfig *rest.Config, interval time.Duration, timeout time.Duration) (*StatusChecker, error) {
	restMapper, err := apiutil.NewDynamicRESTMapper(kubeConfig)
	if err != nil {
		return nil, err
	}

	c, err := client.New(kubeConfig, client.Options{Mapper: restMapper})
	if err != nil {
		return nil, err
	}

	return &StatusChecker{
		poller:   polling.NewStatusPoller(c, restMapper),
		client:   c,
		interval: interval,
		timeout:  timeout,
	}, nil
}

func (c *StatusChecker) WaitForCondition(objs ...object.ObjMetadata) error {
	ctx, cancel := context.WithTimeout(context.Background(), c.timeout)
	defer cancel()

	eventsChan := c.poller.Poll(ctx, objs, polling.Options{
		PollInterval: c.interval,
		UseCache:     true,
	})
	coll := collector.NewResourceStatusCollector(objs)

	done := coll.ListenWithObserver(eventsChan, desiredStatusNotifierFunc(cancel, status.CurrentStatus))
	<-done

	for _, rs := range coll.ResourceStatuses {
		switch rs.Status {
		case status.CurrentStatus:
			fmt.Printf("%s: %s ready\n", rs.Identifier.Name, strings.ToLower(rs.Identifier.GroupKind.Kind))
		case status.NotFoundStatus:
			fmt.Println(fmt.Errorf("%s: %s not found", rs.Identifier.Name, strings.ToLower(rs.Identifier.GroupKind.Kind)))
		default:
			fmt.Println(fmt.Errorf("%s: %s not ready", rs.Identifier.Name, strings.ToLower(rs.Identifier.GroupKind.Kind)))
		}
	}

	if coll.Error != nil || ctx.Err() == context.DeadlineExceeded {
		return errors.New("timed out waiting for condition")
	}

	return nil
}

// desiredStatusNotifierFunc returns an Observer function for the
// ResourceStatusCollector that will cancel the context (using the cancelFunc)
// when all resources have reached the desired status.
func desiredStatusNotifierFunc(cancelFunc context.CancelFunc, desired status.Status) collector.ObserverFunc {
	return func(rsc *collector.ResourceStatusCollector, _ event.Event) {
		var rss []*event.ResourceStatus
		for _, rs := range rsc.ResourceStatuses {
			rss = append(rss, rs)
		}
		aggStatus := aggregator.AggregateStatus(rss, desired)
		if aggStatus == desired {
			cancelFunc()
		}
	}
}
108	pkg/log/log.go
@@ -1,23 +1,30 @@
package log

import (
	"github.com/pterm/pterm"
	"context"
	"io"
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

type Logger interface {
	SetLevel(string)
	With(Fields) *logger
	WithContext(context.Context) context.Context
	Errorf(string, ...interface{})
	Infof(string, ...interface{})
	Warnf(string, ...interface{})
	Debugf(string, ...interface{})
	Successf(string, ...interface{})
}

type standardLogger struct {
	//TODO: Actually check this
	level string
type logger struct {
	zl zerolog.Logger
}

type Fields map[string]string

type Event struct {
	id      int
	message string
@@ -27,47 +34,60 @@ var (
	invalidArgMessage = Event{1, "Invalid arg: %s"}
)

func NewLogger(out io.Writer) *standardLogger {
	return &standardLogger{}
}

func (l *standardLogger) Errorf(format string, args ...interface{}) {
	l.logf("error", format, args...)
}

func (l *standardLogger) Infof(format string, args ...interface{}) {
	l.logf("info", format, args...)
}

func (l *standardLogger) Warnf(format string, args ...interface{}) {
	l.logf("warn", format, args...)
}

func (l *standardLogger) Debugf(format string, args ...interface{}) {
	l.logf("debug", format, args...)
}

func (l *standardLogger) Successf(format string, args ...interface{}) {
	l.logf("success", format, args...)
}

func (l *standardLogger) logf(level string, format string, args ...interface{}) {
	switch level {
	case "debug":
		pterm.Debug.Printfln(format, args...)
	case "info":
		pterm.Info.Printfln(format, args...)
	case "warn":
		pterm.Warning.Printfln(format, args...)
	case "success":
		pterm.Success.Printfln(format, args...)
	case "error":
		pterm.Error.Printfln(format, args...)
	default:
		pterm.Error.Printfln("%s is not a valid log level", level)
func NewLogger(out io.Writer) Logger {
	l := log.Output(zerolog.ConsoleWriter{Out: os.Stderr})
	return &logger{
		zl: l.With().Timestamp().Logger(),
	}
}

func (l *standardLogger) InvalidArg(arg string) {
func FromContext(ctx context.Context) Logger {
	zl := zerolog.Ctx(ctx)
	return &logger{
		zl: *zl,
	}
}

func (l *logger) SetLevel(level string) {
	lvl, err := zerolog.ParseLevel(level)
	if err != nil {
		lvl, _ = zerolog.ParseLevel("info")
	}

	zerolog.SetGlobalLevel(lvl)
}

func (l *logger) WithContext(ctx context.Context) context.Context {
	return l.zl.WithContext(ctx)
}

func (l *logger) With(fields Fields) *logger {
	zl := l.zl.With()
	for k, v := range fields {
		zl = zl.Str(k, v)
	}

	return &logger{
		zl: zl.Logger(),
	}
}

func (l *logger) Errorf(format string, args ...interface{}) {
	l.zl.Error().Msgf(format, args...)
}

func (l *logger) Infof(format string, args ...interface{}) {
	l.zl.Info().Msgf(format, args...)
}

func (l *logger) Warnf(format string, args ...interface{}) {
	l.zl.Warn().Msgf(format, args...)
}

func (l *logger) Debugf(format string, args ...interface{}) {
	l.zl.Debug().Msgf(format, args...)
}

func (l *logger) InvalidArg(arg string) {
	l.Errorf(invalidArgMessage.message, arg)
}

@@ -1,40 +0,0 @@
package oci

import (
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/layout"
)

const refNameAnnotation = "org.opencontainers.image.ref.name"

func getIndexManifestsDescriptors(layout layout.Path) []v1.Descriptor {
	imageIndex, err := layout.ImageIndex()
	if err != nil {
		return nil
	}

	indexManifests, err := imageIndex.IndexManifest()
	if err != nil {
		return nil
	}

	return indexManifests.Manifests
}

func ListDigests(layout layout.Path) []v1.Hash {
	var digests []v1.Hash
	for _, desc := range getIndexManifestsDescriptors(layout) {
		digests = append(digests, desc.Digest)
	}
	return digests
}

func ListImages(layout layout.Path) map[string]v1.Hash {
	images := make(map[string]v1.Hash)
	for _, desc := range getIndexManifestsDescriptors(layout) {
		if image, ok := desc.Annotations[refNameAnnotation]; ok {
			images[image] = desc.Digest
		}
	}
	return images
}
@@ -1,74 +0,0 @@
package oci

import (
	"fmt"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"os"
	"testing"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/random"
)

func Test_ListImages(t *testing.T) {
	tmpdir, err := os.MkdirTemp(".", "hauler")
	if err != nil {
		t.Errorf("failed to setup test scaffolding: %v", err)
	}
	defer os.RemoveAll(tmpdir)

	img, err := random.Image(1024, 5)

	if err != nil {
		fmt.Printf("error creating test image: %v", err)
	}

	ly, err := createLayout(img, tmpdir)
	if err != nil {
		t.Errorf("%v", err)
	}

	dg, err := getDigest(img)
	if err != nil {
		t.Errorf("%v", err)
	}

	m := ListImages(ly)

	for _, hash := range m {
		if hash != dg {
			t.Errorf("error got %v want %v", hash, dg)
		}
	}

}

func createLayout(img v1.Image, path string) (layout.Path, error) {
	p, err := layout.FromPath(path)
	if os.IsNotExist(err) {
		p, err = layout.Write(path, empty.Index)
		if err != nil {
			return "", err
		}
	}

	if err != nil {
		return "", fmt.Errorf("error creating layout: %v", err)
	}
	if err := p.AppendImage(img); err != nil {
		return "", err
	}

	return p, nil
}

func getDigest(img v1.Image) (v1.Hash, error) {
	digest, err := img.Digest()

	if err != nil {
		return v1.Hash{}, fmt.Errorf("error getting digest: %v", err)
	}

	return digest, nil
}
@@ -1,79 +0,0 @@
package oci

import (
	"context"
	"fmt"
	"os"

	"github.com/containerd/containerd/remotes"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/deislabs/oras/pkg/content"
	"github.com/deislabs/oras/pkg/oras"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

const (
	haulerMediaType = "application/vnd.oci.image"
)

// Get wraps the oras go module to get artifacts from a registry
func Get(ctx context.Context, src string, dst string) error {

	store := content.NewFileStore(dst)
	defer store.Close()

	resolver, err := resolver()
	if err != nil {
		return err
	}

	allowedMediaTypes := []string{
		haulerMediaType,
	}

	// Pull file(s) from registry and save to disk
	fmt.Printf("pulling from %s and saving to %s\n", src, dst)
	desc, _, err := oras.Pull(ctx, resolver, src, store, oras.WithAllowedMediaTypes(allowedMediaTypes))

	if err != nil {
		return err
	}

	fmt.Printf("pulled from %s with digest %s\n", src, desc.Digest)

	return nil
}

// Put wraps the oras go module to put artifacts into a registry
func Put(ctx context.Context, src string, dst string) error {

	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}

	resolver, err := resolver()
	if err != nil {
		return err
	}

	store := content.NewMemoryStore()

	contents := []ocispec.Descriptor{
		store.Add(src, haulerMediaType, data),
	}

	desc, err := oras.Push(ctx, resolver, dst, store, contents)
	if err != nil {
		return err
	}

	fmt.Printf("pushed %s to %s with digest: %s", src, dst, desc.Digest)

	return nil
}

func resolver() (remotes.Resolver, error) {
	resolver := docker.NewResolver(docker.ResolverOptions{PlainHTTP: true})
	return resolver, nil
}
@@ -1,59 +0,0 @@
package oci

import (
	"context"
	"fmt"
	"io/ioutil"
	"net/http/httptest"
	"net/url"
	"os"
	"testing"
	"time"

	"github.com/google/go-containerregistry/pkg/registry"
)

const timeout = 1 * time.Minute

func Test_Get_Put(t *testing.T) {

	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// Set up a fake registry.
	s := httptest.NewServer(registry.New())
	defer s.Close()

	u, err := url.Parse(s.URL)
	if err != nil {
		t.Fatal(err)
	}

	file, err := ioutil.TempFile(".", "artifact.txt")
	if err != nil {
		t.Fatal(err)
	}

	text := []byte("Some stuff!")
	if _, err = file.Write(text); err != nil {
		t.Fatal(err)
	}

	img := fmt.Sprintf("%s/artifact:latest", u.Host)

	if err := Put(ctx, file.Name(), img); err != nil {
		t.Fatal(err)
	}

	dir, err := ioutil.TempDir(".", "tmp")
	if err != nil {
		t.Fatal(err)
	}

	if err := Get(ctx, img, dir); err != nil {
		t.Fatal(err)
	}

	defer os.Remove(file.Name())
	defer os.RemoveAll(dir)
}
@@ -1,48 +0,0 @@
package packager

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/mholt/archiver/v3"
)

type Archiver interface {
	String() string

	Archive([]string, string) error
	Unarchive(string, string) error
}

func NewArchiver() Archiver {
	return &archiver.TarZstd{
		Tar: &archiver.Tar{
			OverwriteExisting:      true,
			MkdirAll:               true,
			ImplicitTopLevelFolder: false,
			StripComponents:        0,
			ContinueOnError:        false,
		},
	}
}

func Package(a Archiver, src string, output string) error {
	cwd, err := os.Getwd()
	if err != nil {
		return err
	}
	defer os.Chdir(cwd)

	err = os.Chdir(src)
	if err != nil {
		return err
	}

	path := filepath.Join(cwd, fmt.Sprintf("%s.%s", output, a.String()))
	return a.Archive([]string{"."}, path)
}

func Unpackage(a Archiver, src, dest string) error {
	return a.Unarchive(src, dest)
}
@@ -1,164 +0,0 @@
package images

import (
	"bytes"
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/rancher/fleet/pkg/helmdeployer"
	"github.com/rancher/fleet/pkg/manifest"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/util/json"
	"k8s.io/client-go/util/jsonpath"
	"strings"
)

type Imager interface {
	Images() ([]string, error)
}

type discoveredImages []string

func (d discoveredImages) Images() ([]string, error) {
	return d, nil
}

//MapImager will gather images from various Imager sources and return a single slice
func MapImager(imager ...Imager) (map[name.Reference]v1.Image, error) {
	m := make(map[name.Reference]v1.Image)

	for _, i := range imager {
		ims, err := i.Images()
		if err != nil {
			return nil, err
		}

		remoteMap, err := ResolveRemoteRefs(ims...)
		if err != nil {
			return nil, err
		}

		//TODO: Is there a more efficient way to merge?
		for k, v := range remoteMap {
			m[k] = v
		}
	}

	return m, nil
}

func ImageMapFromBundle(b *fleetapi.Bundle) (map[name.Reference]v1.Image, error) {
	opts := fleetapi.BundleDeploymentOptions{}

	//TODO: Why doesn't fleet do this...
	if b.Spec.Helm != nil {
		opts.Helm = b.Spec.Helm
	}

	if b.Spec.Kustomize != nil {
		opts.Kustomize = b.Spec.Kustomize
	}

	if b.Spec.YAML != nil {
		opts.YAML = b.Spec.YAML
	}

	m, err := manifest.New(&b.Spec)
	if err != nil {
		return nil, err
	}

	//TODO: I think this is right?
	objs, err := helmdeployer.Template(b.Name, m, opts)
	if err != nil {
		return nil, err
	}

	var di discoveredImages
	for _, o := range objs {
		imgs, err := imageFromRuntimeObject(o.(*unstructured.Unstructured))
		if err != nil {
			return nil, err
		}
		di = append(di, imgs...)
	}

	return ResolveRemoteRefs(di...)
}

//ResolveRemoteRefs will return a slice of remote images resolved from their fully qualified name
func ResolveRemoteRefs(images ...string) (map[name.Reference]v1.Image, error) {
	m := make(map[name.Reference]v1.Image)

	for _, i := range images {
		if i == "" {
			continue
		}

		//TODO: This will error out if remote is a v1 image, do better error handling for this
		ref, err := name.ParseReference(i)
		if err != nil {
			return nil, err
		}

		img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
		if err != nil {
			return nil, err
		}

		m[ref] = img
	}

	return m, nil
}

//TODO: Add user defined paths
var knownImagePaths = []string{
	// Deployments & DaemonSets
	"{.spec.template.spec.initContainers[*].image}",
	"{.spec.template.spec.containers[*].image}",

	// Pods
	"{.spec.initContainers[*].image}",
	"{.spec.containers[*].image}",
}

//imageFromRuntimeObject will return any images found in known obj specs
func imageFromRuntimeObject(obj *unstructured.Unstructured) (images []string, err error) {
	objData, _ := obj.MarshalJSON()

	var data interface{}
	if err := json.Unmarshal(objData, &data); err != nil {
		return nil, err
	}

	j := jsonpath.New("")
	j.AllowMissingKeys(true)

	for _, path := range knownImagePaths {
		r, err := parseJSONPath(data, j, path)
		if err != nil {
			return nil, err
		}

		images = append(images, r...)
	}

	return images, nil
}

func parseJSONPath(input interface{}, parser *jsonpath.JSONPath, template string) ([]string, error) {
	buf := new(bytes.Buffer)
	if err := parser.Parse(template); err != nil {
		return nil, err
	}
	if err := parser.Execute(buf, input); err != nil {
		return nil, err
	}

	f := func(s rune) bool { return s == ' ' }
	r := strings.FieldsFunc(buf.String(), f)
	return r, nil
}
@@ -1,84 +0,0 @@
package images

import (
	"k8s.io/apimachinery/pkg/util/json"
	"k8s.io/client-go/util/jsonpath"
	"reflect"
	"testing"
)

var (
	jsona = []byte(`{
	"flatImage": "name/of/image:with-tag",
	"deeply": {
		"nested": {
			"image": "another/image/name:with-a-tag",
			"set": [
				{ "image": "first/in/list:123" },
				{ "image": "second/in:456" }
			]
		}
	}
}`)
)

func Test_parseJSONPath(t *testing.T) {
	var data interface{}
	if err := json.Unmarshal(jsona, &data); err != nil {
		t.Errorf("failed to unmarshal test article, %v", err)
	}

	j := jsonpath.New("")

	type args struct {
		input    interface{}
		name     string
		template string
	}
	tests := []struct {
		name    string
		args    args
		want    []string
		wantErr bool
	}{
		{
			name: "should find flat path with string result",
			args: args{
				input:    data,
				name:     "wut",
				template: "{.flatImage}",
			},
			want: []string{"name/of/image:with-tag"},
		},
		{
			name: "should find nested path with string result",
			args: args{
				input:    data,
				name:     "wut",
				template: "{.deeply.nested.image}",
			},
			want: []string{"another/image/name:with-a-tag"},
		},
		{
			name: "should find nested path with slice result",
			args: args{
				input:    data,
				name:     "wut",
				template: "{.deeply.nested.set[*].image}",
			},
			want: []string{"first/in/list:123", "second/in:456"},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := parseJSONPath(tt.args.input, j, tt.args.template)
			if (err != nil) != tt.wantErr {
				t.Errorf("parseJSONPath() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("parseJSONPath() got = %v, want %v", got, tt.want)
			}
		})
	}
}
@@ -1,177 +0,0 @@
package packager

import (
	"context"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	fleetapi "github.com/rancher/fleet/pkg/apis/fleet.cattle.io/v1alpha1"
	"github.com/rancher/fleet/pkg/bundle"
	"github.com/rancherfederal/hauler/pkg/apis/hauler.cattle.io/v1alpha1"
	"github.com/rancherfederal/hauler/pkg/driver"
	"github.com/rancherfederal/hauler/pkg/fs"
	"github.com/rancherfederal/hauler/pkg/log"
	"github.com/rancherfederal/hauler/pkg/packager/images"
	"k8s.io/apimachinery/pkg/util/json"
	"path/filepath"
)

type Packager interface {
	Archive(Archiver, v1alpha1.Package, string) error

	PackageBundles(context.Context, ...string) ([]*fleetapi.Bundle, error)

	PackageDriver(context.Context, driver.Driver) error

	PackageFleet(context.Context, v1alpha1.Fleet) error

	PackageImages(context.Context, ...string) error
}

type pkg struct {
	fs fs.PkgFs

	logger log.Logger
}

//NewPackager loads a new packager given a path on disk
func NewPackager(path string, logger log.Logger) Packager {
	return pkg{
		fs:     fs.NewPkgFS(path),
		logger: logger,
	}
}

func (p pkg) Archive(a Archiver, pkg v1alpha1.Package, output string) error {
	data, err := json.Marshal(pkg)
	if err != nil {
		return err
	}

	if err = p.fs.WriteFile("package.json", data, 0644); err != nil {
		return err
	}

	return Package(a, p.fs.Path(), output)
}

func (p pkg) PackageBundles(ctx context.Context, path ...string) ([]*fleetapi.Bundle, error) {
	p.logger.Infof("Packaging %d bundle(s)", len(path))

	opts := &bundle.Options{
		Compress: true,
	}

	var cImgs int

	var bundles []*fleetapi.Bundle
	for _, pth := range path {
		p.logger.Infof("Creating bundle from path: %s", pth)

		bundleName := filepath.Base(pth)
		fb, err := bundle.Open(ctx, bundleName, pth, "", opts)
		if err != nil {
			return nil, err
		}
		//TODO: Figure out why bundle.Open doesn't return with GVK
		bn := fleetapi.NewBundle("fleet-local", bundleName, *fb.Definition)

		imgs, err := p.fs.AddBundle(bn)
		if err != nil {
			return nil, err
		}

		if err := p.pkgImages(ctx, imgs); err != nil {
			return nil, err
		}

		bundles = append(bundles, bn)
		cImgs += len(imgs)
	}

	p.logger.Successf("Finished packaging %d bundle(s) along with %d autodetected image(s)", len(path), cImgs)
	return bundles, nil
}

func (p pkg) PackageDriver(ctx context.Context, d driver.Driver) error {
	p.logger.Infof("Packaging %s components", d.Name())

	p.logger.Infof("Adding %s executable to package", d.Name())
	rc, err := d.Binary()
	if err != nil {
		return err
	}

	if err := p.fs.AddBin(rc, d.Name()); err != nil {
		return err
	}
	rc.Close()

	p.logger.Infof("Adding required images for %s to package", d.Name())
	imgMap, err := d.Images(ctx)
	if err != nil {
		return err
	}

	err = p.pkgImages(ctx, imgMap)
	if err != nil {
		return err
	}

	p.logger.Successf("Finished packaging %s components", d.Name())
	return nil
}

func (p pkg) PackageImages(ctx context.Context, imgs ...string) error {
	p.logger.Infof("Packaging %d user defined images", len(imgs))
	imgMap, err := images.ResolveRemoteRefs(imgs...)
	if err != nil {
		return err
	}

	if err := p.pkgImages(ctx, imgMap); err != nil {
		return err
	}

	p.logger.Successf("Finished packaging %d user defined images", len(imgs))
	return nil
}

//TODO: Add this to PackageDriver?
func (p pkg) PackageFleet(ctx context.Context, fl v1alpha1.Fleet) error {
	p.logger.Infof("Packaging fleet components")

	imgMap, err := images.MapImager(fl)
	if err != nil {
		return err
	}

	if err := p.pkgImages(ctx, imgMap); err != nil {
		return err
	}

	p.logger.Infof("Adding fleet crds to package")
	if err := p.fs.AddChart(fl.CRDChart(), fl.Version); err != nil {
		return err
	}

	p.logger.Infof("Adding fleet to package")
	if err := p.fs.AddChart(fl.Chart(), fl.Version); err != nil {
		return err
	}

	p.logger.Successf("Finished packaging fleet components")
	return nil
}

//pkgImages is a helper function to loop through an image map and add it to a layout
func (p pkg) pkgImages(ctx context.Context, imgMap map[name.Reference]v1.Image) error {
	var i int
	for ref, im := range imgMap {
		p.logger.Infof("Packaging image (%d/%d): %s", i+1, len(imgMap), ref.Name())
		if err := p.fs.AddImage(ref, im); err != nil {
			return err
		}
		i++
	}
	return nil
}
47	pkg/store/add.go	Normal file
@@ -0,0 +1,47 @@
package store

import (
	"context"

	"github.com/rancherfederal/hauler/pkg/content"
	"github.com/rancherfederal/hauler/pkg/log"
)

type addOptions struct {
	repo string
}

type AddOption func(*addOptions)

func makeAddOptions(opts ...AddOption) addOptions {
	opt := addOptions{}
	for _, o := range opts {
		o(&opt)
	}
	return opt
}

func (s *Store) Add(ctx context.Context, oci content.Oci, opts ...AddOption) error {
	l := log.FromContext(ctx)
	opt := makeAddOptions(opts...)

	if err := s.precheck(); err != nil {
		return err
	}

	if opt.repo == "" {
	}

	if err := oci.Copy(ctx, s.registryURL()); err != nil {
		return err
	}

	_ = l
	return nil
}

func OverrideRepo(r string) AddOption {
	return func(opts *addOptions) {
		opts.repo = r
	}
}
192
pkg/store/store.go
Normal file
192
pkg/store/store.go
Normal file
@@ -0,0 +1,192 @@
|
||||
package store
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"regexp"
|
||||
"time"
|
||||
|
||||
"github.com/distribution/distribution/v3/configuration"
|
||||
dcontext "github.com/distribution/distribution/v3/context"
|
||||
"github.com/distribution/distribution/v3/registry/handlers"
|
||||
_ "github.com/distribution/distribution/v3/registry/storage/driver/filesystem"
|
||||
"github.com/google/go-containerregistry/pkg/name"
|
||||
"github.com/sirupsen/logrus"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
|
||||
"github.com/rancherfederal/hauler/pkg/content"
|
||||
)
|
||||
|
||||
var (
|
||||
httpRegex = regexp.MustCompile("https?://")
|
||||
|
||||
contents = make(map[metav1.TypeMeta]content.Oci)
|
||||
)
|
||||
|
||||
// Store is a simple wrapper around distribution/distribution to enable hauler's use case
|
||||
type Store struct {
|
||||
DataDir string
|
||||
DefaultRepository string
|
||||
|
||||
config *configuration.Configuration
|
||||
handler http.Handler
|
||||
|
||||
server *httptest.Server
|
||||
}
|
||||
|
||||
// NewStore creates a new registry store, designed strictly for use within haulers embedded operations and _not_ for serving
|
||||
func NewStore(ctx context.Context, dataDir string) *Store {
|
||||
cfg := &configuration.Configuration{
|
||||
Version: "0.1",
|
||||
Storage: configuration.Storage{
|
||||
"cache": configuration.Parameters{"blobdescriptor": "inmemory"},
|
||||
"filesystem": configuration.Parameters{"rootdirectory": dataDir},
|
||||
},
|
||||
}
|
||||
cfg.Log.Level = "panic"
|
||||
cfg.HTTP.Headers = http.Header{"X-Content-Type-Options": []string{"nosniff"}}
|
||||
|
||||
handler := setupHandler(ctx, cfg)
|
||||
|
||||
return &Store{
|
||||
DataDir: dataDir,
|
||||
|
||||
// TODO: Opt this
|
||||
DefaultRepository: "hauler",
|
||||
|
||||
config: cfg,
|
||||
handler: handler,
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: Refactor to a feature register model for content types
|
||||
func Register(gvk metav1.TypeMeta, oci content.Oci) {
|
||||
if oci == nil {
|
||||
panic("store: Register content is nil")
|
||||
}
|
||||
if _, dup := contents[gvk]; dup {
|
||||
panic("store: Register called twice for content " + gvk.String())
|
||||
}
|
||||
contents[gvk] = oci
|
||||
}
|
||||
|
||||
// Open will create a new server and start it, it's up to the consumer to close it
|
||||
func (s *Store) Open() *httptest.Server {
|
||||
server := httptest.NewServer(s.handler)
|
||||
s.server = server
|
||||
return server
|
||||
}
|
||||
|
||||
func (s *Store) Close() {
|
||||
s.server.Close()
|
||||
s.server = nil
|
||||
return
|
||||
}
|
||||
|
||||
// Remove TODO: will remove an oci artifact from the registry store
|
||||
func (s *Store) Remove() error {
|
||||
if err := s.precheck(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func RelocateReference(ref name.Reference, registry string) (name.Reference, error) {
|
||||
var sep string
|
||||
if _, err := name.NewDigest(ref.Name()); err == nil {
|
||||
sep = "@"
|
||||
} else {
|
||||
sep = ":"
|
||||
}
|
||||
return name.ParseReference(
|
||||
fmt.Sprintf("%s%s%s", ref.Context().RepositoryStr(), sep, ref.Identifier()),
|
||||
name.WithDefaultRegistry(registry),
|
||||
)
|
||||
}
// RelocateReference rewrites a reference to point at this store's registry.
func (s *Store) RelocateReference(ref name.Reference) name.Reference {
	// The reference was already parsed once, so re-parsing it against the
	// store's registry is not expected to fail and the error is discarded.
	relocatedRef, _ := RelocateReference(ref, s.registryURL())
	return relocatedRef
}

// precheck checks whether the server is appropriately started and errors if it is not;
// used to safely run Store operations without fear of panics.
func (s *Store) precheck() error {
	if s.server == nil || s.server.URL == "" {
		return fmt.Errorf("server is not started yet")
	}
	return nil
}

// registryURL returns the registry's URL without the protocol, suitable for image relocation operations.
func (s *Store) registryURL() string {
	return httpRegex.ReplaceAllString(s.server.URL, "")
}
// alive short-circuits requests to path with a 200 and delegates everything else to handler.
func alive(path string, handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == path {
			w.Header().Set("Cache-Control", "no-cache")
			w.WriteHeader(http.StatusOK)
			return
		}
		handler.ServeHTTP(w, r)
	})
}
// setupHandler sets up the registry handler.
func setupHandler(ctx context.Context, config *configuration.Configuration) http.Handler {
	ctx, _ = configureLogging(ctx, config)

	app := handlers.NewApp(ctx, config)
	app.RegisterHealthChecks()
	handler := alive("/", app)

	return handler
}

func configureLogging(ctx context.Context, cfg *configuration.Configuration) (context.Context, context.CancelFunc) {
	logrus.SetLevel(logLevel(cfg.Log.Level))

	// only the text formatter is supported; any other configured formatter is ignored
	logrus.SetFormatter(&logrus.TextFormatter{
		TimestampFormat: time.RFC3339Nano,
	})

	if len(cfg.Log.Fields) > 0 {
		var fields []interface{}
		for k := range cfg.Log.Fields {
			fields = append(fields, k)
		}

		ctx = dcontext.WithValues(ctx, cfg.Log.Fields)
		ctx = dcontext.WithLogger(ctx, dcontext.GetLogger(ctx, fields...))
	}

	dcontext.SetDefaultLogger(dcontext.GetLogger(ctx))
	return context.WithCancel(ctx)
}

// logLevel parses a configured log level, falling back to Info on error.
func logLevel(level configuration.Loglevel) logrus.Level {
	l, err := logrus.ParseLevel(string(level))
	if err != nil {
		l = logrus.InfoLevel
		logrus.Warnf("error parsing log level %q: %v, using %q", level, err, l)
	}
	return l
}
@@ -1,92 +0,0 @@
package util

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"github.com/mholt/archiver/v3"
)

type dir struct {
	Path       string
	Permission os.FileMode
}

type FSLayout struct {
	Root string
	dirs []dir
}

// Layout matches the method set of *FSLayout.
type Layout interface {
	Create() error
	AddDir(relPath string, perm os.FileMode)
	Archive(a *archiver.TarZstd, name string) error
	Remove() error
}

func NewLayout(root string) *FSLayout {
	absRoot, _ := filepath.Abs(root)
	return &FSLayout{
		Root: absRoot,
		dirs: nil,
	}
}

// Create will create the FSLayout at the FSLayout.Root.
func (l FSLayout) Create() error {
	for _, dir := range l.dirs {
		fullPath := filepath.Join(l.Root, dir.Path)
		if err := os.MkdirAll(fullPath, dir.Permission); err != nil {
			return err
		}
	}
	return nil
}

// AddDir will add a folder to the FSLayout.
func (l *FSLayout) AddDir(relPath string, perm os.FileMode) {
	l.dirs = append(l.dirs, dir{
		Path:       relPath,
		Permission: perm,
	})
}

// Remove deletes the layout's root directory and everything beneath it.
func (l FSLayout) Remove() error {
	return os.RemoveAll(l.Root)
}

// Archive archives the layout's contents into <name>.<extension> in the
// caller's working directory, temporarily chdir'ing into the layout root.
func (l FSLayout) Archive(a *archiver.TarZstd, name string) error {
	cwd, err := os.Getwd()
	if err != nil {
		return err
	}
	if err := os.Chdir(l.Root); err != nil {
		return err
	}
	defer os.Chdir(cwd)

	archiveFile := filepath.Join(cwd, fmt.Sprintf("%s.%s", name, a.String()))
	return a.Archive([]string{"."}, archiveFile)
}

// LinesToSlice reads r line by line into a slice; the caller owns closing r.
func LinesToSlice(r io.ReadCloser) ([]string, error) {
	var lines []string

	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}

	if err := scanner.Err(); err != nil {
		return nil, err
	}
	return lines, nil
}
64
testdata/contents.yaml
vendored
Normal file
@@ -0,0 +1,64 @@
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Files
metadata:
  name: myfile
spec:
  files:
    # hauler can save/redistribute files on disk
    - ref: testdata/contents.yaml

    # when directories are specified, they will be archived and stored as a file
    - ref: testdata/

    # hauler can also fetch remote content, and will "smartly" identify filenames _when possible_
    # filename below = "k3s-airgap-images-arm64.tar.zst"
    - ref: "https://github.com/k3s-io/k3s/releases/download/v1.22.2%2Bk3s2/k3s-airgap-images-arm64.tar.zst"

    # when it is not possible to identify a filename from remote content, a name should be specified
    - ref: https://get.k3s.io
      name: get-k3s.sh

---
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Images
metadata:
  name: myimage
spec:
  images:
    # images can be referenced by their tag
    - ref: rancher/k3s:v1.22.2-k3s2

    # or by their digest
    - ref: registry@sha256:42043edfae481178f07aa077fa872fcc242e276d302f4ac2026d9d2eb65b955f

---
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Charts
metadata:
  name: mychart
spec:
  charts:
    # charts are also fetched and served as OCI content (currently experimental in helm)
    # HELM_EXPERIMENTAL_OCI=1 helm chart pull <hauler-registry>/hauler/rancher:2.6.2
    - name: rancher
      repoURL: https://releases.rancher.com/server-charts/latest
      version: 2.6.2

---
apiVersion: content.hauler.cattle.io/v1alpha1
kind: Driver
metadata:
  name: mydriver
spec:
  type: k3s
  version: v1.22.2+k3s2

---
apiVersion: collection.hauler.cattle.io/v1alpha1
kind: K3s
metadata:
  name: mycollection
spec:
  version: 0.1.0