Mirror of https://github.com/hauler-dev/hauler.git (synced 2026-03-04 02:31:29 +00:00)

Compare commits: 3 Commits (`chunk-the-...` → `v1.4.2-rc.`)
| Author | SHA1 | Date |
|---|---|---|
| | a4b16c723d | |
| | 666d220d6c | |
| | 4ed7504264 | |
17 .github/workflows/tests.yaml (vendored)
@@ -250,13 +250,6 @@ jobs:
          hauler store save --filename store.tar.zst
          # verify via save with filename and platform (amd64)
          hauler store save --filename store-amd64.tar.zst --platform linux/amd64
          # verify via save with chunk-size (splits into haul-chunked_0.tar.zst, haul-chunked_1.tar.zst, ...)
          hauler store save --filename haul-chunked.tar.zst --chunk-size 50M
          # verify chunk files exist and original is removed
          ls haul-chunked_*.tar.zst
          ! test -f haul-chunked.tar.zst
          # verify at least two chunks were produced
          [ $(ls haul-chunked_*.tar.zst | wc -l) -ge 2 ]

      - name: Remove Hauler Store Contents
        run: |
@@ -276,14 +269,6 @@ jobs:
          hauler store load --filename store.tar.zst --tempdir /opt
          # verify via load with filename and platform (amd64)
          hauler store load --filename store-amd64.tar.zst
          # verify via load from chunks using explicit first chunk
          rm -rf store
          hauler store load --filename haul-chunked_0.tar.zst
          hauler store info
          # verify via load from chunks using base filename (auto-detect)
          rm -rf store
          hauler store load --filename haul-chunked.tar.zst
          hauler store info

      - name: Verify Hauler Store Contents
        run: |
@@ -306,7 +291,7 @@ jobs:

      - name: Remove Hauler Store Contents
        run: |
          rm -rf store haul.tar.zst store.tar.zst store-amd64.tar.zst haul-chunked_*.tar.zst
          rm -rf store haul.tar.zst store.tar.zst store-amd64.tar.zst
          hauler store info

      - name: Verify - hauler store sync
144 DEVELOPMENT.md (Normal file)
@@ -0,0 +1,144 @@
# Development Guide

This document covers how to build Hauler locally and how the project's branching strategy works. It's intended for contributors making code changes or maintainers managing releases.

---

## Local Build

### Prerequisites

- **Go** — check `go.mod` for the minimum required version
- **Make**
- **Docker** (optional, for container image builds)
- **Git**

### Clone the Repository

```bash
git clone https://github.com/hauler-dev/hauler.git
cd hauler
```

### Build the Binary

Using Make:

```bash
make build
```

Or directly with Go:

```bash
go build -o hauler ./cmd/hauler
```

The compiled binary will be output to the project root. You can run it directly:

```bash
./hauler version
```

### Run Tests

```bash
make test
```

Or with Go:

```bash
go test ./...
```

### Useful Tips

- The `--store` flag defaults to `./store` in the current working directory during local testing, so running `./hauler store add ...` from the project root is safe and self-contained. Use `rm -rf store` in the working directory to clear it.
- Set `--log-level debug` when developing to get verbose output.
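
A hypothetical local loop combining both tips (assumes the binary was already built with `make build`; the `busybox` image reference is only an example):

```shell
# exercise the default ./store location with verbose logging
./hauler store add image busybox --log-level debug

# inspect what landed in the store
./hauler store info

# reset the working directory between experiments
rm -rf store
```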

---

## Branching Strategy

Hauler uses a **main-first, release branch** model. All development flows through `main`, and release branches are maintained for each minor version to support patching older release lines in parallel.

### Branch Structure

```
main          ← source of truth, all development targets here
release/1.3   ← 1.3.x patch line
release/1.4   ← 1.4.x patch line
```

Release tags (`v1.4.1`, `v1.3.2`, etc.) are always cut from the corresponding `release/X.Y` branch, never directly from `main`.

### Where to Target Your Changes

All pull requests should target `main` by default. Maintainers are responsible for cherry-picking fixes onto release branches as part of the patch release process.

| Change type | Target branch |
|---|---|
| New features | `main` |
| Bug fixes | `main` |
| Security patches | `main` (expedited backport to affected branches) |
| Release-specific fix (see below) | `release/X.Y` directly |

### Creating a New Release Branch

When `main` is ready to ship a new minor version, a release branch is cut:

```bash
git checkout main
git pull origin main
git checkout -b release/1.4
git push origin release/1.4
```

The first release is then tagged from that branch:

```bash
git tag v1.4.0
git push origin v1.4.0
```

Development on `main` immediately continues toward the next minor.

### Backporting a Fix to a Release Branch

When a bug fix merged to `main` also needs to apply to an active release line, cherry-pick the commit onto the release branch and open a PR targeting it:

```bash
git checkout release/1.3
git pull origin release/1.3
git checkout -b backport/fix-description-to-1.3
git cherry-pick <commit-sha>
git push origin backport/fix-description-to-1.3
```

Open a PR targeting `release/1.3` and reference the original PR in the description. If the cherry-pick doesn't apply cleanly, resolve conflicts and note them in the PR.

### Fixes That Only Apply to an Older Release Line

Sometimes a bug exists in an older release but the relevant code has been removed or significantly changed in `main` — making a forward-port unnecessary or nonsensical. In these cases, it's acceptable to open a PR directly against the affected `release/X.Y` branch.

When doing this, the PR description must explain:

- Which versions are affected
- Why the fix does not apply to `main` or newer release lines (e.g., "this code path was removed in 1.4 when X was refactored")

This keeps the history auditable and prevents future contributors from wondering why the fix never made it forward.

### Summary

```
┌─────────────────────────────────────────► main (next minor)
│
│  cherry-pick / backport PRs
│  ─────────────────────────► release/1.4 (v1.4.0, v1.4.1 ...)
│
│  ─────────────────────────► release/1.3 (v1.3.0, v1.3.1 ...)
│
│  direct fix (older-only bug)
│  ─────────────────────────► release/1.2 (critical fixes only)
```
@@ -521,7 +521,7 @@ func storeChart(ctx context.Context, s *store.Layout, cfg v1.Chart, opts *flags.
        var depCfg v1.Chart
        var err error

        if strings.HasPrefix(dep.Repository, "file://") {
        if strings.HasPrefix(dep.Repository, "file://") || dep.Repository == "" {
            subchartPath := filepath.Join(chartPath, "charts", dep.Name)

            depCfg = v1.Chart{Name: subchartPath, RepoURL: "", Version: ""}
@@ -39,9 +39,8 @@ func LoadCmd(ctx context.Context, o *flags.LoadOpts, rso *flags.StoreRootOpts, r
    l.Debugf("using temporary directory at [%s]", tempDir)

    for _, fileName := range o.FileName {
        resolved := resolveHaulPath(fileName)
        l.Infof("loading haul [%s] to [%s]", resolved, o.StoreDir)
        err := unarchiveLayoutTo(ctx, resolved, o.StoreDir, tempDir)
        l.Infof("loading haul [%s] to [%s]", fileName, o.StoreDir)
        err := unarchiveLayoutTo(ctx, fileName, o.StoreDir, tempDir)
        if err != nil {
            return err
        }
@@ -86,13 +85,6 @@ func unarchiveLayoutTo(ctx context.Context, haulPath string, dest string, tempDi
        }
    }

    // reassemble chunk files if haulPath matches the chunk naming pattern
    joined, err := archives.JoinChunks(ctx, haulPath, tempDir)
    if err != nil {
        return err
    }
    haulPath = joined

    if err := archives.Unarchive(ctx, haulPath, tempDir); err != nil {
        return err
    }
@@ -147,29 +139,6 @@ func unarchiveLayoutTo(ctx context.Context, haulPath string, dest string, tempDi
    return err
}

// resolveHaulPath returns path as-is if it exists or is a URL. If the file is
// not found, it globs for chunk files matching <base>_*<ext> in the same
// directory and returns the first match so JoinChunks can reassemble them.
func resolveHaulPath(path string) string {
    if strings.HasPrefix(path, "http://") || strings.HasPrefix(path, "https://") {
        return path
    }
    if _, err := os.Stat(path); err == nil {
        return path
    }
    base := path
    ext := ""
    for filepath.Ext(base) != "" {
        ext = filepath.Ext(base) + ext
        base = strings.TrimSuffix(base, filepath.Ext(base))
    }
    matches, err := filepath.Glob(base + "_*" + ext)
    if err != nil || len(matches) == 0 {
        return path
    }
    return matches[0]
}

func clearDir(path string) error {
    entries, err := os.ReadDir(path)
    if err != nil {
@@ -4,13 +4,10 @@ import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "os"
    "path"
    "path/filepath"
    "slices"
    "strconv"
    "strings"

    referencev3 "github.com/distribution/distribution/v3/reference"
    "github.com/google/go-containerregistry/pkg/name"
@@ -75,47 +72,10 @@ func SaveCmd(ctx context.Context, o *flags.SaveOpts, rso *flags.StoreRootOpts, r
        return err
    }

    if o.ChunkSize != "" {
        maxBytes, err := parseChunkSize(o.ChunkSize)
        if err != nil {
            return err
        }
        chunks, err := archives.SplitArchive(ctx, absOutputfile, maxBytes)
        if err != nil {
            return err
        }
        for _, c := range chunks {
            l.Infof("saving store [%s] to chunk [%s]", o.StoreDir, filepath.Base(c))
        }
    } else {
        l.Infof("saving store [%s] to archive [%s]", o.StoreDir, o.FileName)
    }

    l.Infof("saving store [%s] to archive [%s]", o.StoreDir, o.FileName)
    return nil
}

// parseChunkSize parses a human-readable byte size string (e.g. "1G", "500M", "2GB")
// into a byte count. Suffixes are treated as binary units (1K = 1024).
func parseChunkSize(s string) (int64, error) {
    units := map[string]int64{
        "K": 1 << 10, "KB": 1 << 10,
        "M": 1 << 20, "MB": 1 << 20,
        "G": 1 << 30, "GB": 1 << 30,
        "T": 1 << 40, "TB": 1 << 40,
    }
    s = strings.ToUpper(strings.TrimSpace(s))
    for suffix, mult := range units {
        if strings.HasSuffix(s, suffix) {
            n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
            if err != nil {
                return 0, fmt.Errorf("invalid chunk size %q", s)
            }
            return n * mult, nil
        }
    }
    return strconv.ParseInt(s, 10, 64)
}

type exports struct {
    digests []string
    records map[string]tarball.Descriptor
10 go.mod
@@ -316,10 +316,10 @@ require (
    go.mongodb.org/mongo-driver v1.17.6 // indirect
    go.opentelemetry.io/auto/sdk v1.2.1 // indirect
    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 // indirect
    go.opentelemetry.io/otel v1.39.0 // indirect
    go.opentelemetry.io/otel/metric v1.39.0 // indirect
    go.opentelemetry.io/otel/sdk v1.39.0 // indirect
    go.opentelemetry.io/otel/trace v1.39.0 // indirect
    go.opentelemetry.io/otel v1.40.0 // indirect
    go.opentelemetry.io/otel/metric v1.40.0 // indirect
    go.opentelemetry.io/otel/sdk v1.40.0 // indirect
    go.opentelemetry.io/otel/trace v1.40.0 // indirect
    go.uber.org/multierr v1.11.0 // indirect
    go.uber.org/zap v1.27.1 // indirect
    go.yaml.in/yaml/v2 v2.4.3 // indirect
@@ -329,7 +329,7 @@ require (
    golang.org/x/mod v0.31.0 // indirect
    golang.org/x/net v0.48.0 // indirect
    golang.org/x/oauth2 v0.34.0 // indirect
    golang.org/x/sys v0.39.0 // indirect
    golang.org/x/sys v0.40.0 // indirect
    golang.org/x/term v0.38.0 // indirect
    golang.org/x/text v0.32.0 // indirect
    golang.org/x/time v0.14.0 // indirect
24 go.sum
@@ -1027,22 +1027,22 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.6
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0/go.mod h1:fvPi2qXDqFs8M4B4fmJhE92TyQs9Ydjlg3RvfUp+NbQ=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 h1:f0cb2XPmrqn4XMy9PNliTgRKJgS5WcL/u0/WRYGz4t0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0/go.mod h1:vnakAaFckOMiMtOIhFI2MNH4FYrZzXCYxmb1LlhoGz8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0 h1:in9O8ESIOlwJAEGTkkf34DesGRAc/Pn8qJ7k3r/42LM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0/go.mod h1:Rp0EXBm5tfnv0WL+ARyO/PHBEaEAT8UUHQ6AGJcSq6c=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 h1:Ckwye2FpXkYgiHX7fyVrN1uA/UYd9ounqqTuSNAv0k4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0/go.mod h1:teIFJh5pW2y+AN7riv6IBPX2DuesS3HgP39mwOspKwU=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.step.sm/crypto v0.75.0 h1:UAHYD6q6ggYyzLlIKHv1MCUVjZIesXRZpGTlRC/HSHw=
@@ -1203,8 +1203,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -10,7 +10,6 @@ type SaveOpts struct {
    FileName                string
    Platform                string
    ContainerdCompatibility bool
    ChunkSize               string
}

func (o *SaveOpts) AddFlags(cmd *cobra.Command) {
@@ -19,6 +18,5 @@ func (o *SaveOpts) AddFlags(cmd *cobra.Command) {
    f.StringVarP(&o.FileName, "filename", "f", consts.DefaultHaulerArchiveName, "(Optional) Specify the name of outputted haul")
    f.StringVarP(&o.Platform, "platform", "p", "", "(Optional) Specify the platform for runtime imports... i.e. linux/amd64 (unspecified implies all)")
    f.BoolVar(&o.ContainerdCompatibility, "containerd", false, "(Optional) Enable import compatibility with containerd... removes oci-layout from the haul")
    f.StringVar(&o.ChunkSize, "chunk-size", "", "(Optional) Split the output archive into chunks of the specified size (e.g. 1G, 500M, 2048M)")
}
@@ -3,10 +3,8 @@ package archives

import (
    "context"
    "fmt"
    "io"
    "os"
    "path/filepath"
    "strings"

    "github.com/mholt/archives"
    "hauler.dev/go/hauler/pkg/log"
@@ -104,85 +102,3 @@ func Archive(ctx context.Context, dir, outfile string, compression archives.Comp
    l.Debugf("archive created successfully [%s]", outfile)
    return nil
}

// SplitArchive splits an existing archive into chunks of at most maxBytes each.
// Chunks are named <base>_0<ext>, <base>_1<ext>, ... where base is the archive
// path with all extensions stripped, and ext is the compound extension (e.g. .tar.zst).
// The original archive is removed after successful splitting.
func SplitArchive(ctx context.Context, archivePath string, maxBytes int64) ([]string, error) {
    l := log.FromContext(ctx)

    // derive base path and compound extension by stripping all extensions
    base := archivePath
    ext := ""
    for filepath.Ext(base) != "" {
        ext = filepath.Ext(base) + ext
        base = strings.TrimSuffix(base, filepath.Ext(base))
    }

    f, err := os.Open(archivePath)
    if err != nil {
        return nil, fmt.Errorf("failed to open archive for splitting: %w", err)
    }

    var chunks []string
    buf := make([]byte, 32*1024)
    chunkIdx := 0
    var written int64
    var outf *os.File

    for {
        if outf == nil {
            chunkPath := fmt.Sprintf("%s_%d%s", base, chunkIdx, ext)
            outf, err = os.Create(chunkPath)
            if err != nil {
                f.Close()
                return nil, fmt.Errorf("failed to create chunk %d: %w", chunkIdx, err)
            }
            chunks = append(chunks, chunkPath)
            l.Debugf("creating chunk [%s]", chunkPath)
            written = 0
            chunkIdx++
        }

        remaining := maxBytes - written
        readSize := int64(len(buf))
        if readSize > remaining {
            readSize = remaining
        }

        n, readErr := f.Read(buf[:readSize])
        if n > 0 {
            if _, writeErr := outf.Write(buf[:n]); writeErr != nil {
                outf.Close()
                f.Close()
                return nil, fmt.Errorf("failed to write to chunk: %w", writeErr)
            }
            written += int64(n)
        }

        if readErr == io.EOF {
            outf.Close()
            outf = nil
            break
        }
        if readErr != nil {
            outf.Close()
            f.Close()
            return nil, fmt.Errorf("failed to read archive: %w", readErr)
        }

        if written >= maxBytes {
            outf.Close()
            outf = nil
        }
    }

    f.Close()
    if err := os.Remove(archivePath); err != nil {
        return nil, fmt.Errorf("failed to remove original archive after splitting: %w", err)
    }

    l.Infof("split archive [%s] into %d chunk(s)", filepath.Base(archivePath), len(chunks))
    return chunks, nil
}
@@ -6,9 +6,6 @@ import (
    "io"
    "os"
    "path/filepath"
    "regexp"
    "sort"
    "strconv"
    "strings"

    "github.com/mholt/archives"
@@ -159,77 +156,3 @@ func Unarchive(ctx context.Context, tarball, dst string) error {
    l.Infof("unarchiving completed successfully")
    return nil
}

var chunkSuffixRe = regexp.MustCompile(`^(.+)_(\d+)$`)

// chunkInfo checks whether archivePath matches the chunk naming pattern (<base>_N<ext>).
// Returns the base path (without index), compound extension, numeric index, and whether it matched.
func chunkInfo(archivePath string) (base, ext string, index int, ok bool) {
    dir := filepath.Dir(archivePath)
    name := filepath.Base(archivePath)

    // strip compound extension (e.g. .tar.zst)
    nameBase := name
    nameExt := ""
    for filepath.Ext(nameBase) != "" {
        nameExt = filepath.Ext(nameBase) + nameExt
        nameBase = strings.TrimSuffix(nameBase, filepath.Ext(nameBase))
    }

    m := chunkSuffixRe.FindStringSubmatch(nameBase)
    if m == nil {
        return "", "", 0, false
    }

    idx, _ := strconv.Atoi(m[2])
    return filepath.Join(dir, m[1]), nameExt, idx, true
}

// JoinChunks detects whether archivePath is a chunk file and, if so, finds all
// sibling chunks, concatenates them in numeric order into a single file in tempDir,
// and returns the path to the joined file. If archivePath is not a chunk, it is
// returned unchanged.
func JoinChunks(ctx context.Context, archivePath, tempDir string) (string, error) {
    l := log.FromContext(ctx)

    base, ext, _, ok := chunkInfo(archivePath)
    if !ok {
        return archivePath, nil
    }

    matches, err := filepath.Glob(base + "_*" + ext)
    if err != nil || len(matches) == 0 {
        return archivePath, nil
    }

    sort.Slice(matches, func(i, j int) bool {
        _, _, idxI, _ := chunkInfo(matches[i])
        _, _, idxJ, _ := chunkInfo(matches[j])
        return idxI < idxJ
    })

    l.Debugf("joining %d chunk(s) for [%s]", len(matches), base)

    joinedPath := filepath.Join(tempDir, filepath.Base(base)+ext)
    outf, err := os.Create(joinedPath)
    if err != nil {
        return "", fmt.Errorf("failed to create joined archive: %w", err)
    }
    defer outf.Close()

    for _, chunk := range matches {
        l.Debugf("joining chunk [%s]", chunk)
        cf, err := os.Open(chunk)
        if err != nil {
            return "", fmt.Errorf("failed to open chunk [%s]: %w", chunk, err)
        }
        if _, err := io.Copy(outf, cf); err != nil {
            cf.Close()
            return "", fmt.Errorf("failed to copy chunk [%s]: %w", chunk, err)
        }
        cf.Close()
    }

    l.Infof("joined %d chunk(s) into [%s]", len(matches), filepath.Base(joinedPath))
    return joinedPath, nil
}
BIN testdata/chart-with-file-dependency-chart-1.0.0.tgz (vendored)
Binary file not shown.