Compare commits

..

85 Commits

Author SHA1 Message Date
M. Mert Yildiran
ba1254f7e9 🔖 Bump the Helm chart version to 52.3.68 2024-06-17 04:39:02 +03:00
Alon Girmonsky
df1915cce6 Feature update bpf override (#1551)
* 🔧 Set worker BPF override from config

* 🔧 Disable `front` BPF override if capture is not `af_packet`

* feature condition change

Extend the feature visibility condition from explicitly using af_packet to not explicitly using ebpf, thereby supporting all capture methods other than ebpf.

* reversing the logic

fixing the logic from the previous change, as it was reversed.

---------

Co-authored-by: tiptophelmet <serhii.ponomarenko.jobs@gmail.com>
2024-06-14 17:33:10 -07:00
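
As a rough usage sketch of the BPF override added here (hedged: the Helm value is `tap.bpfOverride` per the chart changes further down, and the filter expression below is only an illustrative libpcap-style example):
```shell
# Install with a worker capture filter supplied through the new tap.bpfOverride value.
# "tcp port 80" is a hypothetical libpcap-style filter used purely for illustration.
helm install kubeshark kubeshark/kubeshark \
  --set tap.bpfOverride="tcp port 80"
```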
M. Mert Yildiran
88ea7120c4 Rename Bpf field of TapConfig struct to BpfOverride 2024-06-12 04:04:11 +03:00
M. Mert Yildiran
f43a61f891 Add Bpf field to TapConfig struct 2024-06-12 04:02:36 +03:00
Alon Girmonsky
067875d544 Merge branch 'master' of github.com:kubeshark/kubeshark 2024-06-08 11:06:34 -07:00
Alon Girmonsky
77ed1fdefe Merge branch 'master' of github.com:kubeshark/kubeshark 2024-06-08 11:06:31 -07:00
Alon Girmonsky
e1f8a24897 Merge branch 'master' of github.com:kubeshark/kubeshark 2024-06-08 10:59:34 -07:00
Alon Girmonsky
40177b8fa9 Fixed a bug in the Helm chart where the sniffer container was not
overridden once an override worker config value was present
2024-06-08 10:58:36 -07:00
M. Mert Yildiran
6d0512fd57 🔧 Update the helm-install and logs- Makefile rules 2024-06-06 04:32:06 +03:00
M. Mert Yildiran
75931d9123 Add Profile field to MiscConfig struct 2024-06-06 04:17:03 +03:00
M. Mert Yildiran
d6143f5a6a Replace DisableCgroupIdResolution field with ResolutionStrategy of MiscConfig struct 2024-06-06 04:07:24 +03:00
M. Mert Yildiran
a58f72ed87 👕 Fix the linter error 2024-06-06 04:01:32 +03:00
M. Mert Yildiran
d22e30f86d 🔖 Bump the Helm chart version to 52.3.62 2024-06-01 16:37:22 +03:00
M. Mert Yildiran
806aa12feb Run make generate-manifests 2024-06-01 16:33:13 +03:00
Alon Girmonsky
30e6d28672 helm clone specific branch
Added instructions on how to clone a specific branch
2024-05-31 21:09:27 -07:00
Alon Girmonsky
ef84f90cd9 Returned ebpf as an explicit option and af-packet as the default option 2024-05-31 21:00:33 -07:00
Alon Girmonsky
b49ca767c9 change kernelModule.enabled to false
Promote AF_PACKET as the default option and make kernelModule an explicit option.
This is a temporary change, until we bring back ebpf as the default option.
2024-05-31 21:00:21 -07:00
Alon Girmonsky
d1cc890cad set kernelModule.enabled default value to false
As a temporary remedy:
1. ebpf and pf-ring become explicit options
2. af_packet becomes the default option
2024-05-31 20:59:51 -07:00
Alon Girmonsky
a9a75533af set kernelModule.enabled default value to false
in support of this PR
2024-05-31 20:59:16 -07:00
Alon Girmonsky
1aef7be3fb helm clone specific branch
Added instructions on how to clone a specific branch
2024-05-28 21:10:32 -07:00
M. Mert Yildiran
c1e812e449 🔖 Bump the Helm chart version to 52.3.59 2024-05-25 05:39:28 +03:00
M. Mert Yildiran
c2b73025f3 Add DisableCgroupIdResolution field to MiscConfig struct 2024-05-25 05:18:41 +03:00
M. Mert Yildiran
af2086a54d Add --grep flag to logs command 2024-05-23 01:20:55 +03:00
Ilya Gavrilov
359623c538 Add /etc/os-release for tracer sysevents (#1542)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-05-17 12:46:37 +01:00
Volodymyr Stoiko
3798bf7a01 Allow watching nodes (#1543)
* Allow watching nodes

* restore
2024-05-17 12:37:45 +01:00
M. Mert Yildiran
487f0b9332 Add OverrideTagConfig field to DockerConfig 2024-05-15 05:39:27 +03:00
M. Mert Yildiran
39c5df64e6 🔧 Add branch and switch-to-branch Makefile rules 2024-05-15 04:37:35 +03:00
guangwu
22a777ac79 fix: close config file (#1531)
Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>
2024-05-06 00:31:34 +03:00
radikaled
06e0def53e Update 14-openshift-security-context-constraints.yaml (#1539)
Add IPC_LOCK to allowedCapabilities; otherwise, kubeshark-worker-daemon-set will not deploy.
2024-05-05 10:45:25 -07:00
M. Mert Yildiran
b88f1c7014 🔖 Bump the Helm chart version to 52.3.0 2024-05-02 23:45:06 +03:00
Alon Girmonsky
f4e2d2f9ca Use eBPF as a traffic capture source by default if cgroup V2 is enabled. (#1540)
This behavior can be reversed by setting `tap.packetCapture` to a specific
source, or by manually adding the `-disable-ebpf` command-line flag
to both the `worker` and the `tracer`
2024-05-01 16:30:03 -07:00
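
A minimal sketch of the reversal described above, assuming `af_packet` is one of the accepted capture sources (it is referenced as the previous default elsewhere in this range):
```shell
# Pin the capture source explicitly instead of letting eBPF be selected by default.
helm install kubeshark kubeshark/kubeshark \
  --set tap.packetCapture=af_packet
```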
M. Mert Yildiran
f017020f62 🔖 Bump the Helm chart version to 52.2.39 2024-04-24 16:05:46 +03:00
Alon Girmonsky
32ffa6132d Fix/disable ebpf by defalt again (#1538)
* Revert "Revert "as eBPF is a significant feature that can impact many users, this PR is meant (#1532)""

This reverts commit 7ab63ec745.

* Added the missing -disable-ebpf parameters to Tracer
2024-04-23 15:31:19 -07:00
Alon Girmonsky
0bb0c4b256 Merge branch 'master' of github.com:kubeshark/kubeshark 2024-04-22 17:08:56 -07:00
Alon Girmonsky
28696d2f5c - Consider cloudLicenseEnabled only if license is empty. If license isn't empty, disregard cloudLicenseEnabled (#1536) 2024-04-22 15:14:06 -07:00
Alon Girmonsky
7ab63ec745 Revert "as eBPF is a significant feature that can impact many users, this PR is meant (#1532)"
This reverts commit 53c3dabcbf.
2024-04-22 14:57:00 -07:00
kindknow
ddabbac317 chore: fix some typos in comments (#1529)
Signed-off-by: kindknow <iturf@sina.com>
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-04-22 13:52:40 -07:00
Serhii Ponomarenko
5a4901f7bd License via authentication (#1526)
* 🔨 Add `cloudLicenseEnabled` helm value

* 🔨 Add `CLOUD_LICENSE_ENABLED` key to `ConfigMap`

* 🔨 Add `REACT_APP_CLOUD_LICENSE_ENABLED` `front` env

* 🎨 Reformat `ConfigStruct`

* 🔧 Set `cloudLicenseEnabled: true` by default

* 🔧 Override auth enabled/type if `cloudLicenseEnabled: true`

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-04-21 15:04:08 -07:00
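
A hedged sketch of overriding the new default (`cloudLicenseEnabled: true`) at install time:
```shell
# Opt out of the cloud-license flow and fall back to the regular auth settings.
helm install kubeshark kubeshark/kubeshark \
  --set cloudLicenseEnabled=false
```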
M. Mert Yildiran
5a322fc58a 🔖 Bump the Helm chart version to 52.2.30 2024-04-19 17:59:51 +03:00
Alon Girmonsky
53c3dabcbf as eBPF is a significant feature that can impact many users, this PR is meant (#1532)
to provide it NOT as the default option, but to require an explicit indication
to use it. To use eBPF instead of AF-PACKET or PF-RING, use:
--set tap.packetCapture=ebpf
2024-04-18 16:28:31 -07:00
Volodymyr Stoiko
6b6915c7ee helm: Use proper labels in selectors (#1528)
* Use proper selectorLabels in daemonset

* Update selector labels in deployments
2024-04-16 09:02:33 -07:00
M. Mert Yildiran
e819759c2d 🎨 Remove a whitespace in 09-worker-daemon-set.yaml 2024-04-16 00:27:18 +03:00
Ilya Gavrilov
b39c5dd5d3 add net capabilities for tracer (#1525)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-04-15 14:20:44 -07:00
M. Mert Yildiran
0f402789f1 Add TcpStreamChannelTimeoutShow field to MiscConfig 2024-04-15 22:46:18 +03:00
Volodymyr Stoiko
d4fade3599 Extend cluster-role permissions (#1527)
* Extend cluster-role permissions

* Format

* upd
2024-04-09 14:20:52 -07:00
Alon Girmonsky
054c4a9e8b Update the readme
Added a link to the live demo portal.
Updated the Homebrew and Helm installation instructions.
2024-03-29 15:44:42 -07:00
M. Mert Yildiran
35c1a88724 🔖 Bump the Helm chart version to 52.2.1 2024-03-28 03:55:03 +03:00
M. Mert Yildiran
fe3f93c91b Revert srvPort to 30001 2024-03-28 03:54:06 +03:00
M. Mert Yildiran
24aa4db0bc Bring back the packet-capture flag 2024-03-28 01:42:16 +03:00
Alon Girmonsky
ef44257942 Update RELEASE.md.TEMPLATE
syntax fix
2024-03-27 12:24:35 -07:00
M. Mert Yildiran
0b58558f70 🔖 Bump the Helm chart version to 52.2.0 2024-03-27 21:50:27 +03:00
Alon Girmonsky
cdd306b890 Update RELEASE.md.TEMPLATE 2024-03-26 15:21:41 -07:00
M. Mert Yildiran
3cc9ff8616 🔖 Bump the Helm chart version to 52.1.77 2024-03-19 18:55:27 +03:00
Serhii Ponomarenko
247498492a Set custom timezone (#1517)
* 🔨 Add timezone config

* 🔨 Update `complete.yaml`

* 📝 Document `timezone` config

* 📝 Update `timezone` config docs

* 📝 Update `timezone` config docs

* 🔥 Remove unused `TIMEZONE` field from `ConfigMap`

* 🦺 Handle empty `tap.timezone` case

* 🔨 Move `timezone` from `.Values.tap` to `.Values`

* 🔨 Add `timezone` field to helm values

* 🔨 Update `complete.yaml`

* 📝 Update `timezone` config docs

* 🔨 Add `TIMEZONE` field to `ConfigMap`

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-03-19 12:06:50 +01:00
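
For illustration, the top-level `timezone` value introduced here can be set at install time; the zone name below is just an example of an IANA identifier:
```shell
# Show front-end timestamps in a specific IANA time zone instead of the local one.
helm install kubeshark kubeshark/kubeshark \
  --set timezone="America/New_York"
```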
Volodymyr Stoiko
867c7058a0 👷 Remove kubeshark tap upgrades (#1519) 2024-03-18 17:32:56 +03:00
M. Mert Yildiran
f1021f61b6 👷 Change the Homebrew job's name 2024-03-15 21:16:14 +03:00
M. Mert Yildiran
9162c4fb64 🔖 Bump the Helm chart version to 52.1.75 2024-03-15 20:39:39 +03:00
Serhii Ponomarenko
e7fc7b791a 🐛 Fix front nginx and network policies ports (#1518)
* 🐛 Use `8080` listen port for front nginx config

* 🐛 Use `8080` ingress port for front/hub network policies
2024-03-14 15:18:24 -07:00
Volodymyr Stoiko
9914183d7d Move brew release into separate job (#1516) 2024-03-11 04:58:22 -07:00
Volodymyr Stoiko
c0751ad4cb Switch to lower ports (#1514)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-03-08 21:02:05 -08:00
Serhii Ponomarenko
0aca81fbcb 🔨 Disable scripting, targeted pods update & recording via ConfigMap keys (#1515)
* 🔨 Add `SCRIPTING_DISABLED` key to `ConfigMap`

* 🔨 Add `TARGETED_PODS_UPDATE_DISABLED` config

* 🔨 Add `RECORDING_DISABLED` key to `ConfigMap`

* 🎨 Reformat `TapConfig`

* 🔨 Update `complete.yaml`
2024-03-08 20:49:07 -08:00
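
A minimal sketch of turning these features off through the corresponding Helm values added in this change:
```shell
# Disable scripting, targeted-pods updates and recording via the new tap.* flags.
helm install kubeshark kubeshark/kubeshark \
  --set tap.scriptingDisabled=true \
  --set tap.targetedPodsUpdateDisabled=true \
  --set tap.recordingDisabled=true
```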
Shunsuke Suzuki
24dccab3e4 fix: fix the asset name of the checksum file for windows/amd64 (#1509)
Pre-built binaries and checksum files are released at GitHub Releases.

https://github.com/kubeshark/kubeshark/releases

But checksum files for windows/amd64 have the following issues.

kubeshark.exe
kubeshark_windows_amd64.sha256

- The executable file name and the checksum file name don't conform to the naming convention
- We can't verify the pre-built binaries with checksum files because the pre-built binary name is different from the actual binary name

```console
$ cat kubeshark_windows_amd64.sha256
ea8fffa952bc8047f493469d024887ed80f966c0d74cf5fb039ea12f71174629  kubeshark_windows_amd64
```

```console
$ sha256sum -c kubeshark_windows_amd64.sha256
sha256sum: kubeshark_windows_amd64: No such file or directory
kubeshark_windows_amd64: FAILED open or read
sha256sum: WARNING: 1 listed file could not be read
```

The cause of these issues is that the pre-built binaries were renamed after the checksum files were generated.

b125860d06/Makefile (L41)
b125860d06/Makefile (L61)

This commit resolves the issue by generating the checksum file after renaming the pre-built binary.

Co-authored-by: Volodymyr Stoiko <me@volodymyrstoiko.com>
2024-03-08 19:32:17 +03:00
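
The corrected ordering boils down to renaming the binary first and producing the checksum afterwards, as the Makefile change below shows; simplified:
```shell
# Rename the Windows binary first, then generate a checksum that matches the released file name.
mv ./bin/kubeshark_windows_amd64 ./bin/kubeshark.exe
cd bin && shasum -a 256 kubeshark.exe > kubeshark.exe.sha256
```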
Volodymyr Stoiko
db607aff16 Add network policies for kubeshark components (#1513)
* Add explicit network policies for kubeshark components

* allow exact ports

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-03-07 06:37:13 -08:00
Volodymyr Stoiko
ec1728ef91 Add kubeshark fork to use for homebrew release (#1512) 2024-03-06 11:02:08 +01:00
M. Mert Yildiran
93de6e8934 🔖 Bump the Helm chart version to 52.1.66 2024-03-06 00:12:02 +03:00
Alon Girmonsky
5998d00e6a Update README.md 2024-03-03 20:45:44 +02:00
Volodymyr Stoiko
afafb2c625 Add homebrew core version update release step (#1511) 2024-02-29 23:32:52 +02:00
M. Mert Yildiran
b125860d06 💚 Set prerelease to false 2024-02-29 01:53:32 +03:00
M. Mert Yildiran
68aabf262f 🔖 Bump the Helm chart version to 52.1.63 2024-02-29 01:45:41 +03:00
M. Mert Yildiran
d279b7272d 💚 Change ssh-key field to token 2024-02-29 01:45:11 +03:00
M. Mert Yildiran
d15e1cca54 🔖 Bump the Helm chart version to 52.1.62 2024-02-29 01:33:28 +03:00
M. Mert Yildiran
d8761e1e31 💚 Fix the secret name for Homebrew repo 2024-02-29 01:32:57 +03:00
M. Mert Yildiran
a9d2cb5ac2 🔖 Bump the Helm chart version to 52.1.61 2024-02-28 23:43:04 +03:00
M. Mert Yildiran
ddcf973e35 Revert "🔖 Bump the Helm chart version to 52.1.61"
This reverts commit b6d1804326.
2024-02-28 23:42:08 +03:00
M. Mert Yildiran
b6d1804326 🔖 Bump the Helm chart version to 52.1.61 2024-02-28 23:39:06 +03:00
Volodymyr Stoiko
6dc12af55b Add namespace prefix to cluster scope resources (#1506)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-02-28 12:14:03 -08:00
Volodymyr Stoiko
d78b0b987a Remove brew version before installing with script (#1503)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-02-28 11:48:43 -08:00
iluxa
9889787833 update comment for IPC_LOCK (#1507) 2024-02-27 11:52:07 -08:00
M. Mert Yildiran
8fe0544175 🔨 Remove CHECKPOINT_RESTORE capability from defaults 2024-02-26 21:40:14 +03:00
Volodymyr Stoiko
09afa1983a Add build-brew target for makefile (#1504) 2024-02-26 09:38:01 -08:00
Alon Girmonsky
669b5cb1f2 Update README.md 2024-02-25 13:55:08 -08:00
Volodymyr Stoiko
25e0949761 Template homebrew formulae (#1502) 2024-02-24 15:06:15 -08:00
Alon Girmonsky
fa07f973c0 Moving the installation script to the project's repo 2024-02-21 15:47:25 -08:00
M. Mert Yildiran
c38bdcd977 🔖 Bump the Helm chart version to 52.1.50 2024-02-20 21:25:10 +03:00
M. Mert Yildiran
51a4165304 🔧 Update the generate-helm-values Makefile rule 2024-02-15 19:54:40 +03:00
33 changed files with 705 additions and 245 deletions

.github/static/kubeshark.rb.tmpl

@@ -0,0 +1,46 @@
# typed: false
# frozen_string_literal: true
class Kubeshark < Formula
desc ""
homepage "https://github.com/kubeshark/kubeshark"
version "${CLEAN_VERSION}"
on_macos do
if Hardware::CPU.arm?
url "https://github.com/kubeshark/kubeshark/releases/download/${FULL_VERSION}/kubeshark_darwin_arm64"
sha256 "${DARWIN_ARM64_SHA256}"
def install
bin.install "kubeshark_darwin_arm64" => "kubeshark"
end
end
if Hardware::CPU.intel?
url "https://github.com/kubeshark/kubeshark/releases/download/${FULL_VERSION}/kubeshark_darwin_amd64"
sha256 "${DARWIN_AMD64_SHA256}"
def install
bin.install "kubeshark_darwin_amd64" => "kubeshark"
end
end
end
on_linux do
if Hardware::CPU.intel?
url "https://github.com/kubeshark/kubeshark/releases/download/${FULL_VERSION}/kubeshark_linux_amd64"
sha256 "${LINUX_AMD64_SHA256}"
def install
bin.install "kubeshark_linux_amd64" => "kubeshark"
end
end
if Hardware::CPU.arm? && Hardware::CPU.is_64_bit?
url "https://github.com/kubeshark/kubeshark/releases/download/${FULL_VERSION}/kubeshark_linux_arm64"
sha256 "${LINUX_ARM64_SHA256}"
def install
bin.install "kubeshark_linux_arm64" => "kubeshark"
end
end
end
end


@@ -14,6 +14,8 @@ jobs:
release:
name: Build and publish a new release
runs-on: ubuntu-latest
outputs:
version: ${{ steps.version.outputs.tag }}
steps:
- name: Check out the repo
uses: actions/checkout@v3
@@ -47,44 +49,19 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
artifacts: "bin/*"
tag: ${{ steps.version.outputs.tag }}
prerelease: true
prerelease: false
bodyFile: 'bin/README.md'
brew-tap:
name: Create Homebrew formulae
runs-on: ubuntu-latest
brew:
name: Publish a new Homebrew formulae
needs: [release]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Bump core homebrew formula
uses: mislav/bump-homebrew-formula-action@v3
with:
fetch-depth: 0
- name: Version
id: version
shell: bash
run: |
{
echo "tag=${GITHUB_REF#refs/*/}"
echo "build_timestamp=$(date +%s)"
echo "branch=${GITHUB_REF#refs/heads/}"
} >> "$GITHUB_OUTPUT"
- name: Fetch all tags
run: git fetch --force --tags
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version-file: 'go.mod'
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v4
with:
distribution: goreleaser
version: ${{ env.GITHUB_REF_NAME }}
args: release --clean
# A PR will be sent to github.com/Homebrew/homebrew-core to update this formula:
formula-name: kubeshark
push-to: kubeshark/homebrew-core
env:
GITHUB_TOKEN: ${{ secrets.HOMEBREW_TOKEN }}
VER: ${{ steps.version.outputs.tag }}
BUILD_TIMESTAMP: ${{ steps.version.outputs.build_timestamp }}
COMMITTER_TOKEN: ${{ secrets.COMMITTER_TOKEN }}


@@ -40,6 +40,21 @@ build-base: ## Build binary (select the platform via GOOS / GOARCH env variables
-o bin/kubeshark_$(SUFFIX) kubeshark.go && \
cd bin && shasum -a 256 kubeshark_${SUFFIX} > kubeshark_${SUFFIX}.sha256
build-brew: ## Build binary for brew/core CI
go build ${GCLFAGS} -ldflags="${LDFLAGS_EXT} \
-X 'github.com/kubeshark/kubeshark/misc.GitCommitHash=$(COMMIT_HASH)' \
-X 'github.com/kubeshark/kubeshark/misc.Branch=$(GIT_BRANCH)' \
-X 'github.com/kubeshark/kubeshark/misc.BuildTimestamp=$(BUILD_TIMESTAMP)' \
-X 'github.com/kubeshark/kubeshark/misc.Platform=$(SUFFIX)' \
-X 'github.com/kubeshark/kubeshark/misc.Ver=$(VER)'" \
-o kubeshark kubeshark.go
build-windows-amd64:
$(MAKE) build GOOS=windows GOARCH=amd64 && \
mv ./bin/kubeshark_windows_amd64 ./bin/kubeshark.exe && \
rm bin/kubeshark_windows_amd64.sha256 && \
cd bin && shasum -a 256 kubeshark.exe > kubeshark.exe.sha256
build-all: ## Build for all supported platforms.
export CGO_ENABLED=0
echo "Compiling for every OS and Platform" && \
@@ -48,8 +63,7 @@ build-all: ## Build for all supported platforms.
$(MAKE) build GOOS=linux GOARCH=arm64 && \
$(MAKE) build GOOS=darwin GOARCH=amd64 && \
$(MAKE) build GOOS=darwin GOARCH=arm64 && \
$(MAKE) build GOOS=windows GOARCH=amd64 && \
mv ./bin/kubeshark_windows_amd64 ./bin/kubeshark.exe && \
$(MAKE) build-windows-amd64 && \
echo "---------" && \
find ./bin -ls
@@ -70,21 +84,39 @@ kubectl-view-kubeshark-resources: ## This command outputs all Kubernetes resourc
./kubectl.sh view-kubeshark-resources
generate-helm-values: ## Generate the Helm values from config.yaml
./bin/kubeshark__ config > ./helm-chart/values.yaml
./bin/kubeshark__ config > ./helm-chart/values.yaml && sed -i 's/^license:.*/license: ""/' helm-chart/values.yaml
generate-manifests: ## Generate the manifests from the Helm chart using default configuration
helm template kubeshark -n default ./helm-chart > ./manifests/complete.yaml
logs-worker:
logs-sniffer:
export LOGS_POD_PREFIX=kubeshark-worker-
export LOGS_CONTAINER='-c sniffer'
export LOGS_FOLLOW=
${MAKE} logs
logs-worker-follow:
logs-sniffer-follow:
export LOGS_POD_PREFIX=kubeshark-worker-
export LOGS_CONTAINER='-c sniffer'
export LOGS_FOLLOW=--follow
${MAKE} logs
logs-tracer:
export LOGS_POD_PREFIX=kubeshark-worker-
export LOGS_CONTAINER='-c tracer'
export LOGS_FOLLOW=
${MAKE} logs
logs-tracer-follow:
export LOGS_POD_PREFIX=kubeshark-worker-
export LOGS_CONTAINER='-c tracer'
export LOGS_FOLLOW=--follow
${MAKE} logs
logs-worker: logs-sniffer
logs-worker-follow: logs-sniffer-follow
logs-hub:
export LOGS_POD_PREFIX=kubeshark-hub
export LOGS_FOLLOW=
@@ -106,7 +138,7 @@ logs-front-follow:
${MAKE} logs
logs:
kubectl logs $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_FOLLOW)
kubectl logs $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_CONTAINER) $(LOGS_FOLLOW)
ssh-node:
kubectl ssh node $$(kubectl get nodes | awk 'END {print $$1}')
@@ -127,22 +159,13 @@ exec:
kubectl exec --stdin --tty $$(kubectl get pods | awk '$$1 ~ /^$(EXEC_POD_PREFIX)/' | awk 'END {print $$1}') -- /bin/sh
helm-install:
cd helm-chart && helm install kubeshark . && cd ..
helm-install-canary:
cd helm-chart && helm install kubeshark . --set tap.docker.tag=canary && cd ..
helm-install-dev:
cd helm-chart && helm install kubeshark . --set tap.docker.tag=dev && cd ..
cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) && cd ..
helm-install-debug:
cd helm-chart && helm install kubeshark . --set tap.debug=true && cd ..
cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) --set tap.debug=true && cd ..
helm-install-debug-canary:
cd helm-chart && helm install kubeshark . --set tap.debug=true --set tap.docker.tag=canary && cd ..
helm-install-debug-dev:
cd helm-chart && helm install kubeshark . --set tap.debug=true --set tap.docker.tag=dev && cd ..
helm-install-profile:
cd helm-chart && helm install kubeshark . --set tap.docker.tag=$(TAG) --set tap.misc.profile=true && cd ..
helm-uninstall:
helm uninstall kubeshark
@@ -150,8 +173,8 @@ helm-uninstall:
proxy:
kubeshark proxy
port-forward-worker:
kubectl port-forward $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_FOLLOW) 30001:30001
port-forward:
kubectl port-forward $$(kubectl get pods | awk '$$1 ~ /^$(POD_PREFIX)/' | awk 'END {print $$1}') $(SRC_PORT):$(DST_PORT)
release:
@cd ../worker && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@@ -163,3 +186,15 @@ release:
@cd helm-chart && cp -r . ../../kubeshark.github.io/charts/chart
@cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
@cd ../kubeshark
branch:
@cd ../worker && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
@cd ../hub && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
@cd ../front && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
@cd ../kubeshark && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
switch-to-branch:
@cd ../worker && git checkout $(name)
@cd ../hub && git checkout $(name)
@cd ../front && git checkout $(name)
@cd ../kubeshark && git checkout $(name)
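
Both rules take the branch name through the `name` variable; a usage sketch (the branch name is hypothetical):
```shell
# Create the same branch across the worker, hub, front and kubeshark repositories...
make branch name=my-feature
# ...and later switch all four checkouts to it.
make switch-to-branch name=my-feature
```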


@@ -22,10 +22,8 @@
<p align="center">
<b>
NEW:
<a href="https://github.com/kubeshark/kubeshark/releases/latest">Version 52.1.30</a>
now available, featuring enhanced
<a href="https://docs.kubeshark.co/en/half_connections">Network Error Detection & Analysis</a>.
Want to see Kubeshark in action, right now? Visit this
<a href="https://demo.kubeshark.co/">live demo deployment</a> of Kubeshark.
</b>
</p>
@@ -51,18 +49,21 @@ Running any of the :point_up: above commands will open the [Web UI](https://docs
### Homebrew
[Homebrew](https://brew.sh/) :beer: users can add Kubeshark formulae with:
```shell
brew tap kubeshark/kubeshark
```
and install Kubeshark CLI with:
[Homebrew](https://brew.sh/) :beer: users install Kubeshark CLI with:
```shell
brew install kubeshark
```
### Helm
Add the helm repository and install the chart:
```shell
helm repo add kubeshark https://helm.kubeshark.co
helm install kubeshark kubeshark/kubeshark
```
## Building From Source
Clone this repository and run `make` command to build it. After the build is complete, the executable can be found at `./bin/kubeshark__`.


@@ -1,5 +1,5 @@
# Kubeshark release _VER_
Kubeshark CHANGELOG is now part of [Kubeshark wiki](https://github.com/kubeshark/kubeshark/wiki/CHANGELOG)
Release notes coming soon ..
## Download Kubeshark for your platform


@@ -22,7 +22,7 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx con
if err != nil {
log.Error().
Err(errormessage.FormatError(err)).
Msg(fmt.Sprintf("Error occured while running K8s proxy. Try setting different port using --%s", proxyPortLabel))
Msg(fmt.Sprintf("Error occurred while running K8s proxy. Try setting different port using --%s", proxyPortLabel))
return
}
@@ -42,7 +42,7 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx con
log.Error().
Str("pod-regex", podRegex.String()).
Err(errormessage.FormatError(err)).
Msg(fmt.Sprintf("Error occured while running port forward. Try setting different port using --%s", proxyPortLabel))
Msg(fmt.Sprintf("Error occurred while running port forward. Try setting different port using --%s", proxyPortLabel))
return
}
@@ -111,7 +111,7 @@ func dumpLogsIfNeeded(ctx context.Context, kubernetesProvider *kubernetes.Provid
}
dotDir := misc.GetDotFolderPath()
filePath := path.Join(dotDir, fmt.Sprintf("%s_logs_%s.zip", misc.Program, time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(ctx, kubernetesProvider, filePath); err != nil {
if err := fsUtils.DumpLogs(ctx, kubernetesProvider, filePath, config.Config.Logs.Grep); err != nil {
log.Error().Err(err).Msg("Failed to dump logs.")
}
}


@@ -30,7 +30,7 @@ var logsCmd = &cobra.Command{
log.Debug().Str("logs-path", config.Config.Logs.FilePath()).Msg("Using this logs path...")
if dumpLogsErr := fsUtils.DumpLogs(ctx, kubernetesProvider, config.Config.Logs.FilePath()); dumpLogsErr != nil {
if dumpLogsErr := fsUtils.DumpLogs(ctx, kubernetesProvider, config.Config.Logs.FilePath(), config.Config.Logs.Grep); dumpLogsErr != nil {
log.Error().Err(dumpLogsErr).Msg("Failed to dump logs.")
}
@@ -47,4 +47,5 @@ func init() {
}
logsCmd.Flags().StringP(configStructs.FileLogsName, "f", defaultLogsConfig.FileStr, fmt.Sprintf("Path for zip file (default current <pwd>\\%s_logs.zip)", misc.Program))
logsCmd.Flags().StringP(configStructs.GrepLogsName, "g", defaultLogsConfig.Grep, "Regexp to do grepping on the logs")
}
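
A hedged usage sketch of the new flag (the pattern is illustrative):
```shell
# Collect logs but keep only the lines matching the regular expression.
kubeshark logs --grep "error"
```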


@@ -132,7 +132,11 @@ func runLicenseRecieverServer() {
log.Info().Msg("Alternatively enter your license key:")
var licenseKey string
fmt.Scanf("%s", &licenseKey)
_, err := fmt.Scanf("%s", &licenseKey)
if err != nil {
log.Error().Err(err).Send()
return
}
updateLicense(licenseKey)
}


@@ -41,7 +41,7 @@ func InitConfig(cmd *cobra.Command) error {
var err error
DebugMode, err = cmd.Flags().GetBool(DebugFlag)
if err != nil {
log.Error().Err(err).Msg(fmt.Sprintf("Can't recieve '%s' flag", DebugFlag))
log.Error().Err(err).Msg(fmt.Sprintf("Can't receive '%s' flag", DebugFlag))
}
if DebugMode {
@@ -146,7 +146,8 @@ func loadConfigFile(config *ConfigStruct, silent bool) error {
} else {
ConfigFilePath = cwdConfig
}
defer reader.Close()
buf, err := io.ReadAll(reader)
if err != nil {
return err


@@ -41,8 +41,6 @@ func CreateDefaultConfig() ConfigStruct {
"SYS_PTRACE",
// DAC_OVERRIDE is required to read /proc/PID/environ
"DAC_OVERRIDE",
// CHECKPOINT_RESTORE is required to readlink /proc/PID/exe (kernel > 5.9)
"CHECKPOINT_RESTORE",
},
KernelModule: []string{
// SYS_MODULE is required to install kernel modules
@@ -55,9 +53,8 @@ func CreateDefaultConfig() ConfigStruct {
"SYS_PTRACE",
// SYS_RESOURCE is required to change rlimits for eBPF
"SYS_RESOURCE",
// CHECKPOINT_RESTORE is required to readlink /proc/PID/exe (kernel > 5.9)
"CHECKPOINT_RESTORE",
// IPC_LOCK is required for ebpf perf rings (kernel > )
// IPC_LOCK is required for ebpf perf buffer allocations beyond a certain buffer size:
// https://github.com/kubeshark/tracer/blob/13e24725ba8b98216dd0e553262e6d9c56dce5fa/main.go#L82)
"IPC_LOCK",
},
},
@@ -90,15 +87,17 @@ type ManifestsConfig struct {
}
type ConfigStruct struct {
Tap configStructs.TapConfig `yaml:"tap" json:"tap"`
Logs configStructs.LogsConfig `yaml:"logs" json:"logs"`
Config configStructs.ConfigConfig `yaml:"config,omitempty" json:"config,omitempty"`
Kube KubeConfig `yaml:"kube" json:"kube"`
DumpLogs bool `yaml:"dumpLogs" json:"dumpLogs" default:"false"`
HeadlessMode bool `yaml:"headless" json:"headless" default:"false"`
License string `yaml:"license" json:"license" default:""`
Scripting configStructs.ScriptingConfig `yaml:"scripting" json:"scripting"`
Manifests ManifestsConfig `yaml:"manifests,omitempty" json:"manifests,omitempty"`
Tap configStructs.TapConfig `yaml:"tap" json:"tap"`
Logs configStructs.LogsConfig `yaml:"logs" json:"logs"`
Config configStructs.ConfigConfig `yaml:"config,omitempty" json:"config,omitempty"`
Kube KubeConfig `yaml:"kube" json:"kube"`
DumpLogs bool `yaml:"dumpLogs" json:"dumpLogs" default:"false"`
HeadlessMode bool `yaml:"headless" json:"headless" default:"false"`
License string `yaml:"license" json:"license" default:""`
CloudLicenseEnabled bool `yaml:"cloudLicenseEnabled" json:"cloudLicenseEnabled" default:"true"`
Scripting configStructs.ScriptingConfig `yaml:"scripting" json:"scripting"`
Manifests ManifestsConfig `yaml:"manifests,omitempty" json:"manifests,omitempty"`
Timezone string `yaml:"timezone" json:"timezone"`
}
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {


@@ -10,10 +10,12 @@ import (
const (
FileLogsName = "file"
GrepLogsName = "grep"
)
type LogsConfig struct {
FileStr string `yaml:"file" json:"file"`
Grep string `yaml:"grep" json:"grep"`
}
func (config *LogsConfig) Validate() error {


@@ -69,11 +69,18 @@ type ProxyConfig struct {
Host string `yaml:"host" json:"host" default:"127.0.0.1"`
}
type OverrideTagConfig struct {
Worker string `yaml:"worker" json:"worker"`
Hub string `yaml:"hub" json:"hub"`
Front string `yaml:"front" json:"front"`
}
type DockerConfig struct {
Registry string `yaml:"registry" json:"registry" default:"docker.io/kubeshark"`
Tag string `yaml:"tag" json:"tag" default:""`
ImagePullPolicy string `yaml:"imagePullPolicy" json:"imagePullPolicy" default:"Always"`
ImagePullSecrets []string `yaml:"imagePullSecrets" json:"imagePullSecrets"`
Registry string `yaml:"registry" json:"registry" default:"docker.io/kubeshark"`
Tag string `yaml:"tag" json:"tag" default:""`
ImagePullPolicy string `yaml:"imagePullPolicy" json:"imagePullPolicy" default:"Always"`
ImagePullSecrets []string `yaml:"imagePullSecrets" json:"imagePullSecrets"`
OverrideTag OverrideTagConfig `yaml:"overrideTag" json:"overrideTag"`
}
type ResourcesConfig struct {
@@ -131,7 +138,7 @@ type CapabilitiesConfig struct {
}
type KernelModuleConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"true"`
Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
Image string `yaml:"image" json:"image" default:"kubeshark/pf-ring-module:all"`
UnloadOnDestroy bool `yaml:"unloadOnDestroy" json:"unloadOnDestroy" default:"false"`
}
@@ -141,44 +148,52 @@ type MetricsConfig struct {
}
type MiscConfig struct {
JsonTTL string `yaml:"jsonTTL" json:"jsonTTL" default:"5m"`
PcapTTL string `yaml:"pcapTTL" json:"pcapTTL" default:"10s"`
PcapErrorTTL string `yaml:"pcapErrorTTL" json:"pcapErrorTTL" default:"60s"`
JsonTTL string `yaml:"jsonTTL" json:"jsonTTL" default:"5m"`
PcapTTL string `yaml:"pcapTTL" json:"pcapTTL" default:"10s"`
PcapErrorTTL string `yaml:"pcapErrorTTL" json:"pcapErrorTTL" default:"60s"`
TrafficSampleRate int `yaml:"trafficSampleRate" json:"trafficSampleRate" default:"100"`
TcpStreamChannelTimeoutMs int `yaml:"tcpStreamChannelTimeoutMs" json:"tcpStreamChannelTimeoutMs" default:"10000"`
TcpStreamChannelTimeoutShow bool `yaml:"tcpStreamChannelTimeoutShow" json:"tcpStreamChannelTimeoutShow" default:"false"`
ResolutionStrategy string `yaml:"resolutionStrategy" json:"resolutionStrategy" default:"auto"`
Profile bool `yaml:"profile" json:"profile" default:"false"`
}
type TapConfig struct {
Docker DockerConfig `yaml:"docker" json:"docker"`
Proxy ProxyConfig `yaml:"proxy" json:"proxy"`
PodRegexStr string `yaml:"regex" json:"regex" default:".*"`
Namespaces []string `yaml:"namespaces" json:"namespaces" default:"[]"`
Release ReleaseConfig `yaml:"release" json:"release"`
PersistentStorage bool `yaml:"persistentStorage" json:"persistentStorage" default:"false"`
PersistentStorageStatic bool `yaml:"persistentStorageStatic" json:"persistentStorageStatic" default:"false"`
EfsFileSytemIdAndPath string `yaml:"efsFileSytemIdAndPath" json:"efsFileSytemIdAndPath" default:""`
StorageLimit string `yaml:"storageLimit" json:"storageLimit" default:"500Mi"`
StorageClass string `yaml:"storageClass" json:"storageClass" default:"standard"`
DryRun bool `yaml:"dryRun" json:"dryRun" default:"false"`
Resources ResourcesConfig `yaml:"resources" json:"resources"`
ServiceMesh bool `yaml:"serviceMesh" json:"serviceMesh" default:"true"`
Tls bool `yaml:"tls" json:"tls" default:"true"`
IgnoreTainted bool `yaml:"ignoreTainted" json:"ignoreTainted" default:"false"`
Labels map[string]string `yaml:"labels" json:"labels" default:"{}"`
Annotations map[string]string `yaml:"annotations" json:"annotations" default:"{}"`
NodeSelectorTerms []v1.NodeSelectorTerm `yaml:"nodeSelectorTerms" json:"nodeSelectorTerms" default:"[]"`
Auth AuthConfig `yaml:"auth" json:"auth"`
Ingress IngressConfig `yaml:"ingress" json:"ingress"`
IPv6 bool `yaml:"ipv6" json:"ipv6" default:"true"`
Debug bool `yaml:"debug" json:"debug" default:"false"`
KernelModule KernelModuleConfig `yaml:"kernelModule" json:"kernelModule"`
Telemetry TelemetryConfig `yaml:"telemetry" json:"telemetry"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter"`
ReplayDisabled bool `yaml:"replayDisabled" json:"replayDisabled" default:"false"`
Capabilities CapabilitiesConfig `yaml:"capabilities" json:"capabilities"`
GlobalFilter string `yaml:"globalFilter" json:"globalFilter"`
Metrics MetricsConfig `yaml:"metrics" json:"metrics"`
TrafficSampleRate int `yaml:"trafficSampleRate" json:"trafficSampleRate" default:"100"`
TcpStreamChannelTimeoutMs int `yaml:"tcpStreamChannelTimeoutMs" json:"tcpStreamChannelTimeoutMs" default:"10000"`
Misc MiscConfig `yaml:"misc" json:"misc"`
Docker DockerConfig `yaml:"docker" json:"docker"`
Proxy ProxyConfig `yaml:"proxy" json:"proxy"`
PodRegexStr string `yaml:"regex" json:"regex" default:".*"`
Namespaces []string `yaml:"namespaces" json:"namespaces" default:"[]"`
BpfOverride string `yaml:"bpfOverride" json:"bpfOverride" default:""`
Release ReleaseConfig `yaml:"release" json:"release"`
PersistentStorage bool `yaml:"persistentStorage" json:"persistentStorage" default:"false"`
PersistentStorageStatic bool `yaml:"persistentStorageStatic" json:"persistentStorageStatic" default:"false"`
EfsFileSytemIdAndPath string `yaml:"efsFileSytemIdAndPath" json:"efsFileSytemIdAndPath" default:""`
StorageLimit string `yaml:"storageLimit" json:"storageLimit" default:"500Mi"`
StorageClass string `yaml:"storageClass" json:"storageClass" default:"standard"`
DryRun bool `yaml:"dryRun" json:"dryRun" default:"false"`
Resources ResourcesConfig `yaml:"resources" json:"resources"`
ServiceMesh bool `yaml:"serviceMesh" json:"serviceMesh" default:"true"`
Tls bool `yaml:"tls" json:"tls" default:"true"`
PacketCapture string `yaml:"packetCapture" json:"packetCapture" default:"best"`
IgnoreTainted bool `yaml:"ignoreTainted" json:"ignoreTainted" default:"false"`
Labels map[string]string `yaml:"labels" json:"labels" default:"{}"`
Annotations map[string]string `yaml:"annotations" json:"annotations" default:"{}"`
NodeSelectorTerms []v1.NodeSelectorTerm `yaml:"nodeSelectorTerms" json:"nodeSelectorTerms" default:"[]"`
Auth AuthConfig `yaml:"auth" json:"auth"`
Ingress IngressConfig `yaml:"ingress" json:"ingress"`
IPv6 bool `yaml:"ipv6" json:"ipv6" default:"true"`
Debug bool `yaml:"debug" json:"debug" default:"false"`
KernelModule KernelModuleConfig `yaml:"kernelModule" json:"kernelModule"`
Telemetry TelemetryConfig `yaml:"telemetry" json:"telemetry"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter"`
ReplayDisabled bool `yaml:"replayDisabled" json:"replayDisabled" default:"false"`
ScriptingDisabled bool `yaml:"scriptingDisabled" json:"scriptingDisabled" default:"false"`
TargetedPodsUpdateDisabled bool `yaml:"targetedPodsUpdateDisabled" json:"targetedPodsUpdateDisabled" default:"false"`
RecordingDisabled bool `yaml:"recordingDisabled" json:"recordingDisabled" default:"false"`
Capabilities CapabilitiesConfig `yaml:"capabilities" json:"capabilities"`
GlobalFilter string `yaml:"globalFilter" json:"globalFilter"`
Metrics MetricsConfig `yaml:"metrics" json:"metrics"`
Misc MiscConfig `yaml:"misc" json:"misc"`
}
func (config *TapConfig) PodRegex() *regexp.Regexp {

go.mod

@@ -14,6 +14,7 @@ require (
github.com/rs/zerolog v1.28.0
github.com/spf13/cobra v1.7.0
github.com/spf13/pflag v1.0.5
github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e
helm.sh/helm/v3 v3.12.0
k8s.io/api v0.28.3
k8s.io/apimachinery v0.28.3

go.sum

@@ -618,6 +618,8 @@ github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
github.com/stretchr/testify v1.8.3 h1:RP3t2pwF7cMEbC1dqtB6poj3niw/9gnV4Cjg5oW5gtY=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e h1:+qDZ81UqxfZsWK6Vq9wET3AsdQxHGbViYOqkNxZ9FnU=
github.com/tanqiangyes/grep-go v0.0.0-20220515134556-b36bff9c3d8e/go.mod h1:ANZlXE3vfRYCYnkojePl2hJODYmOeCVD+XahuhDdTbI=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=


@@ -1,6 +1,6 @@
apiVersion: v2
name: kubeshark
version: "52.1.45"
version: "52.3.68"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.co
keywords:


@@ -23,6 +23,14 @@ git clone git@github.com:kubeshark/kubeshark.git --depth 1
cd kubeshark/helm-chart
```
In case you want to clone a specific tag of the repo (e.g. `v52.3.59`):
```shell
git clone git@github.com:kubeshark/kubeshark.git --depth 1 --branch <tag>
cd kubeshark/helm-chart
```
> See the list of available tags here: https://github.com/kubeshark/kubeshark/tags
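For example, using the tag mentioned above:
```shell
git clone git@github.com:kubeshark/kubeshark.git --depth 1 --branch v52.3.59
cd kubeshark/helm-chart
```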
Render the templates
```shell
@@ -157,7 +165,7 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `tap.ingress.annotations` | `Ingress` annotations | `{}` |
| `tap.ipv6` | Enable IPv6 support for the front-end | `true` |
| `tap.debug` | Enable debug mode | `false` |
| `tap.kernelModule.enabled` | Use PF_RING kernel module([details](PF_RING.md)) | `true` |
| `tap.kernelModule.enabled` | Use PF_RING kernel module([details](PF_RING.md)) | `false` |
| `tap.kernelModule.image` | Container image containing PF_RING kernel module with supported kernel version([details](PF_RING.md)) | "kubeshark/pf-ring-module:all" |
| `tap.kernelModule.unloadOnDestroy` | Create additional container which watches for pod termination and unloads PF_RING kernel module. | `false`|
| `tap.telemetry.enabled` | Enable anonymous usage statistics collection | `true` |
@@ -173,6 +181,7 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `scripting.source` | Source directory of the scripts | `""` |
| `scripting.watchScripts` | Enable watch mode for the scripts in source directory | `true` |
| `tap.metrics.port` | Pod port used to expose Prometheus metrics | `49100` |
| `timezone` | IANA time zone applied to time shown in the front-end | `""` (local time zone applies) |
KernelMapping pairs kernel versions with a
DriverContainer image. Kernel versions can be matched


@@ -8,7 +8,7 @@ metadata:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-cluster-role
name: kubeshark-cluster-role-{{ .Release.Namespace }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
@@ -16,6 +16,7 @@ rules:
- extensions
- apps
resources:
- nodes
- pods
- services
- endpoints
@@ -24,6 +25,14 @@ rules:
- list
- get
- watch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
resourceNames:
- kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role


@@ -8,12 +8,12 @@ metadata:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-cluster-role-binding
name: kubeshark-cluster-role-binding-{{ .Release.Namespace }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubeshark-cluster-role
name: kubeshark-cluster-role-{{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ include "kubeshark.serviceAccountName" . }}


@@ -16,7 +16,7 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 6 }}
{{- include "kubeshark.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
@@ -29,6 +29,8 @@ spec:
- name: kubeshark-hub
command:
- ./hub
- -port
- "8080"
{{- if .Values.tap.debug }}
- -debug
{{- end }}
@@ -43,7 +45,11 @@ spec:
fieldPath: metadata.namespace
- name: KUBESHARK_CLOUD_API_URL
value: 'https://api.kubeshark.co'
{{- if .Values.tap.docker.overrideTag.hub }}
image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.overrideTag.hub }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/hub:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
readinessProbe:
periodSeconds: 1
@@ -51,14 +57,14 @@ spec:
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
resources:
limits:
cpu: {{ .Values.tap.resources.hub.limits.cpu }}


@@ -15,7 +15,7 @@ spec:
ports:
- name: kubeshark-hub
port: 80
targetPort: 80
targetPort: 8080
selector:
app.kubeshark.co/app: hub
type: ClusterIP


@@ -15,7 +15,7 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: front
{{- include "kubeshark.labels" . | nindent 6 }}
{{- include "kubeshark.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
@@ -27,14 +27,38 @@ spec:
- name: REACT_APP_DEFAULT_FILTER
value: '{{ not (eq .Values.tap.defaultFilter "") | ternary .Values.tap.defaultFilter " " }}'
- name: REACT_APP_AUTH_ENABLED
value: '{{ .Values.tap.auth.enabled }}'
value: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
"false"
{{- else -}}
{{ .Values.cloudLicenseEnabled | ternary "true" .Values.tap.auth.enabled }}
{{- end }}'
- name: REACT_APP_AUTH_TYPE
value: '{{ not (eq .Values.tap.auth.type "") | ternary .Values.tap.auth.type " " }}'
value: '{{ not (eq .Values.tap.auth.type "") | ternary (.Values.cloudLicenseEnabled | ternary "oidc" .Values.tap.auth.type) " " }}'
- name: REACT_APP_AUTH_SAML_IDP_METADATA_URL
value: '{{ not (eq .Values.tap.auth.saml.idpMetadataUrl "") | ternary .Values.tap.auth.saml.idpMetadataUrl " " }}'
- name: REACT_APP_TIMEZONE
value: '{{ not (eq .Values.timezone "") | ternary .Values.timezone " " }}'
- name: REACT_APP_REPLAY_DISABLED
value: '{{ .Values.tap.replayDisabled }}'
- name: REACT_APP_SCRIPTING_DISABLED
value: '{{ .Values.tap.scriptingDisabled }}'
- name: REACT_APP_TARGETED_PODS_UPDATE_DISABLED
value: '{{ .Values.tap.targetedPodsUpdateDisabled }}'
- name: REACT_APP_BPF_OVERRIDE_DISABLED
value: '{{ eq .Values.tap.packetCapture "ebpf" | ternary "true" "false" }}'
- name: REACT_APP_RECORDING_DISABLED
value: '{{ .Values.tap.recordingDisabled }}'
- name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
value: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
"false"
{{- else -}}
{{ .Values.cloudLicenseEnabled }}
{{- end }}'
{{- if .Values.tap.docker.overrideTag.front }}
image: '{{ .Values.tap.docker.registry }}/front:{{ .Values.tap.docker.overrideTag.front }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/front:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: kubeshark-front
livenessProbe:
@@ -43,14 +67,14 @@ spec:
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
timeoutSeconds: 1
resources:
limits:


@@ -14,7 +14,7 @@ spec:
ports:
- name: kubeshark-front
port: 80
targetPort: 80
targetPort: 8080
selector:
app.kubeshark.co/app: front
type: ClusterIP


@@ -16,7 +16,7 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: worker
{{- include "kubeshark.labels" . | nindent 6 }}
{{- include "kubeshark.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
@@ -51,6 +51,8 @@ spec:
- '{{ .Values.tap.proxy.worker.srvPort }}'
- -metrics-port
- '{{ .Values.tap.metrics.port }}'
- -packet-capture
- '{{ .Values.tap.packetCapture }}'
- -unixsocket
{{- if .Values.tap.serviceMesh }}
- -servicemesh
@@ -60,12 +62,21 @@ spec:
{{- if .Values.tap.kernelModule.enabled }}
- -kernel-module
{{- end }}
{{- if ne .Values.tap.packetCapture "ebpf" }}
- -disable-ebpf
{{- end }}
- -resolution-strategy
- '{{ .Values.tap.misc.resolutionStrategy }}'
{{- if .Values.tap.debug }}
- -debug
- -dumptracer
- "100000000"
{{- end }}
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: sniffer
ports:
@@ -82,9 +93,13 @@ spec:
fieldRef:
fieldPath: metadata.namespace
- name: TCP_STREAM_CHANNEL_TIMEOUT_MS
value: '{{ .Values.tap.tcpStreamChannelTimeoutMs }}'
value: '{{ .Values.tap.misc.tcpStreamChannelTimeoutMs }}'
- name: TCP_STREAM_CHANNEL_TIMEOUT_SHOW
value: '{{ .Values.tap.misc.tcpStreamChannelTimeoutShow }}'
- name: KUBESHARK_CLOUD_API_URL
value: 'https://api.kubeshark.co'
- name: PROFILING_ENABLED
value: '{{ .Values.tap.misc.profile }}'
resources:
limits:
cpu: {{ .Values.tap.resources.sniffer.limits.cpu }}
@@ -132,7 +147,7 @@ spec:
- name: unload-pf-ring
image: {{ .Values.tap.kernelModule.image }}
command: ["/bin/sh"]
args: ["-c", "trap 'rmmod pf_ring && sleep 3' SIGTERM; while true; do sleep 1; done"]
args: ["-c", "trap 'rmmod pf_ring && sleep 3' SIGTERM; while true; do sleep 1; done"]
securityContext:
capabilities:
add:
@@ -147,10 +162,17 @@ spec:
- ./tracer
- -procfs
- /hostproc
{{- if ne .Values.tap.packetCapture "ebpf" }}
- -disable-ebpf
{{- end }}
{{- if .Values.tap.debug }}
- -debug
{{- end }}
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: tracer
env:
@@ -175,6 +197,9 @@ spec:
{{- range .Values.tap.capabilities.ebpfCapture }}
{{ print "- " . }}
{{- end }}
{{- range .Values.tap.capabilities.networkCapture }}
{{ print "- " . }}
{{- end }}
drop:
- ALL
volumeMounts:
@@ -186,6 +211,9 @@ spec:
readOnly: true
- mountPath: /app/data
name: data
- mountPath: /etc/os-release
name: os-release
readOnly: true
{{- end }}
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
@@ -215,6 +243,9 @@ spec:
- name: lib-modules
hostPath:
path: /lib/modules
- hostPath:
path: /etc/os-release
name: os-release
- name: data
{{- if .Values.tap.persistentStorage }}
persistentVolumeClaim:


@@ -9,9 +9,9 @@ metadata:
data:
default.conf: |
server {
listen 80;
listen 8080;
{{- if .Values.tap.ipv6 }}
listen [::]:80;
listen [::]:8080;
{{- end }}
access_log /dev/stdout;
error_log /dev/stdout;


@@ -9,19 +9,34 @@ metadata:
data:
POD_REGEX: '{{ .Values.tap.regex }}'
NAMESPACES: '{{ gt (len .Values.tap.namespaces) 0 | ternary (join "," .Values.tap.namespaces) "" }}'
BPF_OVERRIDE: '{{ .Values.tap.bpfOverride }}'
SCRIPTING_SCRIPTS: '{}'
INGRESS_ENABLED: '{{ .Values.tap.ingress.enabled }}'
INGRESS_HOST: '{{ .Values.tap.ingress.host }}'
PROXY_FRONT_PORT: '{{ .Values.tap.proxy.front.port }}'
AUTH_ENABLED: '{{ .Values.tap.auth.enabled | ternary "true" "" }}'
AUTH_TYPE: '{{ .Values.tap.auth.type }}'
AUTH_ENABLED: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
"false"
{{- else -}}
{{ .Values.cloudLicenseEnabled | ternary "true" (.Values.tap.auth.enabled | ternary "true" "") }}
{{- end }}'
AUTH_TYPE: '{{ .Values.cloudLicenseEnabled | ternary "oidc" (.Values.tap.auth.type) }}'
AUTH_SAML_IDP_METADATA_URL: '{{ .Values.tap.auth.saml.idpMetadataUrl }}'
AUTH_SAML_ROLE_ATTRIBUTE: '{{ .Values.tap.auth.saml.roleAttribute }}'
AUTH_SAML_ROLES: '{{ .Values.tap.auth.saml.roles | toJson }}'
TELEMETRY_DISABLED: '{{ not .Values.tap.telemetry.enabled | ternary "true" "" }}'
REPLAY_DISABLED: '{{ .Values.tap.replayDisabled | ternary "true" "" }}'
SCRIPTING_DISABLED: '{{ .Values.tap.scriptingDisabled | ternary "true" "" }}'
TARGETED_PODS_UPDATE_DISABLED: '{{ .Values.tap.targetedPodsUpdateDisabled | ternary "true" "" }}'
RECORDING_DISABLED: '{{ .Values.tap.recordingDisabled | ternary "true" "" }}'
GLOBAL_FILTER: {{ include "kubeshark.escapeDoubleQuotes" .Values.tap.globalFilter | quote }}
TRAFFIC_SAMPLE_RATE: '{{ .Values.tap.trafficSampleRate }}'
TRAFFIC_SAMPLE_RATE: '{{ .Values.tap.misc.trafficSampleRate }}'
JSON_TTL: '{{ .Values.tap.misc.jsonTTL }}'
PCAP_TTL: '{{ .Values.tap.misc.pcapTTL }}'
PCAP_ERROR_TTL: '{{ .Values.tap.misc.pcapErrorTTL }}'
TIMEZONE: '{{ not (eq .Values.timezone "") | ternary .Values.timezone " " }}'
CLOUD_LICENSE_ENABLED: '{{- if and .Values.cloudLicenseEnabled (not (empty .Values.license)) -}}
false
{{- else -}}
{{ .Values.cloudLicenseEnabled }}
{{- end }}'


@@ -27,8 +27,8 @@ allowedCapabilities:
- SYS_PTRACE
- DAC_OVERRIDE
- SYS_RESOURCE
- CHECKPOINT_RESTORE
- SYS_MODULE
- IPC_LOCK
runAsUser:
type: RunAsAny
fsGroup:


@@ -0,0 +1,58 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-hub-network-policy
namespace: {{ .Release.Namespace }}
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: hub
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: 8080
egress:
- {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-front-network-policy
namespace: {{ .Release.Namespace }}
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: front
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: 8080
egress:
- {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-worker-network-policy
namespace: {{ .Release.Namespace }}
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: worker
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: {{ .Values.tap.proxy.worker.srvPort }}
- protocol: TCP
port: {{ .Values.tap.metrics.port }}
egress:
- {}


@@ -3,6 +3,18 @@ Thank you for installing {{ title .Chart.Name }}.
Registry: {{ .Values.tap.docker.registry }}
Tag: {{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}
{{- if .Values.tap.docker.overrideTag.worker }}
Overridden worker tag: {{ .Values.tap.docker.overrideTag.worker }}
{{ end }}
{{- if .Values.tap.docker.overrideTag.hub }}
Overridden hub tag: {{ .Values.tap.docker.overrideTag.hub }}
{{ end }}
{{- if .Values.tap.docker.overrideTag.front }}
Overridden front tag: {{ .Values.tap.docker.overrideTag.front }}
{{ end }}
Your deployment has been successful. The release is named `{{ .Release.Name }}` and it has been deployed in the `{{ .Release.Namespace }}` namespace.
{{- if .Values.tap.telemetry.enabled }}


@@ -4,6 +4,10 @@ tap:
tag: ""
imagePullPolicy: Always
imagePullSecrets: []
overrideTag:
worker: ""
hub: ""
front: ""
proxy:
worker:
srvPort: 30001
@@ -14,6 +18,7 @@ tap:
host: 127.0.0.1
regex: .*
namespaces: []
bpfOverride: ""
release:
repo: https://helm.kubeshark.co
name: kubeshark
@@ -48,6 +53,7 @@ tap:
memory: 50Mi
serviceMesh: true
tls: true
packetCapture: best
ignoreTainted: false
labels: {}
annotations: {}
@@ -82,13 +88,16 @@ tap:
ipv6: true
debug: false
kernelModule:
enabled: true
enabled: false
image: kubeshark/pf-ring-module:all
unloadOnDestroy: false
telemetry:
enabled: true
defaultFilter: ""
replayDisabled: false
scriptingDisabled: false
targetedPodsUpdateDisabled: false
recordingDisabled: false
capabilities:
networkCapture:
- NET_RAW
@@ -97,33 +106,37 @@ tap:
- SYS_ADMIN
- SYS_PTRACE
- DAC_OVERRIDE
- CHECKPOINT_RESTORE
kernelModule:
- SYS_MODULE
ebpfCapture:
- SYS_ADMIN
- SYS_PTRACE
- SYS_RESOURCE
- CHECKPOINT_RESTORE
- IPC_LOCK
globalFilter: ""
metrics:
port: 49100
trafficSampleRate: 100
tcpStreamChannelTimeoutMs: 10000
misc:
jsonTTL: 5m
pcapTTL: 10s
pcapErrorTTL: 60s
trafficSampleRate: 100
tcpStreamChannelTimeoutMs: 10000
tcpStreamChannelTimeoutShow: false
resolutionStrategy: auto
profile: false
logs:
file: ""
grep: ""
kube:
configPath: ""
context: ""
dumpLogs: false
headless: false
license: ""
cloudLicenseEnabled: true
scripting:
env: {}
source: ""
watchScripts: true
timezone: ""

install.sh

@@ -0,0 +1,94 @@
#!/bin/sh
EXE_NAME=kubeshark
ALIAS_NAME=ks
PROG_NAME=Kubeshark
INSTALL_PATH=/usr/local/bin/$EXE_NAME
ALIAS_PATH=/usr/local/bin/$ALIAS_NAME
REPO=https://github.com/kubeshark/kubeshark
OS=$(echo $(uname -s) | tr '[:upper:]' '[:lower:]')
ARCH=$(echo $(uname -m) | tr '[:upper:]' '[:lower:]')
SUPPORTED_PAIRS="linux_amd64 linux_arm64 darwin_amd64 darwin_arm64"
ESC="\033["
F_DEFAULT=39
F_RED=31
F_GREEN=32
F_YELLOW=33
B_DEFAULT=49
B_RED=41
B_BLUE=44
B_LIGHT_BLUE=104
if [ "$ARCH" = "x86_64" ]; then
ARCH="amd64"
fi
if [ "$ARCH" = "aarch64" ]; then
ARCH="arm64"
fi
echo $SUPPORTED_PAIRS | grep -w -q "${OS}_${ARCH}"
if [ $? != 0 ] ; then
echo "\n${ESC}${F_RED}m🛑 Unsupported OS \"$OS\" or architecture \"$ARCH\". Failed to install $PROG_NAME.${ESC}${F_DEFAULT}m"
echo "${ESC}${B_RED}mPlease report 🐛 to $REPO/issues${ESC}${F_DEFAULT}m"
exit 1
fi
# Check for Homebrew and kubeshark installation
if command -v brew >/dev/null; then
if brew list kubeshark &>/dev/null; then
echo "📦 Found $PROG_NAME instance installed with Homebrew"
echo "${ESC}${F_GREEN}m⬇ Removing before installation with script${ESC}${F_DEFAULT}m"
brew uninstall kubeshark
fi
fi
echo "\n🦈 ${ESC}${F_DEFAULT};${B_BLUE}m Started to download $PROG_NAME ${ESC}${B_DEFAULT};${F_DEFAULT}m"
if curl -# --fail -Lo $EXE_NAME ${REPO}/releases/latest/download/${EXE_NAME}_${OS}_${ARCH} ; then
chmod +x $PWD/$EXE_NAME
echo "\n${ESC}${F_GREEN}m⬇ $PROG_NAME is downloaded into $PWD/$EXE_NAME${ESC}${F_DEFAULT}m"
else
echo "\n${ESC}${F_RED}m🛑 Couldn't download ${REPO}/releases/latest/download/${EXE_NAME}_${OS}_${ARCH}\n\
⚠️ Check your internet connection.\n\
⚠️ Make sure 'curl' command is available.\n\
⚠️ Make sure there is no directory named '${EXE_NAME}' in ${PWD}\n\
${ESC}${F_DEFAULT}m"
echo "${ESC}${B_RED}mPlease report 🐛 to $REPO/issues${ESC}${F_DEFAULT}m"
exit 1
fi
use_cmd=$EXE_NAME
printf "Do you want to install system-wide? Requires sudo 😇 (y/N)? "
old_stty_cfg=$(stty -g)
stty raw -echo ; answer=$(head -c 1) ; stty $old_stty_cfg
if echo "$answer" | grep -iq "^y" ;then
echo "$answer"
sudo mv ./$EXE_NAME $INSTALL_PATH || exit 1
echo "${ESC}${F_GREEN}m$PROG_NAME is installed into $INSTALL_PATH${ESC}${F_DEFAULT}m\n"
ls $ALIAS_PATH >> /dev/null 2>&1
if [ $? != 0 ] ; then
printf "Do you want to add 'ks' alias for Kubeshark? (y/N)? "
old_stty_cfg=$(stty -g)
stty raw -echo ; answer=$(head -c 1) ; stty $old_stty_cfg
if echo "$answer" | grep -iq "^y" ; then
echo "$answer"
sudo ln -s $INSTALL_PATH $ALIAS_PATH
use_cmd=$ALIAS_NAME
else
echo "$answer"
fi
else
use_cmd=$ALIAS_NAME
fi
else
echo "$answer"
use_cmd="./$EXE_NAME"
fi
echo "${ESC}${F_GREEN}m✅ You can use the ${ESC}${F_DEFAULT};${B_LIGHT_BLUE}m $use_cmd ${ESC}${B_DEFAULT};${F_GREEN}m command now.${ESC}${F_DEFAULT}m"
echo "\n${ESC}${F_YELLOW}mPlease give us a star 🌟 on ${ESC}${F_DEFAULT}m$REPO${ESC}${F_YELLOW}m if you ❤️ $PROG_NAME!${ESC}${F_DEFAULT}m"


@@ -1,6 +1,7 @@
package kubernetes
import (
"bufio"
"bytes"
"context"
"fmt"
@@ -8,12 +9,14 @@ import (
"net/url"
"path/filepath"
"regexp"
"strings"
"github.com/kubeshark/kubeshark/config"
"github.com/kubeshark/kubeshark/misc"
"github.com/kubeshark/kubeshark/semver"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
"github.com/tanqiangyes/grep-go/reader"
core "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -142,7 +145,7 @@ func (provider *Provider) ListPodsByAppLabel(ctx context.Context, namespaces str
return pods.Items, err
}
func (provider *Provider) GetPodLogs(ctx context.Context, namespace string, podName string, containerName string) (string, error) {
func (provider *Provider) GetPodLogs(ctx context.Context, namespace string, podName string, containerName string, grep string) (string, error) {
podLogOpts := core.PodLogOptions{Container: containerName}
req := provider.clientSet.CoreV1().Pods(namespace).GetLogs(podName, &podLogOpts)
podLogs, err := req.Stream(ctx)
@@ -154,8 +157,26 @@ func (provider *Provider) GetPodLogs(ctx context.Context, namespace string, podN
if _, err = io.Copy(buf, podLogs); err != nil {
return "", fmt.Errorf("error copy information from podLogs to buf, ns: %s, pod: %s, %w", namespace, podName, err)
}
str := buf.String()
return str, nil
if grep != "" {
finder, err := reader.NewFinder(grep, true, true)
if err != nil {
panic(err)
}
read, err := reader.NewStdReader(bufio.NewReader(buf), []reader.Finder{finder})
if err != nil {
panic(err)
}
read.Run()
result := read.Result()[0]
log.Info().Str("namespace", namespace).Str("pod", podName).Str("container", containerName).Int("lines", len(result.Lines)).Str("grep", grep).Send()
return strings.Join(result.MatchString, "\n"), nil
} else {
log.Info().Str("namespace", namespace).Str("pod", podName).Str("container", containerName).Send()
return buf.String(), nil
}
}
func (provider *Provider) GetNamespaceEvents(ctx context.Context, namespace string) (string, error) {
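The grep parameter added to GetPodLogs above backs the new --grep option on the logs command in this changeset. A hedged usage sketch, assuming the flag value is passed through unchanged as the search pattern handed to reader.NewFinder:
# Collect Kubeshark pod logs, keeping only lines that match the pattern.
kubeshark logs --grep error
# Without --grep, the full container logs are collected as before.
kubeshark logs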

View File

@@ -1,13 +1,75 @@
---
# Source: kubeshark/templates/16-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-hub-network-policy
namespace: default
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: hub
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: 8080
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-front-network-policy
namespace: default
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: front
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: 8080
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: kubeshark-worker-network-policy
namespace: default
spec:
podSelector:
matchLabels:
app.kubeshark.co/app: worker
policyTypes:
- Ingress
- Egress
ingress:
- ports:
- protocol: TCP
port: 30001
- protocol: TCP
port: 49100
egress:
- {}
---
# Source: kubeshark/templates/01-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-service-account
@@ -21,10 +83,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''
@@ -38,10 +100,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_CRT: |
@@ -54,10 +116,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_KEY: |
@@ -69,16 +131,16 @@ metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
data:
default.conf: |
server {
listen 80;
listen [::]:80;
listen 8080;
listen [::]:8080;
access_log /dev/stdout;
error_log /dev/stdout;
@@ -133,43 +195,49 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'
NAMESPACES: ''
BPF_OVERRIDE: ''
SCRIPTING_SCRIPTS: '{}'
INGRESS_ENABLED: 'false'
INGRESS_HOST: 'ks.svc.cluster.local'
PROXY_FRONT_PORT: '8899'
AUTH_ENABLED: ''
AUTH_TYPE: 'saml'
AUTH_ENABLED: 'true'
AUTH_TYPE: 'oidc'
AUTH_SAML_IDP_METADATA_URL: ''
AUTH_SAML_ROLE_ATTRIBUTE: 'role'
AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canReplayTraffic":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","showAdminConsoleLink":true}}'
TELEMETRY_DISABLED: ''
REPLAY_DISABLED: ''
SCRIPTING_DISABLED: ''
TARGETED_PODS_UPDATE_DISABLED: ''
RECORDING_DISABLED: ''
GLOBAL_FILTER: ""
TRAFFIC_SAMPLE_RATE: '100'
JSON_TTL: '5m'
PCAP_TTL: '10s'
PCAP_ERROR_TTL: '60s'
TIMEZONE: ' '
CLOUD_LICENSE_ENABLED: 'true'
---
# Source: kubeshark/templates/02-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role
name: kubeshark-cluster-role-default
namespace: default
rules:
- apiGroups:
@@ -177,6 +245,7 @@ rules:
- extensions
- apps
resources:
- nodes
- pods
- services
- endpoints
@@ -185,24 +254,32 @@ rules:
- list
- get
- watch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
resourceNames:
- kube-system
---
# Source: kubeshark/templates/03-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-binding
name: kubeshark-cluster-role-binding-default
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubeshark-cluster-role
name: kubeshark-cluster-role-default
subjects:
- kind: ServiceAccount
name: kubeshark-service-account
@@ -213,10 +290,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role
@@ -242,10 +319,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role-binding
@@ -265,10 +342,10 @@ kind: Service
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -277,7 +354,7 @@ spec:
ports:
- name: kubeshark-hub
port: 80
targetPort: 80
targetPort: 8080
selector:
app.kubeshark.co/app: hub
type: ClusterIP
@@ -287,10 +364,10 @@ apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -299,7 +376,7 @@ spec:
ports:
- name: kubeshark-front
port: 80
targetPort: 80
targetPort: 8080
selector:
app.kubeshark.co/app: front
type: ClusterIP
@@ -316,10 +393,10 @@ metadata:
spec:
selector:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
@@ -334,10 +411,10 @@ metadata:
labels:
app.kubeshark.co/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-daemon-set
@@ -346,36 +423,20 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.1.45
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/managed-by: Helm
template:
metadata:
labels:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
initContainers:
- name: load-pf-ring
image: kubeshark/pf-ring-module:all
imagePullPolicy: Always
securityContext:
capabilities:
add:
- SYS_MODULE
drop:
- ALL
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
containers:
- command:
- ./worker
@@ -385,12 +446,16 @@ spec:
- '30001'
- -metrics-port
- '49100'
- -packet-capture
- 'best'
- -unixsocket
- -servicemesh
- -procfs
- /hostproc
- -kernel-module
image: 'docker.io/kubeshark/worker:v52.1.45'
- -disable-ebpf
- -resolution-strategy
- 'auto'
image: 'docker.io/kubeshark/worker:v52.3.68'
imagePullPolicy: Always
name: sniffer
ports:
@@ -408,8 +473,12 @@ spec:
fieldPath: metadata.namespace
- name: TCP_STREAM_CHANNEL_TIMEOUT_MS
value: '10000'
- name: TCP_STREAM_CHANNEL_TIMEOUT_SHOW
value: 'false'
- name: KUBESHARK_CLOUD_API_URL
value: 'https://api.kubeshark.co'
- name: PROFILING_ENABLED
value: 'false'
resources:
limits:
cpu: 750m
@@ -425,7 +494,6 @@ spec:
- SYS_ADMIN
- SYS_PTRACE
- DAC_OVERRIDE
- CHECKPOINT_RESTORE
drop:
- ALL
readinessProbe:
@@ -455,7 +523,8 @@ spec:
- ./tracer
- -procfs
- /hostproc
image: 'docker.io/kubeshark/worker:v52.1.45'
- -disable-ebpf
image: 'docker.io/kubeshark/worker:v52.3.68'
imagePullPolicy: Always
name: tracer
env:
@@ -480,8 +549,9 @@ spec:
- SYS_ADMIN
- SYS_PTRACE
- SYS_RESOURCE
- CHECKPOINT_RESTORE
- IPC_LOCK
- NET_RAW
- NET_ADMIN
drop:
- ALL
volumeMounts:
@@ -493,6 +563,9 @@ spec:
readOnly: true
- mountPath: /app/data
name: data
- mountPath: /etc/os-release
name: os-release
readOnly: true
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
serviceAccountName: kubeshark-service-account
@@ -521,6 +594,9 @@ spec:
- name: lib-modules
hostPath:
path: /lib/modules
- hostPath:
path: /etc/os-release
name: os-release
- name: data
emptyDir:
sizeLimit: 500Mi
@@ -531,10 +607,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -544,19 +620,16 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/managed-by: Helm
template:
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
spec:
dnsPolicy: ClusterFirstWithHostNet
@@ -565,6 +638,8 @@ spec:
- name: kubeshark-hub
command:
- ./hub
- -port
- "8080"
env:
- name: POD_NAME
valueFrom:
@@ -576,7 +651,7 @@ spec:
fieldPath: metadata.namespace
- name: KUBESHARK_CLOUD_API_URL
value: 'https://api.kubeshark.co'
image: 'docker.io/kubeshark/hub:v52.1.45'
image: 'docker.io/kubeshark/hub:v52.3.68'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 1
@@ -584,14 +659,14 @@ spec:
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
resources:
limits:
cpu: 750m
@@ -624,10 +699,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -637,19 +712,16 @@ spec:
selector:
matchLabels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.1.45
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/managed-by: Helm
template:
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.1.45
helm.sh/chart: kubeshark-52.3.68
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.1.45"
app.kubernetes.io/version: "52.3.68"
app.kubernetes.io/managed-by: Helm
spec:
containers:
@@ -657,14 +729,26 @@ spec:
- name: REACT_APP_DEFAULT_FILTER
value: ' '
- name: REACT_APP_AUTH_ENABLED
value: 'false'
value: 'true'
- name: REACT_APP_AUTH_TYPE
value: 'saml'
value: 'oidc'
- name: REACT_APP_AUTH_SAML_IDP_METADATA_URL
value: ' '
- name: REACT_APP_TIMEZONE
value: ' '
- name: REACT_APP_REPLAY_DISABLED
value: 'false'
image: 'docker.io/kubeshark/front:v52.1.45'
- name: REACT_APP_SCRIPTING_DISABLED
value: 'false'
- name: REACT_APP_TARGETED_PODS_UPDATE_DISABLED
value: 'false'
- name: REACT_APP_BPF_OVERRIDE_DISABLED
value: 'false'
- name: REACT_APP_RECORDING_DISABLED
value: 'false'
- name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
value: 'true'
image: 'docker.io/kubeshark/front:v52.3.68'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe:
@@ -673,14 +757,14 @@ spec:
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
port: 8080
timeoutSeconds: 1
resources:
limits:
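Since the regenerated manifests above move the hub and front containers to port 8080, add per-component NetworkPolicies, and switch the sniffer to -disable-ebpf with -resolution-strategy auto, a quick post-install check could look like the sketch below (assuming the default namespace used in these manifests and a standard kubectl setup):
# Confirm the NetworkPolicies rendered from 16-network-policies.yaml exist.
kubectl get networkpolicy -n default
# Services still expose port 80 externally but now target container port 8080.
kubectl describe svc kubeshark-hub kubeshark-front -n default | grep -i targetport
# Inspect the sniffer arguments on the worker DaemonSet.
kubectl get daemonset -A -l app.kubeshark.co/app=worker -o yaml | grep -A 12 'command:'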

View File

@@ -13,7 +13,7 @@ import (
"github.com/rs/zerolog/log"
)
func DumpLogs(ctx context.Context, provider *kubernetes.Provider, filePath string) error {
func DumpLogs(ctx context.Context, provider *kubernetes.Provider, filePath string, grep string) error {
podExactRegex := regexp.MustCompile("^" + kubernetes.SELF_RESOURCES_PREFIX)
pods, err := provider.ListAllPodsMatchingRegex(ctx, podExactRegex, []string{config.Config.Tap.Release.Namespace})
if err != nil {
@@ -34,7 +34,7 @@ func DumpLogs(ctx context.Context, provider *kubernetes.Provider, filePath strin
for _, pod := range pods {
for _, container := range pod.Spec.Containers {
logs, err := provider.GetPodLogs(ctx, pod.Namespace, pod.Name, container.Name)
logs, err := provider.GetPodLogs(ctx, pod.Namespace, pod.Name, container.Name, grep)
if err != nil {
log.Error().Err(err).Msg("Failed to get logs!")
continue