Compare commits

..

35 Commits

Author SHA1 Message Date
Alon Girmonsky
aca3f4ad44 🔖 Bump the Helm chart version to 52.3.95 2025-01-11 15:53:08 +01:00
Volodymyr Stoiko
f9c66df528 Update worker liveness/readiness config (#1684)
* Increase worker init delay to 30s

* Update values

* fix

* Make probe values configurable

* upd

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2025-01-08 13:09:51 -08:00
Alon Girmonsky
1d572e6bff support new radius protocol (#1682) 2025-01-08 13:05:57 -08:00
Alon Girmonsky
46ad335446 updated the notes (#1681) 2025-01-06 18:42:17 -08:00
Alon Girmonsky
317357e83b Update the Helm chart 2025-01-03 15:42:21 -08:00
Alon Girmonsky
d89ef8789f 🔖 Bump the Helm chart version to 52.3.93 2025-01-03 14:22:00 -08:00
Alon Girmonsky
773ad78ac7 extended the https macro to include http2
in addition to http
2025-01-03 10:30:22 -08:00
Alon Girmonsky
bbcaf74fa7 added https as a default macro (#1680)
* added https as a default macro

* small syntax fix
2025-01-03 09:54:50 -08:00
M. Mert Yildiran
639f1deb51 Add CUSTOM_MACROS to ConfigMap (#1674)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-25 16:45:03 -08:00
Alon Girmonsky
b377bfe35f Revert "Revert "Initialize kubeshark pinned eBPF resources inside init container (#1665)" (#1676)" (#1678)
This reverts commit 12f8883052.
2024-12-25 16:21:08 -08:00
Serhii Ponomarenko
5242d9af07 🛂 Add save/activate/delete role scripting permissions (#1675)
* 🛂 Add save/activate/delete role scripting permissions

* 🔧 Add scripting permissions to tap-config

* 🔨 Re-generate helm values & `complete.yaml`

* 📝 Add scripting permissions to helm chart docs

* 🏷️ Make scripting permissions `true` by default

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-25 12:33:16 -08:00
M. Mert Yildiran
12f8883052 Revert "Initialize kubeshark pinned eBPF resources inside init container (#1665)" (#1676)
This reverts commit 29de008f22.
2024-12-25 11:21:51 -08:00
Alon Girmonsky
7eef5efcd9 Added security capabilities, especially IPC_LOCK (#1671)
to the Sniffer in case the eBPF traffic capture mechanism is used.
2024-12-23 16:49:54 -08:00
M. Mert Yildiran
af47154a8d Revert "Add CUSTOM_MACROS to ConfigMap"
This reverts commit 17759d296d.
2024-12-23 21:26:42 +03:00
M. Mert Yildiran
17759d296d Add CUSTOM_MACROS to ConfigMap 2024-12-23 21:25:11 +03:00
Ilya Gavrilov
29de008f22 Initialize kubeshark pinned eBPF resources inside init container (#1665)
* Clean kubeshark pinned bpf resources inside init container

* Clean kubeshark pinned bpf resources inside init container

* Update 09-worker-daemon-set.yaml

* add IPC_LOCK capability to sniffer

* add init container to mount bpf filesystem

* add init container to mount bpf filesystem

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 16:20:13 -08:00
Volodymyr Stoiko
261a0ca1a9 Replace sniffer 30001 port with 48999 (#1670)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 12:37:01 -08:00
Volodymyr Stoiko
e819e9b697 Add hub metrics port (#1666)
* Add hub metrics port

* Add policies and service

* Use static 9100 port for hub metrics

* fix

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 12:06:29 -08:00
Alon Girmonsky
a03aa56d07 removed the loglevel flag (#1669)
following the revert of the tracer version: https://github.com/kubeshark/worker/pull/478
2024-12-16 12:34:51 -08:00
Serhii Ponomarenko
83f437f3f8 🛂 Create save/activate/delete role scripting permissions (#1667)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-16 11:19:00 -08:00
bogdanvbalan
f5637972f2 Add --time param to pcapdump (#1664)
* Add --time param to pcapdump

* Update description

* Remove obsolete code

* Revert config change

* Add time to pcap config

---------

Co-authored-by: bogdan.balan1 <bogdanvalentin.balan@1nce.com>
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-16 08:29:40 -08:00
Alon Girmonsky
4cabf13788 from debug to logLevel (#1668)
* updated helm values

* removed the tap.debug field
from the tapConfig struct

* Revert "removed the tap.debug field"

This reverts commit f911c02f0d.

* support the -d --debug command
with the new logLevel flag
2024-12-15 17:27:05 -08:00
Alon Girmonsky
cd1d7e4a58 🔖 Bump the Helm chart version to 52.3.92 2024-12-09 11:42:05 -08:00
Alon Girmonsky
9b7e2e7144 change the tls dissector name to tlsx (#1648)
as it can be confused with HTTPS;
tlsx is a dissector for TLS handshakes, not an
HTTPS dissector.
2024-12-08 20:36:58 -08:00
Alon Girmonsky
80fa18cbba Added LDAP support (#1647) 2024-12-08 15:02:50 -08:00
Alon Girmonsky
dfbb321084 Default startup values change (#1646)
* updated the defaultFilter default values and docs.

* fixed a small error in the docs
2024-12-08 14:48:13 -08:00
Alon Girmonsky
d85dc58f20 fixed a bug where a new script couldn't be added if (#1645)
a previous one was deleted
2024-12-06 14:00:05 -08:00
Volodymyr Stoiko
993b8ae19e Add permissions to watch namespaces (#1644)
* Add permissions to watch namespaces

* Allow watching all namespaces

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-03 18:32:23 -08:00
Alon Girmonsky
77f81c8ab3 🔖 Bump the Helm chart version to 52.3.91 2024-12-01 15:26:34 -08:00
Alon Girmonsky
a24b40a0c1 fixed a small issue in the Makefile 2024-11-20 12:59:34 +02:00
Alon Girmonsky
4817ed2a80 🔖 Bump the Helm chart version to 52.3.90 2024-11-20 12:47:04 +02:00
Alon Girmonsky
d66ec06928 updated the pcapdump command's help text and
deprecated the `export` command
2024-11-20 12:30:52 +02:00
Alon Girmonsky
125e3abe6c Brought back `.gitignore` after it was deleted by a bad commit. 2024-11-10 15:25:30 -08:00
Alon Girmonsky
8221c4ef10 removed two binary files that were uploaded by an earlier bad commit. 2024-11-10 15:23:03 -08:00
Alon Girmonsky
7f216b2958 Fixed a bug in the Makefile 2024-11-10 15:22:08 -08:00
22 changed files with 472 additions and 294 deletions

66
.gitignore vendored Normal file
View File

@@ -0,0 +1,66 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
.idea/
build
# Mac OS
.DS_Store
.vscode/
# Ignore the scripts that are created for development
*dev.*
# Environment variables
.env
# pprof
pprof/*
# Database Files
*.db
*.gob
# Nohup Files - https://man7.org/linux/man-pages/man1/nohup.1p.html
nohup.*
# Cypress tests
cypress.env.json
*/cypress/downloads
*/cypress/fixtures
*/cypress/plugins
*/cypress/screenshots
*/cypress/videos
*/cypress/support
# UI folders to ignore
**/node_modules/**
**/dist/**
*.editorconfig
# Ignore *.log files
*.log
# Object files
*.o
# Binaries
bin
# Scripts
scripts/
# CWD config YAML
kubeshark.yaml

View File

@@ -189,6 +189,18 @@ release:
@cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
@cd ../kubeshark
soft-release:
@cd ../worker && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../tracer && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../kubeshark && git checkout master && git pull && sed -i 's/^version:.*/version: "$(VERSION)"/' helm-chart/Chart.yaml && make && make generate-helm-values && make generate-manifests
@git add -A . && git commit -m ":bookmark: Bump the Helm chart version to $(VERSION)" && git push
# @git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
# @cd helm-chart && cp -r . ../../kubeshark.github.io/charts/chart
# @cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
# @cd ../kubeshark
branch:
@cd ../worker && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
@cd ../hub && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)

Binary file not shown.

View File

@@ -1 +0,0 @@
1f82f0ead73917c529e84e1bde4aa706042a936f8e036a34e0c5eaa3e4740306 kubeshark__

View File

@@ -1,62 +0,0 @@
package cmd
import (
"fmt"
"net/http"
"os"
"path/filepath"
"time"
"github.com/creasty/defaults"
"github.com/kubeshark/kubeshark/config/configStructs"
"github.com/kubeshark/kubeshark/internal/connect"
"github.com/kubeshark/kubeshark/kubernetes"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
var exportCmd = &cobra.Command{
Use: "export",
Short: "Exports the captured traffic into a TAR file that contains PCAP files",
RunE: func(cmd *cobra.Command, args []string) error {
runExport()
return nil
},
}
func init() {
rootCmd.AddCommand(exportCmd)
defaultTapConfig := configStructs.TapConfig{}
if err := defaults.Set(&defaultTapConfig); err != nil {
log.Debug().Err(err).Send()
}
exportCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
exportCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
exportCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
func runExport() {
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
runProxy(false, true)
}
dstPath, err := filepath.Abs(fmt.Sprintf("./%d.tar.gz", time.Now().Unix()))
if err != nil {
panic(err)
}
out, err := os.Create(dstPath)
if err != nil {
panic(err)
}
defer out.Close()
connector := connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
connector.PostPcapsMerge(out)
}

View File

@@ -3,6 +3,7 @@ package cmd
import (
"errors"
"path/filepath"
"time"
"github.com/creasty/defaults"
"github.com/kubeshark/kubeshark/config/configStructs"
@@ -16,7 +17,7 @@ import (
// pcapDumpCmd represents the consolidated pcapdump command
var pcapDumpCmd = &cobra.Command{
Use: "pcapdump",
Short: "Manage PCAP dump operations: start, stop, or copy PCAP files",
Short: "Store all captured traffic (including decrypted TLS) in a PCAP file.",
RunE: func(cmd *cobra.Command, args []string) error {
// Retrieve the kubeconfig path from the flag
kubeconfig, _ := cmd.Flags().GetString(configStructs.PcapKubeconfig)
@@ -43,41 +44,26 @@ var pcapDumpCmd = &cobra.Command{
return err
}
// Parse the `--time` flag
timeIntervalStr, _ := cmd.Flags().GetString("time")
var cutoffTime *time.Time // Use a pointer to distinguish between provided and not provided
if timeIntervalStr != "" {
duration, err := time.ParseDuration(timeIntervalStr)
if err != nil {
log.Error().Err(err).Msg("Invalid time interval")
return err
}
tempCutoffTime := time.Now().Add(-duration)
cutoffTime = &tempCutoffTime
}
// Handle copy operation if the copy string is provided
if !cmd.Flags().Changed(configStructs.PcapDumpEnabled) {
destDir, _ := cmd.Flags().GetString(configStructs.PcapDest)
log.Info().Msg("Copying PCAP files")
err = copyPcapFiles(clientset, config, destDir)
if err != nil {
log.Error().Err(err).Msg("Error copying PCAP files")
return err
}
} else {
// Handle start operation if the start string is provided
enabled, err := cmd.Flags().GetBool(configStructs.PcapDumpEnabled)
if err != nil {
log.Error().Err(err).Msg("Error getting pcapdump enable flag")
return err
}
timeInterval, _ := cmd.Flags().GetString(configStructs.PcapTimeInterval)
maxTime, _ := cmd.Flags().GetString(configStructs.PcapMaxTime)
maxSize, _ := cmd.Flags().GetString(configStructs.PcapMaxSize)
err = startStopPcap(clientset, enabled, timeInterval, maxTime, maxSize)
if err != nil {
log.Error().Err(err).Msg("Error starting/stopping PCAP dump")
return err
}
if enabled {
log.Info().Msg("Pcapdump started successfully")
return nil
} else {
log.Info().Msg("Pcapdump stopped successfully")
return nil
}
destDir, _ := cmd.Flags().GetString(configStructs.PcapDest)
log.Info().Msg("Copying PCAP files")
err = copyPcapFiles(clientset, config, destDir, cutoffTime)
if err != nil {
log.Error().Err(err).Msg("Error copying PCAP files")
return err
}
return nil
@@ -92,10 +78,7 @@ func init() {
log.Debug().Err(err).Send()
}
pcapDumpCmd.Flags().String(configStructs.PcapTimeInterval, defaultPcapDumpConfig.PcapTimeInterval, "Time interval for PCAP file rotation (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapMaxTime, defaultPcapDumpConfig.PcapMaxTime, "Maximum time for retaining old PCAP files (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapMaxSize, defaultPcapDumpConfig.PcapMaxSize, "Maximum size of PCAP files before deletion (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapTime, "", "Time interval (e.g., 10m, 1h) in the past for which the pcaps are copied")
pcapDumpCmd.Flags().String(configStructs.PcapDest, "", "Local destination path for copied PCAP files (can not be used together with --enabled)")
pcapDumpCmd.Flags().String(configStructs.PcapKubeconfig, "", "Enabled/Disable to pcap dumps (can not be used together with --dest)")
pcapDumpCmd.Flags().String(configStructs.PcapKubeconfig, "", "Path for kubeconfig (if not provided the default location will be checked)")
}
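
The consolidated `pcapdump` command above now takes a `--time` flag (a Go duration such as `10m` or `1h`) and only copies PCAP files whose embedded timestamp falls inside that window. A minimal sketch of the matching Helm-side `pcapdump` settings that control what the workers write (keys taken from the values file later in this compare; the numbers are illustrative):

pcapdump:
  enabled: true        # continuous capture toggle; workers keep writing PCAP files
  maxTime: 1h          # discard PCAP files older than this window
  maxSize: 500MB       # cap on stored PCAP data before old files are deleted
  pcapSrcDir: pcapdump # directory inside the worker pods that the copy operation reads from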

View File

@@ -6,8 +6,8 @@ import (
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/kubeshark/gopacket"
"github.com/kubeshark/gopacket/layers"
@@ -54,7 +54,7 @@ func listWorkerPods(ctx context.Context, clientset *clientk8s.Clientset, namespa
}
// listFilesInPodDir lists all files in the specified directory inside the pod across multiple namespaces
func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, config *rest.Config, podName string, namespaces []string, configMapName, configMapKey string) ([]NamespaceFiles, error) {
func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, config *rest.Config, podName string, namespaces []string, configMapName, configMapKey string, cutoffTime *time.Time) ([]NamespaceFiles, error) {
var namespaceFilesList []NamespaceFiles
for _, namespace := range namespaces {
@@ -114,12 +114,42 @@ func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, conf
// Split the output (file names) into a list
files := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if len(files) > 0 {
// Append the NamespaceFiles struct to the list
if len(files) == 0 {
log.Info().Msgf("No files found in directory %s in pod %s", srcFilePath, podName)
continue
}
var filteredFiles []string
// Filter files based on cutoff time if provided
for _, file := range files {
if cutoffTime != nil {
parts := strings.Split(file, "-")
if len(parts) < 2 {
log.Warn().Msgf("Skipping file with invalid format: %s", file)
continue
}
timestampStr := parts[len(parts)-2] + parts[len(parts)-1][:6] // Extract YYYYMMDDHHMMSS
fileTime, err := time.Parse("20060102150405", timestampStr)
if err != nil {
log.Warn().Err(err).Msgf("Skipping file with unparsable timestamp: %s", file)
continue
}
if fileTime.Before(*cutoffTime) {
continue
}
}
// Add file to filtered list
filteredFiles = append(filteredFiles, file)
}
if len(filteredFiles) > 0 {
namespaceFilesList = append(namespaceFilesList, NamespaceFiles{
Namespace: namespace,
SrcDir: srcDir,
Files: files,
Files: filteredFiles,
})
}
}
@@ -229,63 +259,8 @@ func mergePCAPs(outputFile string, inputFiles []string) error {
return nil
}
// setPcapConfigInKubernetes sets the PCAP config for all pods across multiple namespaces
func setPcapConfigInKubernetes(ctx context.Context, clientset *clientk8s.Clientset, podName string, namespaces []string, enabledPcap bool, timeInterval, maxTime, maxSize string) error {
for _, namespace := range namespaces {
// Load the existing ConfigMap in the current namespace
configMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(ctx, "kubeshark-config-map", metav1.GetOptions{})
if err != nil {
log.Error().Err(err).Msgf("failed to get ConfigMap in namespace %s", namespace)
continue
}
// Update the values with user-provided input
configMap.Data["PCAP_TIME_INTERVAL"] = timeInterval
configMap.Data["PCAP_MAX_SIZE"] = maxSize
configMap.Data["PCAP_MAX_TIME"] = maxTime
configMap.Data["PCAP_DUMP_ENABLE"] = strconv.FormatBool(enabledPcap)
// Apply the updated ConfigMap back to the cluster in the current namespace
_, err = clientset.CoreV1().ConfigMaps(namespace).Update(ctx, configMap, metav1.UpdateOptions{})
if err != nil {
log.Error().Err(err).Msgf("failed to update ConfigMap in namespace %s", namespace)
continue
}
}
return nil
}
// startPcap function for starting the PCAP capture
func startStopPcap(clientset *kubernetes.Clientset, pcapEnable bool, timeInterval, maxTime, maxSize string) error {
kubernetesProvider, err := getKubernetesProviderForCli(false, false)
if err != nil {
log.Error().Err(err).Send()
return err
}
targetNamespaces := kubernetesProvider.GetNamespaces()
// List worker pods
workerPods, err := listWorkerPods(context.Background(), clientset, targetNamespaces)
if err != nil {
log.Error().Err(err).Msg("Error listing worker pods")
return err
}
// Iterate over each pod to start the PCAP capture by updating the configuration in Kubernetes
for _, pod := range workerPods {
err := setPcapConfigInKubernetes(context.Background(), clientset, pod.Name, targetNamespaces, pcapEnable, timeInterval, maxTime, maxSize)
if err != nil {
log.Error().Err(err).Msgf("Error setting PCAP config for pod %s", pod.Name)
continue
}
}
return nil
}
// copyPcapFiles function for copying the PCAP files from the worker pods
func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir string) error {
func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir string, cutoffTime *time.Time) error {
kubernetesProvider, err := getKubernetesProviderForCli(false, false)
if err != nil {
log.Error().Err(err).Send()
@@ -305,7 +280,7 @@ func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir
// Iterate over each pod to get the PCAP directory from config and copy files
for _, pod := range workerPods {
// Get the list of NamespaceFiles (files per namespace) and their source directories
namespaceFiles, err := listFilesInPodDir(context.Background(), clientset, config, pod.Name, targetNamespaces, SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, "PCAP_SRC_DIR")
namespaceFiles, err := listFilesInPodDir(context.Background(), clientset, config, pod.Name, targetNamespaces, SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, "PCAP_SRC_DIR", cutoffTime)
if err != nil {
log.Error().Err(err).Msgf("Error listing files in pod %s", pod.Name)
continue

View File

@@ -97,9 +97,12 @@ func createScript(provider *kubernetes.Provider, script misc.ConfigMapScript) (i
return
}
script.Active = kubernetes.IsActiveScript(provider, script.Title)
index = int64(len(scripts))
index = 0
if script.Title != "New Script" {
for i, v := range scripts {
if index <= i {
index = i + 1
}
if v.Title == script.Title {
index = int64(i)
}

View File

@@ -63,6 +63,9 @@ func InitConfig(cmd *cobra.Command) error {
Config = CreateDefaultConfig()
Config.Tap.Debug = DebugMode
if DebugMode {
Config.LogLevel = "debug"
}
cmdName = cmd.Name()
if utils.Contains([]string{
"clean",

View File

@@ -59,9 +59,14 @@ func CreateDefaultConfig() ConfigStruct {
RoleAttribute: "role",
Roles: map[string]configStructs.Role{
"admin": {
Filter: "",
CanDownloadPCAP: true,
CanUseScripting: true,
Filter: "",
CanDownloadPCAP: true,
CanUseScripting: true,
ScriptingPermissions: configStructs.ScriptingPermissions{
CanSave: true,
CanActivate: true,
CanDelete: true,
},
CanUpdateTargetedPods: true,
CanStopTrafficCapturing: true,
ShowAdminConsoleLink: true,
@@ -81,7 +86,9 @@ func CreateDefaultConfig() ConfigStruct {
// "tcp",
// "udp",
"ws",
"tls",
// "tlsx",
"ldap",
"radius",
},
},
}
@@ -112,6 +119,7 @@ type ConfigStruct struct {
Scripting configStructs.ScriptingConfig `yaml:"scripting" json:"scripting"`
Manifests ManifestsConfig `yaml:"manifests,omitempty" json:"manifests,omitempty"`
Timezone string `yaml:"timezone" json:"timezone"`
LogLevel string `yaml:"logLevel" json:"logLevel" default:"warning"`
}
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {
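
The default admin role now carries a `scriptingPermissions` block next to `canUseScripting`. A values-file sketch of the same role, mirroring the defaults shown here and in `values.yaml` further down:

tap:
  auth:
    saml:
      roles:
        admin:
          filter: ""
          canDownloadPCAP: true
          canUseScripting: true
          scriptingPermissions:   # new granular flags, all true by default
            canSave: true
            canActivate: true
            canDelete: true
          canUpdateTargetedPods: true
          canStopTrafficCapturing: true
          showAdminConsoleLink: true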

View File

@@ -43,6 +43,7 @@ const (
PcapTimeInterval = "timeInterval"
PcapKubeconfig = "kubeconfig"
PcapDumpEnabled = "enabled"
PcapTime = "time"
)
type ResourceLimitsHub struct {
@@ -71,7 +72,7 @@ type ResourceRequirementsWorker struct {
}
type WorkerConfig struct {
SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"30001"`
SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"48999"`
}
type HubConfig struct {
@@ -116,13 +117,32 @@ type ResourcesConfig struct {
Tracer ResourceRequirementsWorker `yaml:"tracer" json:"tracer"`
}
type ProbesConfig struct {
Hub ProbeConfig `yaml:"hub" json:"hub"`
Sniffer ProbeConfig `yaml:"sniffer" json:"sniffer"`
}
type ProbeConfig struct {
InitialDelaySeconds int `yaml:"initialDelaySeconds" json:"initialDelaySeconds" default:"15"`
PeriodSeconds int `yaml:"periodSeconds" json:"periodSeconds" default:"10"`
SuccessThreshold int `yaml:"successThreshold" json:"successThreshold" default:"1"`
FailureThreshold int `yaml:"failureThreshold" json:"failureThreshold" default:"3"`
}
type ScriptingPermissions struct {
CanSave bool `yaml:"canSave" json:"canSave" default:"true"`
CanActivate bool `yaml:"canActivate" json:"canActivate" default:"true"`
CanDelete bool `yaml:"canDelete" json:"canDelete" default:"true"`
}
type Role struct {
Filter string `yaml:"filter" json:"filter" default:""`
CanDownloadPCAP bool `yaml:"canDownloadPCAP" json:"canDownloadPCAP" default:"false"`
CanUseScripting bool `yaml:"canUseScripting" json:"canUseScripting" default:"false"`
CanUpdateTargetedPods bool `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
CanStopTrafficCapturing bool `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
ShowAdminConsoleLink bool `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
Filter string `yaml:"filter" json:"filter" default:""`
CanDownloadPCAP bool `yaml:"canDownloadPCAP" json:"canDownloadPCAP" default:"false"`
CanUseScripting bool `yaml:"canUseScripting" json:"canUseScripting" default:"false"`
ScriptingPermissions ScriptingPermissions `yaml:"scriptingPermissions" json:"scriptingPermissions"`
CanUpdateTargetedPods bool `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
CanStopTrafficCapturing bool `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
ShowAdminConsoleLink bool `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
}
type SamlConfig struct {
@@ -201,6 +221,7 @@ type PcapDumpConfig struct {
PcapMaxTime string `yaml:"maxTime" json:"maxTime" default:"1h"`
PcapMaxSize string `yaml:"maxSize" json:"maxSize" default:"500MB"`
PcapSrcDir string `yaml:"pcapSrcDir" json:"pcapSrcDir" default:"pcapdump"`
PcapTime string `yaml:"time" json:"time" default:"time"`
}
type TapConfig struct {
@@ -219,6 +240,7 @@ type TapConfig struct {
StorageClass string `yaml:"storageClass" json:"storageClass" default:"standard"`
DryRun bool `yaml:"dryRun" json:"dryRun" default:"false"`
Resources ResourcesConfig `yaml:"resources" json:"resources"`
Probes ProbesConfig `yaml:"probes" json:"probes"`
ServiceMesh bool `yaml:"serviceMesh" json:"serviceMesh" default:"true"`
Tls bool `yaml:"tls" json:"tls" default:"true"`
DisableTlsLog bool `yaml:"disableTlsLog" json:"disableTlsLog" default:"true"`
@@ -234,7 +256,7 @@ type TapConfig struct {
Telemetry TelemetryConfig `yaml:"telemetry" json:"telemetry"`
ResourceGuard ResourceGuardConfig `yaml:"resourceGuard" json:"resourceGuard"`
Sentry SentryConfig `yaml:"sentry" json:"sentry"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter" default:"!dns and !tcp and !udp and !icmp"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter" default:"!dns and !error"`
ScriptingDisabled bool `yaml:"scriptingDisabled" json:"scriptingDisabled" default:"false"`
TargetedPodsUpdateDisabled bool `yaml:"targetedPodsUpdateDisabled" json:"targetedPodsUpdateDisabled" default:"false"`
PresetFiltersChangingEnabled bool `yaml:"presetFiltersChangingEnabled" json:"presetFiltersChangingEnabled" default:"true"`
@@ -243,6 +265,7 @@ type TapConfig struct {
Capabilities CapabilitiesConfig `yaml:"capabilities" json:"capabilities"`
GlobalFilter string `yaml:"globalFilter" json:"globalFilter" default:""`
EnabledDissectors []string `yaml:"enabledDissectors" json:"enabledDissectors"`
CustomMacros map[string]string `yaml:"customMacros" json:"customMacros" default:"{\"https\":\"tls and (http or http2)\"}"`
Metrics MetricsConfig `yaml:"metrics" json:"metrics"`
Pprof PprofConfig `yaml:"pprof" json:"pprof"`
Misc MiscConfig `yaml:"misc" json:"misc"`
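
Two of the new knobs defined here map directly onto Helm values: the worker `srvPort` (moved from 30001 to 48999) and the per-component probe settings. A sketch using the defaults from this change set; the sniffer `initialDelaySeconds` of 30 is only an example of a value you might raise, as in the probe-config commit above:

tap:
  proxy:
    worker:
      srvPort: 48999          # sniffer port, previously 30001
  probes:
    hub:
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 3
    sniffer:
      initialDelaySeconds: 30 # example: give the sniffer extra start-up time
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 3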

View File

@@ -1,6 +1,6 @@
apiVersion: v2
name: kubeshark
version: "52.3.89"
version: "52.3.95"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.co
keywords:

View File

@@ -131,7 +131,7 @@ Example for overriding image names:
| `tap.docker.overrideImage` | Can be used to directly override image names | `""` |
| `tap.docker.overrideTag` | Can be used to override image tags | `""` |
| `tap.proxy.hub.srvPort` | Hub server port. Change if already occupied. | `8898` |
| `tap.proxy.worker.srvPort` | Worker server port. Change if already occupied.| `30001` |
| `tap.proxy.worker.srvPort` | Worker server port. Change if already occupied.| `48999` |
| `tap.proxy.front.port` | Front service port. Change if already occupied.| `8899` |
| `tap.proxy.host` | Change to 0.0.0.0 to open up to the world. | `127.0.0.1` |
| `tap.regex` | Target (process traffic from) pods that match regex | `.*` |
@@ -160,6 +160,14 @@ Example for overriding image names:
| `tap.resources.tracer.limits.memory` | Memory limit for tracer | `3Gi` |
| `tap.resources.tracer.requests.cpu` | CPU request for tracer | `50m` |
| `tap.resources.tracer.requests.memory` | Memory request for tracer | `50Mi` |
| `tap.probes.hub.initialDelaySeconds` | Initial delay before probing the hub | `15` |
| `tap.probes.hub.periodSeconds` | Period between probes for the hub | `10` |
| `tap.probes.hub.successThreshold` | Number of successful probes before considering the hub healthy | `1` |
| `tap.probes.hub.failureThreshold` | Number of failed probes before considering the hub unhealthy | `3` |
| `tap.probes.sniffer.initialDelaySeconds` | Initial delay before probing the sniffer | `15` |
| `tap.probes.sniffer.periodSeconds` | Period between probes for the sniffer | `10` |
| `tap.probes.sniffer.successThreshold` | Number of successful probes before considering the sniffer healthy | `1` |
| `tap.probes.sniffer.failureThreshold` | Number of failed probes before considering the sniffer unhealthy | `3` |
| `tap.serviceMesh` | Capture traffic from service meshes like Istio, Linkerd, Consul, etc. | `true` |
| `tap.tls` | Capture the encrypted/TLS traffic from cryptography libraries like OpenSSL | `true` |
| `tap.disableTlsLog` | Suppress logging for TLS/eBPF | `true` |
@@ -175,7 +183,7 @@ Example for overriding image names:
| `tap.auth.saml.x509crt` | A self-signed X.509 `.cert` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.x509key` | A self-signed X.509 `.key` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.roleAttribute` | A SAML attribute name corresponding to user's authorization role <br/>(effective, if `tap.auth.type = saml`) | `role` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "scriptingPermissions":{"canSave":true, "canActivate":true, "canDelete":true}, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.ingress.enabled` | Enable `Ingress` | `false` |
| `tap.ingress.className` | Ingress class name | `""` |
| `tap.ingress.host` | Host of the `Ingress` | `ks.svc.cluster.local` |
@@ -187,10 +195,10 @@ Example for overriding image names:
| `tap.resourceGuard.enabled` | Enable resource guard worker process, which watches RAM/disk usage and enables/disables traffic capture based on available resources | `false` |
| `tap.sentry.enabled` | Enable sending of error logs to Sentry | `false` |
| `tap.sentry.environment` | Sentry environment to label error logs with | `production` |
| `tap.defaultFilter` | Sets the default dashboard KFL filter (e.g. `http`). By default, this value is set to filter out noisy protocols such as DNS, UDP, ICMP and TCP. The user can easily change this in the Dashboard. You can also change this value to change this behavior. | `"!dns and !tcp and !udp and !icmp"` |
| `tap.defaultFilter` | Sets the default dashboard KFL filter (e.g. `http`). By default, this value is set to filter out noisy protocols such as DNS, UDP, ICMP and TCP. The user can easily change this, **temporarily**, in the Dashboard. For a permanent change, you should change this value in the `values.yaml` or `config.yaml` file. | `"!dns and !error"` |
| `tap.globalFilter` | Prepends to any KFL filter and can be used to limit what is visible in the dashboard. For example, `redact("request.headers.Authorization")` will redact the appropriate field. Another example `!dns` will not show any DNS traffic. | `""` |
| `tap.metrics.port` | Pod port used to expose Prometheus metrics | `49100` |
| `tap.enabledDissectors` | This is an array of strings representing the list of supported protocols. Remove or comment out redundant protocols (e.g., dns).| The default list excludes: `dns` and `tcp` |
| `tap.enabledDissectors` | This is an array of strings representing the list of supported protocols. Remove or comment out redundant protocols (e.g., dns).| The default list excludes: `udp` and `tcp` |
| `logs.file` | Logs dump path | `""` |
| `pcapdump.enabled` | Enable recording of all traffic captured according to other parameters. Whatever Kubeshark captures, considering pod targeting rules, will be stored in pcap files ready to be viewed by tools | `true` |
| `pcapdump.maxTime` | The time window into the past that will be stored. Older traffic will be discarded. | `2h` |
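
The table documents the new `defaultFilter` default of `!dns and !error`; for a permanent change it has to be set in the values rather than in the Dashboard. A sketch combining it with `globalFilter`, reusing the redaction expression from the table's own example:

tap:
  defaultFilter: "!dns and !error"                         # default dashboard KFL filter
  globalFilter: 'redact("request.headers.Authorization")'  # prepended to every KFL query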

View File

@@ -31,8 +31,8 @@ rules:
- namespaces
verbs:
- get
resourceNames:
- kube-system
- list
- watch
- apiGroups:
- networking.k8s.io
resources:

View File

@@ -31,9 +31,8 @@ spec:
- ./hub
- -port
- "8080"
{{- if .Values.tap.debug }}
- -debug
{{- end }}
- -loglevel
- '{{ .Values.logLevel | default "warning" }}'
env:
- name: POD_NAME
valueFrom:
@@ -66,17 +65,17 @@ spec:
{{- end }}
{{- end }}
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
tcpSocket:
port: 8080
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
tcpSocket:
port: 8080
resources:

View File

@@ -25,6 +25,39 @@ spec:
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
initContainers:
- command:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: check-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
mountPropagation: Bidirectional
- command:
- ./tracer
- -init-bpf
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: init-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
containers:
- command:
- ./worker
@@ -36,6 +69,8 @@ spec:
- '{{ .Values.tap.metrics.port }}'
- -packet-capture
- '{{ .Values.tap.packetCapture }}'
- -loglevel
- '{{ .Values.logLevel | default "warning" }}'
{{- if .Values.tap.tls }}
- -unixsocket
{{- end }}
@@ -54,9 +89,6 @@ spec:
- '{{ .Values.tap.misc.resolutionStrategy }}'
- -staletimeout
- '{{ .Values.tap.misc.staleTimeoutSeconds }}'
{{- if .Values.tap.debug }}
- -debug
{{- end }}
{{- if .Values.tap.docker.overrideImage.worker }}
image: '{{ .Values.tap.docker.overrideImage.worker }}'
{{- else if .Values.tap.docker.overrideTag.worker }}
@@ -123,20 +155,25 @@ spec:
{{ print "- " . }}
{{- end }}
{{- end }}
{{- if .Values.tap.capabilities.ebpfCapture }}
{{- range .Values.tap.capabilities.ebpfCapture }}
{{ print "- " . }}
{{- end }}
{{- end }}
drop:
- ALL
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvPort }}
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvPort }}
volumeMounts:
@@ -156,9 +193,6 @@ spec:
{{- if ne .Values.tap.packetCapture "ebpf" }}
- -disable-ebpf
{{- end }}
{{- if .Values.tap.debug }}
- -debug
{{- end }}
{{- if .Values.tap.disableTlsLog }}
- -disable-tls-log
{{- end }}
@@ -166,6 +200,8 @@ spec:
- -port
- '{{ add .Values.tap.proxy.worker.srvPort 1 }}'
{{- end }}
# - -loglevel
# - '{{ .Values.logLevel | default "warning" }}'
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
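
The sniffer's capability list is now extended from `tap.capabilities.ebpfCapture` when eBPF capture is in use. A values sketch, assuming the capabilities visible in the rendered manifest later in this compare (notably the newly added IPC_LOCK) are supplied through that list:

tap:
  capabilities:
    ebpfCapture:       # granted to the sniffer only for the eBPF capture path
      - SYS_ADMIN
      - SYS_PTRACE
      - SYS_RESOURCE
      - IPC_LOCK       # allows locking memory (mlock), which eBPF capture needs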

View File

@@ -50,6 +50,7 @@ data:
{{- end }}'
DUPLICATE_TIMEFRAME: '{{ .Values.tap.misc.duplicateTimeframe }}'
ENABLED_DISSECTORS: '{{ gt (len .Values.tap.enabledDissectors) 0 | ternary (join "," .Values.tap.enabledDissectors) "" }}'
CUSTOM_MACROS: '{{ toJson .Values.tap.customMacros }}'
DISSECTORS_UPDATING_ENABLED: '{{ .Values.dissectorsUpdatingEnabled | ternary "true" "false" }}'
DETECT_DUPLICATES: '{{ .Values.tap.misc.detectDuplicates | ternary "true" "false" }}'
PCAP_DUMP_ENABLE: '{{ .Values.pcapdump.enabled }}'
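
`CUSTOM_MACROS` is rendered straight from `tap.customMacros`, so user-defined KFL macros can sit next to the built-in `https` one. A sketch (the second entry is hypothetical and only shows the shape):

tap:
  customMacros:
    https: tls and (http or http2)            # default macro added in this change set
    broken: http and response.status == 500   # hypothetical user-defined macro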

View File

@@ -0,0 +1,23 @@
---
kind: Service
apiVersion: v1
metadata:
labels:
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-hub-metrics
namespace: {{ .Release.Namespace }}
spec:
selector:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
ports:
- name: metrics
protocol: TCP
port: 9100
targetPort: 9100
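
This new `kubeshark-hub-metrics` Service carries the usual `prometheus.io/scrape` and `prometheus.io/port` annotations. One common way to pick it up, assuming an annotation-driven Prometheus setup rather than the Operator (this scrape config is illustrative and not part of the chart):

scrape_configs:
  - job_name: kubeshark-hub-metrics
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_port]
        regex: "9100"
        action: keep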

View File

@@ -20,6 +20,9 @@ spec:
- ports:
- protocol: TCP
port: 8080
- ports:
- protocol: TCP
port: 9100
egress:
- {}
---

View File

@@ -2,26 +2,36 @@ Thank you for installing {{ title .Chart.Name }}.
Registry: {{ .Values.tap.docker.registry }}
Tag: {{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}
{{- if .Values.tap.docker.overrideTag.worker }}
Overridden worker tag: {{ .Values.tap.docker.overrideTag.worker }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.hub }}
Overridden hub tag: {{ .Values.tap.docker.overrideTag.hub }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.front }}
Overridden front tag: {{ .Values.tap.docker.overrideTag.front }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.worker }}
Overridden worker image: {{ .Values.tap.docker.overrideImage.worker }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.hub }}
Overridden hub image: {{ .Values.tap.docker.overrideImage.hub }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.front }}
Overridden front image: {{ .Values.tap.docker.overrideImage.front }}
{{- end }}
Your deployment has been successful. The release is named `{{ .Release.Name }}` and it has been deployed in the `{{ .Release.Namespace }}` namespace.
{{- if .Values.tap.telemetry.enabled }}
Notice: Telemetry is enabled. Kubeshark will collect anonymous usage statistics.
{{ end }}
Notices:
{{- if .Values.supportChatEnabled}}
- Support chat using Intercom is enabled. It can be disabled using `--set supportChatEnabled=false`
{{- end }}
{{- if eq .Values.license ""}}
- No license key was detected. You can get your license key from https://console.kubeshark.co/.
{{- end }}
{{- if .Values.tap.ingress.enabled }}
{{ if .Values.tap.ingress.enabled }}
You can now access the application through the following URL:
http{{ if .Values.tap.ingress.tls }}s{{ end }}://{{ .Values.tap.ingress.host }}
@@ -36,4 +46,4 @@ To access the application, follow these steps:
2. Once port forwarding is done, you can access the application by visiting the following URL in your web browser:
http://0.0.0.0:8899
{{ end }}
{{- end }}

View File

@@ -16,7 +16,7 @@ tap:
front: ""
proxy:
worker:
srvPort: 30001
srvPort: 48999
hub:
srvPort: 8898
front:
@@ -59,6 +59,17 @@ tap:
requests:
cpu: 50m
memory: 50Mi
probes:
hub:
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
sniffer:
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
serviceMesh: true
tls: true
disableTlsLog: true
@@ -85,6 +96,10 @@ tap:
filter: ""
canDownloadPCAP: true
canUseScripting: true
scriptingPermissions:
canSave: true
canActivate: true
canDelete: true
canUpdateTargetedPods: true
canStopTrafficCapturing: true
showAdminConsoleLink: true
@@ -103,7 +118,7 @@ tap:
sentry:
enabled: false
environment: production
defaultFilter: "!dns and !tcp and !udp and !icmp"
defaultFilter: "!dns and !error"
scriptingDisabled: false
targetedPodsUpdateDisabled: false
presetFiltersChangingEnabled: true
@@ -133,7 +148,10 @@ tap:
- sctp
- syscall
- ws
- tls
- ldap
- radius
customMacros:
https: tls and (http or http2)
metrics:
port: 49100
pprof:
@@ -160,6 +178,7 @@ pcapdump:
maxTime: 1h
maxSize: 500MB
pcapSrcDir: pcapdump
time: time
kube:
configPath: ""
context: ""
@@ -178,3 +197,4 @@ scripting:
active: []
console: true
timezone: ""
logLevel: warning
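
`enabledDissectors` now ships with `ldap` and `radius` and without the `tls` entry, which was renamed to `tlsx` (the TLS-handshake dissector) and is commented out by default. A trimmed sketch keeping only a few protocols as an example:

tap:
  enabledDissectors:
    - http
    - ws
    - ldap       # added in this change set
    - radius     # added in this change set
    # - tlsx     # TLS-handshake dissector, renamed from "tls"; uncomment if needed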

View File

@@ -1,13 +1,13 @@
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub-network-policy
@@ -23,18 +23,21 @@ spec:
- ports:
- protocol: TCP
port: 8080
- ports:
- protocol: TCP
port: 9100
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front-network-policy
@@ -53,15 +56,15 @@ spec:
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-network-policy
@@ -76,7 +79,7 @@ spec:
ingress:
- ports:
- protocol: TCP
port: 30001
port: 48999
- protocol: TCP
port: 49100
egress:
@@ -87,10 +90,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-service-account
@@ -104,10 +107,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''
@@ -121,10 +124,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_CRT: |
@@ -137,10 +140,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_KEY: |
@@ -152,10 +155,10 @@ metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
data:
default.conf: |
@@ -216,10 +219,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'
@@ -236,7 +239,7 @@ data:
AUTH_TYPE: 'oidc'
AUTH_SAML_IDP_METADATA_URL: ''
AUTH_SAML_ROLE_ATTRIBUTE: 'role'
AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canStopTrafficCapturing":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","showAdminConsoleLink":true}}'
AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canStopTrafficCapturing":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","scriptingPermissions":{"canActivate":true,"canDelete":true,"canSave":true},"showAdminConsoleLink":true}}'
TELEMETRY_DISABLED: 'false'
SCRIPTING_DISABLED: ''
TARGETED_PODS_UPDATE_DISABLED: ''
@@ -244,7 +247,7 @@ data:
RECORDING_DISABLED: ''
STOP_TRAFFIC_CAPTURING_DISABLED: 'false'
GLOBAL_FILTER: ""
DEFAULT_FILTER: "!dns and !tcp and !udp and !icmp"
DEFAULT_FILTER: "!dns and !error"
TRAFFIC_SAMPLE_RATE: '100'
JSON_TTL: '5m'
PCAP_TTL: '10s'
@@ -252,7 +255,8 @@ data:
TIMEZONE: ' '
CLOUD_LICENSE_ENABLED: 'true'
DUPLICATE_TIMEFRAME: '200ms'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,sctp,syscall,ws,tls'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,sctp,syscall,ws,ldap,radius'
CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
DISSECTORS_UPDATING_ENABLED: 'true'
DETECT_DUPLICATES: 'false'
PCAP_DUMP_ENABLE: 'true'
@@ -266,10 +270,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-default
@@ -295,8 +299,8 @@ rules:
- namespaces
verbs:
- get
resourceNames:
- kube-system
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
@@ -314,10 +318,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-binding-default
@@ -336,10 +340,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role
@@ -366,10 +370,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role-binding
@@ -389,10 +393,10 @@ kind: Service
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -411,10 +415,10 @@ apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -433,10 +437,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
@@ -446,10 +450,10 @@ metadata:
spec:
selector:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
@@ -457,6 +461,35 @@ spec:
port: 49100
targetPort: 49100
---
# Source: kubeshark/templates/16-hub-service-metrics.yaml
kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
name: kubeshark-hub-metrics
namespace: default
spec:
selector:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
protocol: TCP
port: 9100
targetPort: 9100
---
# Source: kubeshark/templates/09-worker-daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
@@ -464,10 +497,10 @@ metadata:
labels:
app.kubeshark.co/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-daemon-set
@@ -482,25 +515,52 @@ spec:
metadata:
labels:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
initContainers:
- command:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: check-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
mountPropagation: Bidirectional
- command:
- ./tracer
- -init-bpf
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: init-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
containers:
- command:
- ./worker
- -i
- any
- -port
- '30001'
- '48999'
- -metrics-port
- '49100'
- -packet-capture
- 'best'
- -loglevel
- 'warning'
- -unixsocket
- -servicemesh
- -procfs
@@ -510,7 +570,7 @@ spec:
- 'auto'
- -staletimeout
- '30'
image: 'docker.io/kubeshark/worker:v52.3.89'
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: sniffer
ports:
@@ -559,22 +619,26 @@ spec:
- SYS_ADMIN
- SYS_PTRACE
- DAC_OVERRIDE
- SYS_ADMIN
- SYS_PTRACE
- SYS_RESOURCE
- IPC_LOCK
drop:
- ALL
readinessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
initialDelaySeconds: 15
tcpSocket:
port: 30001
port: 48999
livenessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
initialDelaySeconds: 15
tcpSocket:
port: 30001
port: 48999
volumeMounts:
- mountPath: /hostproc
name: proc
@@ -590,7 +654,9 @@ spec:
- /hostproc
- -disable-ebpf
- -disable-tls-log
image: 'docker.io/kubeshark/worker:v52.3.89'
# - -loglevel
# - 'warning'
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: tracer
env:
@@ -692,10 +758,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -711,10 +777,10 @@ spec:
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
spec:
dnsPolicy: ClusterFirstWithHostNet
@@ -725,6 +791,8 @@ spec:
- ./hub
- -port
- "8080"
- -loglevel
- 'warning'
env:
- name: POD_NAME
valueFrom:
@@ -742,20 +810,20 @@ spec:
value: 'https://api.kubeshark.co'
- name: PROFILING_ENABLED
value: 'false'
image: 'docker.io/kubeshark/hub:v52.3.89'
image: 'docker.io/kubeshark/hub:v52.3.95'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
initialDelaySeconds: 15
tcpSocket:
port: 8080
livenessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
initialDelaySeconds: 15
tcpSocket:
port: 8080
resources:
@@ -796,10 +864,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -815,10 +883,10 @@ spec:
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.3.89
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.89"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
spec:
containers:
@@ -853,7 +921,7 @@ spec:
value: 'false'
- name: REACT_APP_SENTRY_ENVIRONMENT
value: 'production'
image: 'docker.io/kubeshark/front:v52.3.89'
image: 'docker.io/kubeshark/front:v52.3.95'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe: