Compare commits


48 Commits

Author SHA1 Message Date
Alon Girmonsky
aca3f4ad44 🔖 Bump the Helm chart version to 52.3.95 2025-01-11 15:53:08 +01:00
Volodymyr Stoiko
f9c66df528 Update worker liveness/readiness config (#1684)
* Increase worker init delay to 30s

* Update values

* fix

* Make probe values configurable

* upd

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2025-01-08 13:09:51 -08:00
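
A minimal sketch of the newly configurable probe values, using the `tap.probes.*` keys documented in the chart README diff below (defaults per `ProbeConfig`; the 30s sniffer delay reflects the PR's first bullet and is otherwise an assumption):

```yaml
tap:
  probes:
    hub:
      initialDelaySeconds: 15   # delay before the first probe
      periodSeconds: 10         # interval between probes
      successThreshold: 1
      failureThreshold: 3
    sniffer:
      initialDelaySeconds: 30   # the PR bumps the worker/sniffer init delay to 30s
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 3
```
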
Alon Girmonsky
1d572e6bff support new radius protocol (#1682) 2025-01-08 13:05:57 -08:00
Alon Girmonsky
46ad335446 updated the notes (#1681) 2025-01-06 18:42:17 -08:00
Alon Girmonsky
317357e83b Update the Helm chart 2025-01-03 15:42:21 -08:00
Alon Girmonsky
d89ef8789f 🔖 Bump the Helm chart version to 52.3.93 2025-01-03 14:22:00 -08:00
Alon Girmonsky
773ad78ac7 extended the https macro to include http2
in addition to http
2025-01-03 10:30:22 -08:00
Alon Girmonsky
bbcaf74fa7 added https as a default macro (#1680)
* added https as a default macro

* small syntax fix
2025-01-03 09:54:50 -08:00
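
Together with the http2 extension above, this macro lands as the `tap.customMacros` default visible in `TapConfig` later in this diff; a sketch of overriding or adding macros (the second entry is hypothetical):

```yaml
tap:
  customMacros:
    https: tls and (http or http2)            # default added by these commits
    # my-macro: http and response.status == 500   # hypothetical custom entry
```
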
M. Mert Yildiran
639f1deb51 Add CUSTOM_MACROS to ConfigMap (#1674)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-25 16:45:03 -08:00
Alon Girmonsky
b377bfe35f Revert "Revert "Initialize kubeshark pinned eBPF resources inside init container (#1665)" (#1676)" (#1678)
This reverts commit 12f8883052.
2024-12-25 16:21:08 -08:00
Serhii Ponomarenko
5242d9af07 🛂 Add save/activate/delete role scripting permissions (#1675)
* 🛂 Add save/activate/delete role scripting permissions

* 🔧 Add scripting permissions to tap-config

* 🔨 Re-generate helm values & `complete.yaml`

* 📝 Add scripting permissions to helm chart docs

* 🏷️ Make scripting permissions `true` by default

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-25 12:33:16 -08:00
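
A sketch of a SAML role using the new permissions, following the `tap.auth.saml.roles` default shown in the chart README diff below (all three flags default to `true` for the admin role):

```yaml
tap:
  auth:
    saml:
      roles:
        admin:
          canUseScripting: true
          scriptingPermissions:
            canSave: true       # save scripts from the dashboard
            canActivate: true   # activate/deactivate scripts
            canDelete: true     # delete scripts
```
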
M. Mert Yildiran
12f8883052 Revert "Initialize kubeshark pinned eBPF resources inside init container (#1665)" (#1676)
This reverts commit 29de008f22.
2024-12-25 11:21:51 -08:00
Alon Girmonsky
7eef5efcd9 Added security capabilities, especially IPC_LOCK (#1671)
to the Sniffer in case the eBPF traffic capture mechanism is used.
2024-12-23 16:49:54 -08:00
M. Mert Yildiran
af47154a8d Revert "Add CUSTOM_MACROS to ConfigMap"
This reverts commit 17759d296d.
2024-12-23 21:26:42 +03:00
M. Mert Yildiran
17759d296d Add CUSTOM_MACROS to ConfigMap 2024-12-23 21:25:11 +03:00
Ilya Gavrilov
29de008f22 Initialize kubeshark pinned eBPF resources inside init container (#1665)
* Clean kubeshark pinned bpf resources inside init container

* Clean kubeshark pinned bpf resources inside init container

* Update 09-worker-daemon-set.yaml

* add IPC_LOCK capability to sniffer

* add init container to mount bpf filesystem

* add init container to mount bpf filesystem

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 16:20:13 -08:00
Volodymyr Stoiko
261a0ca1a9 Replace sniffer 30001 port with 48999 (#1670)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 12:37:01 -08:00
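
Anyone overriding the worker port in values should note the new default, per `WorkerConfig` later in this diff:

```yaml
tap:
  proxy:
    worker:
      srvPort: 48999   # replaces the previous default of 30001
```
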
Volodymyr Stoiko
e819e9b697 Add hub metrics port (#1666)
* Add hub metrics port

* Add policies and service

* Use static 9100 port for hub metrics

* fix

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-19 12:06:29 -08:00
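
For reference, the chart's metrics knob as it appears in `MetricsConfig` later in this diff; the hub's own metrics port is pinned to a static 9100 by this PR, while `tap.metrics.port` (default `49100`) remains the worker-side setting — treat that split as an assumption based on the PR bullets:

```yaml
tap:
  metrics:
    port: 49100   # Prometheus metrics port (chart default)
```
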
Alon Girmonsky
a03aa56d07 removed the loglevel flag (#1669)
following the revert of the tracer version: https://github.com/kubeshark/worker/pull/478
2024-12-16 12:34:51 -08:00
Serhii Ponomarenko
83f437f3f8 🛂 Create save/activate/delete role scripting permissions (#1667)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-16 11:19:00 -08:00
bogdanvbalan
f5637972f2 Add --time param to pcapdump (#1664)
* Add --time param to pcapdump

* Update description

* Remove obsolete code

* Revert config change

* Add time to pcap config

---------

Co-authored-by: bogdan.balan1 <bogdanvalentin.balan@1nce.com>
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-16 08:29:40 -08:00
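
For context, the new `--time` flag copies only PCAP files newer than now minus the given duration (e.g. `--time 10m`), per the flag help later in this diff. The related `pcapdump` config block, with defaults taken from `PcapDumpConfig` and the chart README (a sketch — the two sources disagree on `maxTime`, 1h vs 2h):

```yaml
pcapdump:
  enabled: true    # continuously dump captured traffic to PCAP files
  maxTime: 1h      # retention window for old PCAP files
  maxSize: 500MB   # size cap before old files are deleted
```
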
Alon Girmonsky
4cabf13788 from debug to logLevel (#1668)
* updated helm values

* removed the tap.debug field
from the tapConfig struct

* Revert "removed the tap.debug field"

This reverts commit f911c02f0d.

* support the -d --debug command
with the new logLevel flag
2024-12-15 17:27:05 -08:00
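
A sketch of the resulting config: `logLevel` is a top-level field (default `warning`, per `ConfigStruct` later in this diff), and `-d`/`--debug` now maps onto it:

```yaml
logLevel: debug   # top-level field, not under tap; default is "warning"
```
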
Alon Girmonsky
cd1d7e4a58 🔖 Bump the Helm chart version to 52.3.92 2024-12-09 11:42:05 -08:00
Alon Girmonsky
9b7e2e7144 change the tlx dissector name to tlsx (#1648)
as it can be confused with HTTPS;
tlsx is a dissector for TLS handshakes, not an
HTTPS dissector.
2024-12-08 20:36:58 -08:00
Alon Girmonsky
80fa18cbba Added LDAP support (#1647) 2024-12-08 15:02:50 -08:00
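
Dissectors are opted in via `tap.enabledDissectors`; a sketch reflecting the default list visible in `CreateDefaultConfig` later in this diff (tcp/udp stay commented out as noisy):

```yaml
tap:
  enabledDissectors:
    # - tcp      # excluded by default
    # - udp
    - ws
    - tls
    # - tlsx     # TLS-handshake dissector, off by default
    - ldap       # added in #1647
    - radius     # added in #1682
```
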
Alon Girmonsky
dfbb321084 Default startup values change (#1646)
* updated the defaultFilter default values and docs.

* fixed a small error in the docs
2024-12-08 14:48:13 -08:00
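
The resulting default, per the `DefaultFilter` tag change in `TapConfig` later in this diff:

```yaml
tap:
  defaultFilter: "!dns and !error"   # previously "!dns and !tcp and !udp and !icmp"
```
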
Alon Girmonsky
d85dc58f20 fixed a bug where a new script couldn't be added if (#1645)
a previous one was deleted
2024-12-06 14:00:05 -08:00
Volodymyr Stoiko
993b8ae19e Add permissions to watch namespaces (#1644)
* Add permissions to watch namespaces

* Allow watching all namespaces

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-12-03 18:32:23 -08:00
Alon Girmonsky
77f81c8ab3 🔖 Bump the Helm chart version to 52.3.91 2024-12-01 15:26:34 -08:00
Alon Girmonsky
a24b40a0c1 fixed a small issue in the Makefile 2024-11-20 12:59:34 +02:00
Alon Girmonsky
4817ed2a80 🔖 Bump the Helm chart version to 52.3.90 2024-11-20 12:47:04 +02:00
Alon Girmonsky
d66ec06928 updated the pcapdump command's help text and
deprecated the `export` command
2024-11-20 12:30:52 +02:00
Alon Girmonsky
125e3abe6c Brought back `.gitignore` after it was deleted due to a bad commit. 2024-11-10 15:25:30 -08:00
Alon Girmonsky
8221c4ef10 removed two binary files uploaded due to a bad earlier commit. 2024-11-10 15:23:03 -08:00
Alon Girmonsky
7f216b2958 Fixed a bug in the Makefile 2024-11-10 15:22:08 -08:00
Alon Girmonsky
67006e2fc7 🔖 Bump the Helm chart version to 52.3.89 2024-11-10 15:04:27 -08:00
Alon Girmonsky
d0adbc357f if no scripting source folders, that's not an error 2024-11-06 11:34:44 -08:00
Volodymyr Stoiko
8e135d570b Remove pfring leftovers from ds (#1642)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-11-06 11:11:44 -08:00
Volodymyr Stoiko
f21f68a7e0 Fix frontend port (#1641)
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-11-06 11:09:41 -08:00
Alon Girmonsky
5f13f7d28d Added an option to provide multiple script sources. (#1640) 2024-11-05 17:00:33 -08:00
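
A sketch of the new multi-source layout, per `ScriptingConfig.Sources` later in this diff (paths are placeholders; the single `source` field remains supported):

```yaml
scripting:
  source: /opt/kubeshark/scripts       # single folder, still supported
  sources:                             # new: additional folders to watch
    - /opt/kubeshark/scripts-team-a
    - /opt/kubeshark/scripts-team-b
  watchScripts: true                   # default true
```
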
Volodymyr Stoiko
80d23d62bd Remove PF_RING references (#1638)
* Remove PF_RING references

* Update values

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-11-05 14:13:50 -08:00
Volodymyr Stoiko
bba1bbd1fb Watch cm creation and sync scripts (#1637)
* Fix graceful shutdown

* add helpers

* Watch for configmap changes

---------

Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
2024-11-05 13:35:17 -08:00
Volodymyr Stoiko
4a6628a3e8 Fix helm resource requests/limits templates (#1639) 2024-11-05 13:03:21 -08:00
Alon Girmonsky
bec0b25daa 🔖 Bump the Helm chart version to 52.3.88 2024-11-02 13:11:02 -07:00
Alon Girmonsky
9248f07af0 missing commit 2024-11-02 09:50:30 -07:00
Alon Girmonsky
a1e05db4b0 Improved resource limits and requests Helm templating 2024-11-02 09:49:45 -07:00
Alon Girmonsky
b3f6fdc831 Added the ability to override image names for cases where, when using CI, one needs to use individual image names (#1636) 2024-10-31 21:18:13 -07:00
Alon Girmonsky
e0c010eb29 🔖 Bump the Helm chart version to 52.3.87 2024-10-30 12:51:15 -07:00
25 changed files with 741 additions and 583 deletions

.gitignore vendored
View File

@@ -63,4 +63,4 @@ bin
scripts/
# CWD config YAML
kubeshark.yaml
kubeshark.yaml

View File

@@ -189,6 +189,18 @@ release:
@cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
@cd ../kubeshark
soft-release:
@cd ../worker && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../tracer && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
@cd ../kubeshark && git checkout master && git pull && sed -i 's/^version:.*/version: "$(VERSION)"/' helm-chart/Chart.yaml && make && make generate-helm-values && make generate-manifests
@git add -A . && git commit -m ":bookmark: Bump the Helm chart version to $(VERSION)" && git push
# @git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
# @cd helm-chart && cp -r . ../../kubeshark.github.io/charts/chart
# @cd ../../kubeshark.github.io/ && git add -A . && git commit -m ":sparkles: Update the Helm chart" && git push
# @cd ../kubeshark
branch:
@cd ../worker && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)
@cd ../hub && git checkout master && git pull && git checkout -b $(name); git push --set-upstream origin $(name)

View File

@@ -1,62 +0,0 @@
package cmd
import (
"fmt"
"net/http"
"os"
"path/filepath"
"time"
"github.com/creasty/defaults"
"github.com/kubeshark/kubeshark/config/configStructs"
"github.com/kubeshark/kubeshark/internal/connect"
"github.com/kubeshark/kubeshark/kubernetes"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
var exportCmd = &cobra.Command{
Use: "export",
Short: "Exports the captured traffic into a TAR file that contains PCAP files",
RunE: func(cmd *cobra.Command, args []string) error {
runExport()
return nil
},
}
func init() {
rootCmd.AddCommand(exportCmd)
defaultTapConfig := configStructs.TapConfig{}
if err := defaults.Set(&defaultTapConfig); err != nil {
log.Debug().Err(err).Send()
}
exportCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
exportCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
exportCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
func runExport() {
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
runProxy(false, true)
}
dstPath, err := filepath.Abs(fmt.Sprintf("./%d.tar.gz", time.Now().Unix()))
if err != nil {
panic(err)
}
out, err := os.Create(dstPath)
if err != nil {
panic(err)
}
defer out.Close()
connector := connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
connector.PostPcapsMerge(out)
}

View File

@@ -3,6 +3,7 @@ package cmd
import (
"errors"
"path/filepath"
"time"
"github.com/creasty/defaults"
"github.com/kubeshark/kubeshark/config/configStructs"
@@ -16,7 +17,7 @@ import (
// pcapDumpCmd represents the consolidated pcapdump command
var pcapDumpCmd = &cobra.Command{
Use: "pcapdump",
Short: "Manage PCAP dump operations: start, stop, or copy PCAP files",
Short: "Store all captured traffic (including decrypted TLS) in a PCAP file.",
RunE: func(cmd *cobra.Command, args []string) error {
// Retrieve the kubeconfig path from the flag
kubeconfig, _ := cmd.Flags().GetString(configStructs.PcapKubeconfig)
@@ -43,41 +44,26 @@ var pcapDumpCmd = &cobra.Command{
return err
}
// Parse the `--time` flag
timeIntervalStr, _ := cmd.Flags().GetString("time")
var cutoffTime *time.Time // Use a pointer to distinguish between provided and not provided
if timeIntervalStr != "" {
duration, err := time.ParseDuration(timeIntervalStr)
if err != nil {
log.Error().Err(err).Msg("Invalid time interval")
return err
}
tempCutoffTime := time.Now().Add(-duration)
cutoffTime = &tempCutoffTime
}
// Handle copy operation if the copy string is provided
if !cmd.Flags().Changed(configStructs.PcapDumpEnabled) {
destDir, _ := cmd.Flags().GetString(configStructs.PcapDest)
log.Info().Msg("Copying PCAP files")
err = copyPcapFiles(clientset, config, destDir)
if err != nil {
log.Error().Err(err).Msg("Error copying PCAP files")
return err
}
} else {
// Handle start operation if the start string is provided
enabled, err := cmd.Flags().GetBool(configStructs.PcapDumpEnabled)
if err != nil {
log.Error().Err(err).Msg("Error getting pcapdump enable flag")
return err
}
timeInterval, _ := cmd.Flags().GetString(configStructs.PcapTimeInterval)
maxTime, _ := cmd.Flags().GetString(configStructs.PcapMaxTime)
maxSize, _ := cmd.Flags().GetString(configStructs.PcapMaxSize)
err = startStopPcap(clientset, enabled, timeInterval, maxTime, maxSize)
if err != nil {
log.Error().Err(err).Msg("Error starting/stopping PCAP dump")
return err
}
if enabled {
log.Info().Msg("Pcapdump started successfully")
return nil
} else {
log.Info().Msg("Pcapdump stopped successfully")
return nil
}
destDir, _ := cmd.Flags().GetString(configStructs.PcapDest)
log.Info().Msg("Copying PCAP files")
err = copyPcapFiles(clientset, config, destDir, cutoffTime)
if err != nil {
log.Error().Err(err).Msg("Error copying PCAP files")
return err
}
return nil
@@ -92,10 +78,7 @@ func init() {
log.Debug().Err(err).Send()
}
pcapDumpCmd.Flags().String(configStructs.PcapTimeInterval, defaultPcapDumpConfig.PcapTimeInterval, "Time interval for PCAP file rotation (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapMaxTime, defaultPcapDumpConfig.PcapMaxTime, "Maximum time for retaining old PCAP files (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapMaxSize, defaultPcapDumpConfig.PcapMaxSize, "Maximum size of PCAP files before deletion (used with --start)")
pcapDumpCmd.Flags().String(configStructs.PcapTime, "", "Time interval (e.g., 10m, 1h) in the past for which the pcaps are copied")
pcapDumpCmd.Flags().String(configStructs.PcapDest, "", "Local destination path for copied PCAP files (can not be used together with --enabled)")
pcapDumpCmd.Flags().String(configStructs.PcapKubeconfig, "", "Enabled/Disable to pcap dumps (can not be used together with --dest)")
pcapDumpCmd.Flags().String(configStructs.PcapKubeconfig, "", "Path for kubeconfig (if not provided the default location will be checked)")
}

View File

@@ -6,8 +6,8 @@ import (
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/kubeshark/gopacket"
"github.com/kubeshark/gopacket/layers"
@@ -54,7 +54,7 @@ func listWorkerPods(ctx context.Context, clientset *clientk8s.Clientset, namespa
}
// listFilesInPodDir lists all files in the specified directory inside the pod across multiple namespaces
func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, config *rest.Config, podName string, namespaces []string, configMapName, configMapKey string) ([]NamespaceFiles, error) {
func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, config *rest.Config, podName string, namespaces []string, configMapName, configMapKey string, cutoffTime *time.Time) ([]NamespaceFiles, error) {
var namespaceFilesList []NamespaceFiles
for _, namespace := range namespaces {
@@ -114,12 +114,42 @@ func listFilesInPodDir(ctx context.Context, clientset *clientk8s.Clientset, conf
// Split the output (file names) into a list
files := strings.Split(strings.TrimSpace(stdoutBuf.String()), "\n")
if len(files) > 0 {
// Append the NamespaceFiles struct to the list
if len(files) == 0 {
log.Info().Msgf("No files found in directory %s in pod %s", srcFilePath, podName)
continue
}
var filteredFiles []string
// Filter files based on cutoff time if provided
for _, file := range files {
if cutoffTime != nil {
parts := strings.Split(file, "-")
if len(parts) < 2 {
log.Warn().Msgf("Skipping file with invalid format: %s", file)
continue
}
timestampStr := parts[len(parts)-2] + parts[len(parts)-1][:6] // Extract YYYYMMDDHHMMSS
fileTime, err := time.Parse("20060102150405", timestampStr)
if err != nil {
log.Warn().Err(err).Msgf("Skipping file with unparsable timestamp: %s", file)
continue
}
if fileTime.Before(*cutoffTime) {
continue
}
}
// Add file to filtered list
filteredFiles = append(filteredFiles, file)
}
if len(filteredFiles) > 0 {
namespaceFilesList = append(namespaceFilesList, NamespaceFiles{
Namespace: namespace,
SrcDir: srcDir,
Files: files,
Files: filteredFiles,
})
}
}
@@ -229,63 +259,8 @@ func mergePCAPs(outputFile string, inputFiles []string) error {
return nil
}
// setPcapConfigInKubernetes sets the PCAP config for all pods across multiple namespaces
func setPcapConfigInKubernetes(ctx context.Context, clientset *clientk8s.Clientset, podName string, namespaces []string, enabledPcap bool, timeInterval, maxTime, maxSize string) error {
for _, namespace := range namespaces {
// Load the existing ConfigMap in the current namespace
configMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(ctx, "kubeshark-config-map", metav1.GetOptions{})
if err != nil {
log.Error().Err(err).Msgf("failed to get ConfigMap in namespace %s", namespace)
continue
}
// Update the values with user-provided input
configMap.Data["PCAP_TIME_INTERVAL"] = timeInterval
configMap.Data["PCAP_MAX_SIZE"] = maxSize
configMap.Data["PCAP_MAX_TIME"] = maxTime
configMap.Data["PCAP_DUMP_ENABLE"] = strconv.FormatBool(enabledPcap)
// Apply the updated ConfigMap back to the cluster in the current namespace
_, err = clientset.CoreV1().ConfigMaps(namespace).Update(ctx, configMap, metav1.UpdateOptions{})
if err != nil {
log.Error().Err(err).Msgf("failed to update ConfigMap in namespace %s", namespace)
continue
}
}
return nil
}
// startPcap function for starting the PCAP capture
func startStopPcap(clientset *kubernetes.Clientset, pcapEnable bool, timeInterval, maxTime, maxSize string) error {
kubernetesProvider, err := getKubernetesProviderForCli(false, false)
if err != nil {
log.Error().Err(err).Send()
return err
}
targetNamespaces := kubernetesProvider.GetNamespaces()
// List worker pods
workerPods, err := listWorkerPods(context.Background(), clientset, targetNamespaces)
if err != nil {
log.Error().Err(err).Msg("Error listing worker pods")
return err
}
// Iterate over each pod to start the PCAP capture by updating the configuration in Kubernetes
for _, pod := range workerPods {
err := setPcapConfigInKubernetes(context.Background(), clientset, pod.Name, targetNamespaces, pcapEnable, timeInterval, maxTime, maxSize)
if err != nil {
log.Error().Err(err).Msgf("Error setting PCAP config for pod %s", pod.Name)
continue
}
}
return nil
}
// copyPcapFiles function for copying the PCAP files from the worker pods
func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir string) error {
func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir string, cutoffTime *time.Time) error {
kubernetesProvider, err := getKubernetesProviderForCli(false, false)
if err != nil {
log.Error().Err(err).Send()
@@ -305,7 +280,7 @@ func copyPcapFiles(clientset *kubernetes.Clientset, config *rest.Config, destDir
// Iterate over each pod to get the PCAP directory from config and copy files
for _, pod := range workerPods {
// Get the list of NamespaceFiles (files per namespace) and their source directories
namespaceFiles, err := listFilesInPodDir(context.Background(), clientset, config, pod.Name, targetNamespaces, SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, "PCAP_SRC_DIR")
namespaceFiles, err := listFilesInPodDir(context.Background(), clientset, config, pod.Name, targetNamespaces, SELF_RESOURCES_PREFIX+SUFFIX_CONFIG_MAP, "PCAP_SRC_DIR", cutoffTime)
if err != nil {
log.Error().Err(err).Msgf("Error listing files in pod %s", pod.Name)
continue

View File

@@ -3,7 +3,12 @@ package cmd
import (
"context"
"encoding/json"
"errors"
"os"
"os/signal"
"strings"
"sync"
"time"
"github.com/creasty/defaults"
"github.com/fsnotify/fsnotify"
@@ -11,14 +16,16 @@ import (
"github.com/kubeshark/kubeshark/config/configStructs"
"github.com/kubeshark/kubeshark/kubernetes"
"github.com/kubeshark/kubeshark/misc"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
)
var scriptsCmd = &cobra.Command{
Use: "scripts",
Short: "Watch the `scripting.source` directory for changes and update the scripts",
Short: "Watch the `scripting.source` and/or `scripting.sources` folders for changes and update the scripts",
RunE: func(cmd *cobra.Command, args []string) error {
runScripts()
return nil
@@ -39,8 +46,8 @@ func init() {
}
func runScripts() {
if config.Config.Scripting.Source == "" {
log.Error().Msg("`scripting.source` field is empty.")
if config.Config.Scripting.Source == "" && len(config.Config.Scripting.Sources) == 0 {
log.Error().Msg("Both `scripting.source` and `scripting.sources` fields are empty.")
return
}
@@ -50,39 +57,82 @@ func runScripts() {
return
}
watchScripts(kubernetesProvider, true)
var wg sync.WaitGroup
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, os.Interrupt)
wg.Add(1)
go func() {
defer wg.Done()
watchConfigMap(ctx, kubernetesProvider)
}()
wg.Add(1)
go func() {
defer wg.Done()
watchScripts(ctx, kubernetesProvider, true)
}()
go func() {
<-signalChan
log.Debug().Msg("Received interrupt, stopping watchers.")
cancel()
}()
wg.Wait()
}
func createScript(provider *kubernetes.Provider, script misc.ConfigMapScript) (index int64, err error) {
const maxRetries = 5
var scripts map[int64]misc.ConfigMapScript
scripts, err = kubernetes.ConfigGetScripts(provider)
if err != nil {
return
}
script.Active = kubernetes.IsActiveScript(provider, script.Title)
index = int64(len(scripts))
if script.Title != "New Script" {
for i, v := range scripts {
if v.Title == script.Title {
index = int64(i)
for i := 0; i < maxRetries; i++ {
scripts, err = kubernetes.ConfigGetScripts(provider)
if err != nil {
return
}
script.Active = kubernetes.IsActiveScript(provider, script.Title)
index = 0
if script.Title != "New Script" {
for i, v := range scripts {
if index <= i {
index = i + 1
}
if v.Title == script.Title {
index = int64(i)
}
}
}
}
scripts[index] = script
scripts[index] = script
log.Info().Str("title", script.Title).Bool("Active", script.Active).Int64("Index", index).Msg("Creating script")
var data []byte
data, err = json.Marshal(scripts)
if err != nil {
return
log.Info().Str("title", script.Title).Bool("Active", script.Active).Int64("Index", index).Msg("Creating script")
var data []byte
data, err = json.Marshal(scripts)
if err != nil {
return
}
_, err = kubernetes.SetConfig(provider, kubernetes.CONFIG_SCRIPTING_SCRIPTS, string(data))
if err == nil {
return index, nil
}
if k8serrors.IsConflict(err) {
log.Warn().Err(err).Msg("Conflict detected, retrying update...")
time.Sleep(500 * time.Millisecond)
continue
}
return 0, err
}
_, err = kubernetes.SetConfig(provider, kubernetes.CONFIG_SCRIPTING_SCRIPTS, string(data))
if err != nil {
return
}
return
log.Error().Msg("Max retries reached for creating script due to conflicts.")
return 0, errors.New("max retries reached due to conflicts while creating script")
}
func updateScript(provider *kubernetes.Provider, index int64, script misc.ConfigMapScript) (err error) {
@@ -134,7 +184,7 @@ func deleteScript(provider *kubernetes.Provider, index int64) (err error) {
return
}
func watchScripts(provider *kubernetes.Provider, block bool) {
func watchScripts(ctx context.Context, provider *kubernetes.Provider, block bool) {
files := make(map[string]int64)
scripts, err := config.Config.Scripting.GetScripts()
@@ -162,9 +212,31 @@ func watchScripts(provider *kubernetes.Provider, block bool) {
defer watcher.Close()
}
ctx, cancel := context.WithCancel(ctx)
defer cancel()
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, os.Interrupt)
go func() {
<-signalChan
log.Debug().Msg("Received interrupt, stopping script watch.")
cancel()
watcher.Close()
}()
if err := watcher.Add(config.Config.Scripting.Source); err != nil {
log.Error().Err(err).Msg("Failed to add scripting source to watcher")
return
}
go func() {
for {
select {
case <-ctx.Done():
log.Debug().Msg("Script watcher exiting gracefully.")
return
// watch for events
case event := <-watcher.Events:
if !strings.HasSuffix(event.Name, "js") {
@@ -213,9 +285,12 @@ func watchScripts(provider *kubernetes.Provider, block bool) {
// pass
}
// watch for errors
case err := <-watcher.Errors:
log.Error().Err(err).Send()
case err, ok := <-watcher.Errors:
if !ok {
log.Info().Msg("Watcher errors channel closed.")
return
}
log.Error().Err(err).Msg("Watcher error encountered")
}
}
}()
@@ -224,11 +299,79 @@ func watchScripts(provider *kubernetes.Provider, block bool) {
log.Error().Err(err).Send()
}
log.Info().Str("directory", config.Config.Scripting.Source).Msg("Watching scripts against changes:")
for _, source := range config.Config.Scripting.Sources {
if err := watcher.Add(source); err != nil {
log.Error().Err(err).Send()
}
}
log.Info().Str("folder", config.Config.Scripting.Source).Interface("folders", config.Config.Scripting.Sources).Msg("Watching scripts against changes:")
if block {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
utils.WaitForTermination(ctx, cancel)
<-ctx.Done()
}
}
func watchConfigMap(ctx context.Context, provider *kubernetes.Provider) {
clientset := provider.GetClientSet()
configMapName := kubernetes.SELF_RESOURCES_PREFIX + kubernetes.SUFFIX_CONFIG_MAP
for {
select {
case <-ctx.Done():
log.Info().Msg("ConfigMap watcher exiting gracefully.")
return
default:
watcher, err := clientset.CoreV1().ConfigMaps(config.Config.Tap.Release.Namespace).Watch(context.TODO(), metav1.ListOptions{
FieldSelector: "metadata.name=" + configMapName,
})
if err != nil {
log.Warn().Err(err).Msg("ConfigMap not found, retrying in 5 seconds...")
time.Sleep(5 * time.Second)
continue
}
for event := range watcher.ResultChan() {
select {
case <-ctx.Done():
log.Info().Msg("ConfigMap watcher loop exiting gracefully.")
watcher.Stop()
return
default:
if event.Type == watch.Added {
log.Info().Msg("ConfigMap created or modified")
runScriptsSync(provider)
} else if event.Type == watch.Deleted {
log.Warn().Msg("ConfigMap deleted, waiting for recreation...")
watcher.Stop()
break
}
}
}
time.Sleep(5 * time.Second)
}
}
}
func runScriptsSync(provider *kubernetes.Provider) {
files := make(map[string]int64)
scripts, err := config.Config.Scripting.GetScripts()
if err != nil {
log.Error().Err(err).Send()
return
}
for _, script := range scripts {
index, err := createScript(provider, script.ConfigMap())
if err != nil {
log.Error().Err(err).Send()
continue
}
files[script.Path] = index
}
log.Info().Msg("Synchronized scripts with ConfigMap.")
}

View File

@@ -424,8 +424,9 @@ func postFrontStarted(ctx context.Context, kubernetesProvider *kubernetes.Provid
time.Sleep(100 * time.Millisecond)
}
if config.Config.Scripting.Source != "" && config.Config.Scripting.WatchScripts {
watchScripts(kubernetesProvider, false)
if (config.Config.Scripting.Source != "" || len(config.Config.Scripting.Sources) > 0) && config.Config.Scripting.WatchScripts {
watchScripts(ctx, kubernetesProvider, false)
}
if config.Config.Scripting.Console {

View File

@@ -63,6 +63,9 @@ func InitConfig(cmd *cobra.Command) error {
Config = CreateDefaultConfig()
Config.Tap.Debug = DebugMode
if DebugMode {
Config.LogLevel = "debug"
}
cmdName = cmd.Name()
if utils.Contains([]string{
"clean",

View File

@@ -42,10 +42,6 @@ func CreateDefaultConfig() ConfigStruct {
// DAC_OVERRIDE is required to read /proc/PID/environ
"DAC_OVERRIDE",
},
KernelModule: []string{
// SYS_MODULE is required to install kernel modules
"SYS_MODULE",
},
EBPFCapture: []string{
// SYS_ADMIN is required to read /proc/PID/net/ns + to install eBPF programs (kernel < 5.8)
"SYS_ADMIN",
@@ -63,9 +59,14 @@ func CreateDefaultConfig() ConfigStruct {
RoleAttribute: "role",
Roles: map[string]configStructs.Role{
"admin": {
Filter: "",
CanDownloadPCAP: true,
CanUseScripting: true,
Filter: "",
CanDownloadPCAP: true,
CanUseScripting: true,
ScriptingPermissions: configStructs.ScriptingPermissions{
CanSave: true,
CanActivate: true,
CanDelete: true,
},
CanUpdateTargetedPods: true,
CanStopTrafficCapturing: true,
ShowAdminConsoleLink: true,
@@ -85,7 +86,9 @@ func CreateDefaultConfig() ConfigStruct {
// "tcp",
// "udp",
"ws",
"tls",
// "tlsx",
"ldap",
"radius",
},
},
}
@@ -116,6 +119,7 @@ type ConfigStruct struct {
Scripting configStructs.ScriptingConfig `yaml:"scripting" json:"scripting"`
Manifests ManifestsConfig `yaml:"manifests,omitempty" json:"manifests,omitempty"`
Timezone string `yaml:"timezone" json:"timezone"`
LogLevel string `yaml:"logLevel" json:"logLevel" default:"warning"`
}
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {

View File

@@ -1,6 +1,7 @@
package configStructs
import (
"fmt"
"io/fs"
"os"
"path/filepath"
@@ -13,41 +14,79 @@ import (
type ScriptingConfig struct {
Env map[string]interface{} `yaml:"env" json:"env" default:"{}"`
Source string `yaml:"source" json:"source" default:""`
Sources []string `yaml:"sources" json:"sources" default:"[]"`
WatchScripts bool `yaml:"watchScripts" json:"watchScripts" default:"true"`
Active []string `yaml:"active" json:"active" default:"[]"`
Console bool `yaml:"console" json:"console" default:"true"`
}
func (config *ScriptingConfig) GetScripts() (scripts []*misc.Script, err error) {
if config.Source == "" {
return
// Check if both Source and Sources are empty
if config.Source == "" && len(config.Sources) == 0 {
return nil, nil
}
var files []fs.DirEntry
files, err = os.ReadDir(config.Source)
if err != nil {
return
var allFiles []struct {
Source string
File fs.DirEntry
}
for _, f := range files {
if f.IsDir() {
// Handle single Source directory
if config.Source != "" {
files, err := os.ReadDir(config.Source)
if err != nil {
return nil, fmt.Errorf("failed to read directory %s: %v", config.Source, err)
}
for _, file := range files {
allFiles = append(allFiles, struct {
Source string
File fs.DirEntry
}{Source: config.Source, File: file})
}
}
// Handle multiple Sources directories
if len(config.Sources) > 0 {
for _, source := range config.Sources {
files, err := os.ReadDir(source)
if err != nil {
return nil, fmt.Errorf("failed to read directory %s: %v", source, err)
}
for _, file := range files {
allFiles = append(allFiles, struct {
Source string
File fs.DirEntry
}{Source: source, File: file})
}
}
}
// Iterate over all collected files
for _, f := range allFiles {
if f.File.IsDir() {
continue
}
var script *misc.Script
path := filepath.Join(config.Source, f.Name())
if !strings.HasSuffix(path, ".js") {
// Construct the full path based on the relevant source directory
path := filepath.Join(f.Source, f.File.Name())
if !strings.HasSuffix(f.File.Name(), ".js") { // Use file name suffix for skipping non-JS files
log.Info().Str("path", path).Msg("Skipping non-JS file")
continue
}
// Read the script file
var script *misc.Script
script, err = misc.ReadScriptFile(path)
if err != nil {
return
return nil, fmt.Errorf("failed to read script file %s: %v", path, err)
}
// Append the valid script to the scripts slice
scripts = append(scripts, script)
log.Debug().Str("path", path).Msg("Found script:")
}
return
// Return the collected scripts and nil error if successful
return scripts, nil
}

View File

@@ -35,28 +35,29 @@ const (
PprofPortLabel = "pprof-port"
PprofViewLabel = "pprof-view"
DebugLabel = "debug"
ContainerPort = 80
ContainerPortStr = "80"
ContainerPort = 8080
ContainerPortStr = "8080"
PcapDest = "dest"
PcapMaxSize = "maxSize"
PcapMaxTime = "maxTime"
PcapTimeInterval = "timeInterval"
PcapKubeconfig = "kubeconfig"
PcapDumpEnabled = "enabled"
PcapTime = "time"
)
type ResourceLimitsHub struct {
CPU string `yaml:"cpu" json:"cpu" default:""`
CPU string `yaml:"cpu" json:"cpu" default:"0"`
Memory string `yaml:"memory" json:"memory" default:"5Gi"`
}
type ResourceLimitsWorker struct {
CPU string `yaml:"cpu" json:"cpu" default:""`
CPU string `yaml:"cpu" json:"cpu" default:"0"`
Memory string `yaml:"memory" json:"memory" default:"3Gi"`
}
type ResourceRequests struct {
CPU string `yaml:"cpu" json:"cpu" default:""`
CPU string `yaml:"cpu" json:"cpu" default:"50m"`
Memory string `yaml:"memory" json:"memory" default:"50Mi"`
}
@@ -71,7 +72,7 @@ type ResourceRequirementsWorker struct {
}
type WorkerConfig struct {
SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"30001"`
SrvPort uint16 `yaml:"srvPort" json:"srvPort" default:"48999"`
}
type HubConfig struct {
@@ -89,6 +90,11 @@ type ProxyConfig struct {
Host string `yaml:"host" json:"host" default:"127.0.0.1"`
}
type OverrideImageConfig struct {
Worker string `yaml:"worker" json:"worker"`
Hub string `yaml:"hub" json:"hub"`
Front string `yaml:"front" json:"front"`
}
type OverrideTagConfig struct {
Worker string `yaml:"worker" json:"worker"`
Hub string `yaml:"hub" json:"hub"`
@@ -96,12 +102,13 @@ type OverrideTagConfig struct {
}
type DockerConfig struct {
Registry string `yaml:"registry" json:"registry" default:"docker.io/kubeshark"`
Tag string `yaml:"tag" json:"tag" default:""`
TagLocked bool `yaml:"tagLocked" json:"tagLocked" default:"true"`
ImagePullPolicy string `yaml:"imagePullPolicy" json:"imagePullPolicy" default:"Always"`
ImagePullSecrets []string `yaml:"imagePullSecrets" json:"imagePullSecrets"`
OverrideTag OverrideTagConfig `yaml:"overrideTag" json:"overrideTag"`
Registry string `yaml:"registry" json:"registry" default:"docker.io/kubeshark"`
Tag string `yaml:"tag" json:"tag" default:""`
TagLocked bool `yaml:"tagLocked" json:"tagLocked" default:"true"`
ImagePullPolicy string `yaml:"imagePullPolicy" json:"imagePullPolicy" default:"Always"`
ImagePullSecrets []string `yaml:"imagePullSecrets" json:"imagePullSecrets"`
OverrideImage OverrideImageConfig `yaml:"overrideImage" json:"overrideImage"`
OverrideTag OverrideTagConfig `yaml:"overrideTag" json:"overrideTag"`
}
type ResourcesConfig struct {
@@ -110,13 +117,32 @@ type ResourcesConfig struct {
Tracer ResourceRequirementsWorker `yaml:"tracer" json:"tracer"`
}
type ProbesConfig struct {
Hub ProbeConfig `yaml:"hub" json:"hub"`
Sniffer ProbeConfig `yaml:"sniffer" json:"sniffer"`
}
type ProbeConfig struct {
InitialDelaySeconds int `yaml:"initialDelaySeconds" json:"initialDelaySeconds" default:"15"`
PeriodSeconds int `yaml:"periodSeconds" json:"periodSeconds" default:"10"`
SuccessThreshold int `yaml:"successThreshold" json:"successThreshold" default:"1"`
FailureThreshold int `yaml:"failureThreshold" json:"failureThreshold" default:"3"`
}
type ScriptingPermissions struct {
CanSave bool `yaml:"canSave" json:"canSave" default:"true"`
CanActivate bool `yaml:"canActivate" json:"canActivate" default:"true"`
CanDelete bool `yaml:"canDelete" json:"canDelete" default:"true"`
}
type Role struct {
Filter string `yaml:"filter" json:"filter" default:""`
CanDownloadPCAP bool `yaml:"canDownloadPCAP" json:"canDownloadPCAP" default:"false"`
CanUseScripting bool `yaml:"canUseScripting" json:"canUseScripting" default:"false"`
CanUpdateTargetedPods bool `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
CanStopTrafficCapturing bool `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
ShowAdminConsoleLink bool `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
Filter string `yaml:"filter" json:"filter" default:""`
CanDownloadPCAP bool `yaml:"canDownloadPCAP" json:"canDownloadPCAP" default:"false"`
CanUseScripting bool `yaml:"canUseScripting" json:"canUseScripting" default:"false"`
ScriptingPermissions ScriptingPermissions `yaml:"scriptingPermissions" json:"scriptingPermissions"`
CanUpdateTargetedPods bool `yaml:"canUpdateTargetedPods" json:"canUpdateTargetedPods" default:"false"`
CanStopTrafficCapturing bool `yaml:"canStopTrafficCapturing" json:"canStopTrafficCapturing" default:"false"`
ShowAdminConsoleLink bool `yaml:"showAdminConsoleLink" json:"showAdminConsoleLink" default:"false"`
}
type SamlConfig struct {
@@ -163,16 +189,9 @@ type SentryConfig struct {
type CapabilitiesConfig struct {
NetworkCapture []string `yaml:"networkCapture" json:"networkCapture" default:"[]"`
ServiceMeshCapture []string `yaml:"serviceMeshCapture" json:"serviceMeshCapture" default:"[]"`
KernelModule []string `yaml:"kernelModule" json:"kernelModule" default:"[]"`
EBPFCapture []string `yaml:"ebpfCapture" json:"ebpfCapture" default:"[]"`
}
type KernelModuleConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
Image string `yaml:"image" json:"image" default:"kubeshark/pf-ring-module:all"`
UnloadOnDestroy bool `yaml:"unloadOnDestroy" json:"unloadOnDestroy" default:"false"`
}
type MetricsConfig struct {
Port uint16 `yaml:"port" json:"port" default:"49100"`
}
@@ -202,6 +221,7 @@ type PcapDumpConfig struct {
PcapMaxTime string `yaml:"maxTime" json:"maxTime" default:"1h"`
PcapMaxSize string `yaml:"maxSize" json:"maxSize" default:"500MB"`
PcapSrcDir string `yaml:"pcapSrcDir" json:"pcapSrcDir" default:"pcapdump"`
PcapTime string `yaml:"time" json:"time" default:"time"`
}
type TapConfig struct {
@@ -220,6 +240,7 @@ type TapConfig struct {
StorageClass string `yaml:"storageClass" json:"storageClass" default:"standard"`
DryRun bool `yaml:"dryRun" json:"dryRun" default:"false"`
Resources ResourcesConfig `yaml:"resources" json:"resources"`
Probes ProbesConfig `yaml:"probes" json:"probes"`
ServiceMesh bool `yaml:"serviceMesh" json:"serviceMesh" default:"true"`
Tls bool `yaml:"tls" json:"tls" default:"true"`
DisableTlsLog bool `yaml:"disableTlsLog" json:"disableTlsLog" default:"true"`
@@ -232,11 +253,10 @@ type TapConfig struct {
Ingress IngressConfig `yaml:"ingress" json:"ingress"`
IPv6 bool `yaml:"ipv6" json:"ipv6" default:"true"`
Debug bool `yaml:"debug" json:"debug" default:"false"`
KernelModule KernelModuleConfig `yaml:"kernelModule" json:"kernelModule"`
Telemetry TelemetryConfig `yaml:"telemetry" json:"telemetry"`
ResourceGuard ResourceGuardConfig `yaml:"resourceGuard" json:"resourceGuard"`
Sentry SentryConfig `yaml:"sentry" json:"sentry"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter" default:"!dns and !tcp and !udp and !icmp"`
DefaultFilter string `yaml:"defaultFilter" json:"defaultFilter" default:"!dns and !error"`
ScriptingDisabled bool `yaml:"scriptingDisabled" json:"scriptingDisabled" default:"false"`
TargetedPodsUpdateDisabled bool `yaml:"targetedPodsUpdateDisabled" json:"targetedPodsUpdateDisabled" default:"false"`
PresetFiltersChangingEnabled bool `yaml:"presetFiltersChangingEnabled" json:"presetFiltersChangingEnabled" default:"true"`
@@ -245,6 +265,7 @@ type TapConfig struct {
Capabilities CapabilitiesConfig `yaml:"capabilities" json:"capabilities"`
GlobalFilter string `yaml:"globalFilter" json:"globalFilter" default:""`
EnabledDissectors []string `yaml:"enabledDissectors" json:"enabledDissectors"`
CustomMacros map[string]string `yaml:"customMacros" json:"customMacros" default:"{\"https\":\"tls and (http or http2)\"}"`
Metrics MetricsConfig `yaml:"metrics" json:"metrics"`
Pprof PprofConfig `yaml:"pprof" json:"pprof"`
Misc MiscConfig `yaml:"misc" json:"misc"`

View File

@@ -1,6 +1,6 @@
apiVersion: v2
name: kubeshark
version: "52.3.86"
version: "52.3.95"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.co
keywords:

View File

@@ -1,152 +0,0 @@
# PF_RING
<!-- TOC -->
- [PF\_RING](#pf_ring)
- [Overview](#overview)
- [Loading PF\_RING module on Kubernetes nodes](#loading-pf_ring-module-on-kubernetes-nodes)
- [Pre-built kernel module exists and external egress allowed](#pre-built-kernel-module-exists-and-external-egress-allowed)
- [Pre-built kernel module doesn't exist or external egress isn't allowed](#pre-built-kernel-module-doesnt-exist-or-external-egress-isnt-allowed)
- [Appendix A: PF\_RING kernel module compilation](#appendix-a-pf_ring-kernel-module-compilation)
- [Automated compilation](#automated-compilation)
- [Manual compilation](#manual-compilation)
<!-- /TOC -->
## Overview
PF_RING™ is an advanced Linux kernel module and user-space framework designed for high-speed packet processing. It offers a uniform API for packet processing applications, enabling efficient handling of large volumes of network data.
For comprehensive information on PF_RING™, please visit the [User's Guide](https://www.ntop.org/guides/pf_ring) and access detailed [API Documentation](http://www.ntop.org/guides/pf_ring_api/files.html).
## Loading PF_RING module on Kubernetes nodes
PF_RING kernel module loading is performed via the `worker` component pod.
The target container `tap.kernelModule.image` must contain a `pf_ring.ko` file under the path `/opt/lib/modules/<kernel version>/pf_ring.ko`.
Kubeshark provides ready-to-use containers with kernel modules for the most popular kernel versions running in different managed clouds.
Prior to deploying `kubeshark` with PF_RING enabled, it is essential to verify that a PF_RING kernel module is already built for your kernel version.
Kubeshark provides an additional CLI tool for this purpose - [pf-ring-compiler](https://github.com/kubeshark/pf-ring-compiler).
Compatibility verification can be done by running:
```bash
pfring-compiler compatibility
```
This command checks for the availability of kernel modules for the kernel versions running across all nodes in the Kubernetes cluster.
Example output for a compatible cluster:
```bash
Node Kernel Version Supported
ip-192-168-77-230.us-west-2.compute.internal 5.10.199-190.747.amzn2.x86_64 true
ip-192-168-34-216.us-west-2.compute.internal 5.10.199-190.747.amzn2.x86_64 true
Cluster is compatible
```
Another option to verify the availability of kernel modules is to inspect the available kernel module versions via:
```bash
curl https://api.kubeshark.co/kernel-modules/meta/versions.json
```
Based on Kubernetes cluster compatibility and external connection capabilities, the user has two options:
1. Use the Kubeshark-provided container `kubeshark/pf-ring-module`
2. Build a custom container with the required kernel module version.
### Pre-built kernel module exists and external egress allowed
In this case no additional configuration is required.
Kubeshark will load the PF_RING kernel module from the default `kubeshark/pf-ring-module:all` container.
### Pre-built kernel module doesn't exist or external egress isn't allowed
In this case building a custom Docker image is required.
1. Compile the PF_RING kernel module for the target version
Skip this step if you already have `pf_ring.ko` for the target kernel version.
Otherwise, follow [Appendix A](#appendix-a-pf_ring-kernel-module-compilation) for details.
2. Build the container
The same build process Kubeshark uses can be reused (follow [pf-ring-compiler](https://github.com/kubeshark/pf-ring-compiler/tree/main/modules) for details).
3. Configure Helm values
```yaml
tap:
kernelModule:
image: <container from stage 2>
```
## Appendix A: PF_RING kernel module compilation
PF_RING kernel module compilation can be completed automatically or manually.
### Automated compilation
In case your Kubernetes workers run a supported Linux distribution, the `kubeshark` CLI can be used to build the PF_RING module:
```bash
pfring-compiler compile --target <distro>
```
This command requires:
- kubectl to be installed and configured with a proper context
- an available egress connection to the Internet
This command:
1. Runs a Kubernetes job with a build container.
2. Waits for the job to complete.
3. Downloads the `pf-ring-<kernel version>.ko` file into the current folder.
4. Cleans up the created job.
Currently supported distros:
- Ubuntu
- RHEL 9
- Amazon Linux 2
### Manual compilation
The process description is based on the Ubuntu 22.04 distribution.
1. Get terminal access to the node with the target kernel version
This can be done either via SSH directly to the node or with a debug container running on the target node:
```bash
kubectl debug node/<target node> -it --attach=true --image=ubuntu:22.04
```
2. Install build tools and kernel headers
```bash
apt update
apt install -y gcc build-essential make git wget tar gzip
apt install -y linux-headers-$(uname -r)
```
3. Download PF_RING source code
```bash
wget https://github.com/ntop/PF_RING/archive/refs/tags/8.4.0.tar.gz
tar -xf 8.4.0.tar.gz
cd PF_RING-8.4.0/kernel
```
4. Compile the kernel module
```bash
make KERNEL_SRC=/usr/src/linux-headers-$(uname -r)
```
5. Copy `pf_ring.ko` to the local file system.
Use `scp` or `kubectl cp` depending on the type of access (SSH or debug pod).

View File

@@ -104,6 +104,20 @@ helm install kubeshark kubeshark/kubeshark \
Please refer to [metrics](./metrics.md) documentation for details.
## Override Tag, Tags, Images
In addition to using a private registry, you can further override the global image tag, specific image tags, and specific image names.
Example for overriding image names:
```yaml
docker:
overrideImage:
worker: docker.io/kubeshark/worker:v52.3.87
front: docker.io/kubeshark/front:v52.3.87
hub: docker.io/kubeshark/hub:v52.3.87
```
## Configuration
| Parameter | Description | Default |
@@ -114,9 +128,10 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `tap.docker.tagLocked` | If `false` - use latest minor tag | `true` |
| `tap.docker.imagePullPolicy` | Kubernetes image pull policy | `Always` |
| `tap.docker.imagePullSecrets` | Kubernetes secrets to pull the images | `[]` |
| `tap.docker.overrideTag` | DANGER: Used to override specific images, when testing custom features from the Kubeshark team | `""` |
| `tap.docker.overrideImage` | Can be used to directly override image names | `""` |
| `tap.docker.overrideTag` | Can be used to override image tags | `""` |
| `tap.proxy.hub.srvPort` | Hub server port. Change if already occupied. | `8898` |
| `tap.proxy.worker.srvPort` | Worker server port. Change if already occupied.| `30001` |
| `tap.proxy.worker.srvPort` | Worker server port. Change if already occupied.| `48999` |
| `tap.proxy.front.port` | Front service port. Change if already occupied.| `8899` |
| `tap.proxy.host` | Change to 0.0.0.0 to open up to the world. | `127.0.0.1` |
| `tap.regex` | Target (process traffic from) pods that match regex | `.*` |
@@ -145,6 +160,14 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `tap.resources.tracer.limits.memory` | Memory limit for tracer | `3Gi` |
| `tap.resources.tracer.requests.cpu` | CPU request for tracer | `50m` |
| `tap.resources.tracer.requests.memory` | Memory request for tracer | `50Mi` |
| `tap.probes.hub.initialDelaySeconds` | Initial delay before probing the hub | `15` |
| `tap.probes.hub.periodSeconds` | Period between probes for the hub | `10` |
| `tap.probes.hub.successThreshold` | Number of successful probes before considering the hub healthy | `1` |
| `tap.probes.hub.failureThreshold` | Number of failed probes before considering the hub unhealthy | `3` |
| `tap.probes.sniffer.initialDelaySeconds` | Initial delay before probing the sniffer | `15` |
| `tap.probes.sniffer.periodSeconds` | Period between probes for the sniffer | `10` |
| `tap.probes.sniffer.successThreshold` | Number of successful probes before considering the sniffer healthy | `1` |
| `tap.probes.sniffer.failureThreshold` | Number of failed probes before considering the sniffer unhealthy | `3` |
| `tap.serviceMesh` | Capture traffic from service meshes like Istio, Linkerd, Consul, etc. | `true` |
| `tap.tls` | Capture the encrypted/TLS traffic from cryptography libraries like OpenSSL | `true` |
| `tap.disableTlsLog` | Suppress logging for TLS/eBPF | `true` |
@@ -160,7 +183,7 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `tap.auth.saml.x509crt` | A self-signed X.509 `.cert` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.x509key` | A self-signed X.509 `.key` contents <br/>(effective, if `tap.auth.type = saml`) | `` |
| `tap.auth.saml.roleAttribute` | A SAML attribute name corresponding to user's authorization role <br/>(effective, if `tap.auth.type = saml`) | `role` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.auth.saml.roles` | A list of SAML authorization roles and their permissions <br/>(effective, if `tap.auth.type = saml`) | `{"admin":{"canDownloadPCAP":true,"canUpdateTargetedPods":true,"canUseScripting":true, "scriptingPermissions":{"canSave":true, "canActivate":true, "canDelete":true}, "canStopTrafficCapturing":true, "filter":"","showAdminConsoleLink":true}}` |
| `tap.ingress.enabled` | Enable `Ingress` | `false` |
| `tap.ingress.className` | Ingress class name | `""` |
| `tap.ingress.host` | Host of the `Ingress` | `ks.svc.cluster.local` |
@@ -168,17 +191,14 @@ Please refer to [metrics](./metrics.md) documentation for details.
| `tap.ingress.annotations` | `Ingress` annotations | `{}` |
| `tap.ipv6` | Enable IPv6 support for the front-end | `true` |
| `tap.debug` | Enable debug mode | `false` |
| `tap.kernelModule.enabled` | Use PF_RING kernel module([details](PF_RING.md)) | `false` |
| `tap.kernelModule.image` | Container image containing PF_RING kernel module with supported kernel version([details](PF_RING.md)) | "kubeshark/pf-ring-module:all" |
| `tap.kernelModule.unloadOnDestroy` | Create additional container which watches for pod termination and unloads PF_RING kernel module. | `false`|
| `tap.telemetry.enabled` | Enable anonymous usage statistics collection | `true` |
| `tap.resourceGuard.enabled` | Enable resource guard worker process, which watches RAM/disk usage and enables/disables traffic capture based on available resources | `false` |
| `tap.sentry.enabled` | Enable sending of error logs to Sentry | `false` |
| `tap.sentry.environment` | Sentry environment to label error logs with | `production` |
| `tap.defaultFilter` | Sets the default dashboard KFL filter (e.g. `http`). By default, this value is set to filter out noisy protocols such as DNS, UDP, ICMP and TCP. The user can easily change this in the Dashboard. You can also change this value to change this behavior. | `"!dns and !tcp and !udp and !icmp"` |
| `tap.defaultFilter` | Sets the default dashboard KFL filter (e.g. `http`). By default, this value is set to filter out noisy protocols such as DNS, UDP, ICMP and TCP. The user can easily change this, **temporarily**, in the Dashboard. For a permanent change, you should change this value in the `values.yaml` or `config.yaml` file. | `"!dns and !error"` |
| `tap.globalFilter` | Prepends to any KFL filter and can be used to limit what is visible in the dashboard. For example, `redact("request.headers.Authorization")` will redact the appropriate field. Another example `!dns` will not show any DNS traffic. | `""` |
| `tap.metrics.port` | Pod port used to expose Prometheus metrics | `49100` |
| `tap.enabledDissectors` | This is an array of strings representing the list of supported protocols. Remove or comment out redundant protocols (e.g., dns).| The default list excludes: `dns` and `tcp` |
| `tap.enabledDissectors` | This is an array of strings representing the list of supported protocols. Remove or comment out redundant protocols (e.g., dns).| The default list excludes: `udp` and `tcp` |
| `logs.file` | Logs dump path | `""` |
| `pcapdump.enabled` | Enable recording of all traffic captured according to other parameters. Whatever Kubeshark captures, considering pod targeting rules, will be stored in pcap files ready to be viewed by tools | `true` |
| `pcapdump.maxTime` | The time window into the past that will be stored. Older traffic will be discarded. | `2h` |

View File

@@ -31,8 +31,8 @@ rules:
- namespaces
verbs:
- get
resourceNames:
- kube-system
- list
- watch
- apiGroups:
- networking.k8s.io
resources:

View File

@@ -31,9 +31,8 @@ spec:
- ./hub
- -port
- "8080"
{{- if .Values.tap.debug }}
- -debug
{{- end }}
- -loglevel
- '{{ .Values.logLevel | default "warning" }}'
env:
- name: POD_NAME
valueFrom:
@@ -51,7 +50,9 @@ spec:
value: 'https://api.kubeshark.co'
- name: PROFILING_ENABLED
value: '{{ .Values.tap.pprof.enabled }}'
{{- if .Values.tap.docker.overrideTag.hub }}
{{- if .Values.tap.docker.overrideImage.hub }}
image: '{{ .Values.tap.docker.overrideImage.hub }}'
{{- else if .Values.tap.docker.overrideTag.hub }}
image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.overrideTag.hub }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/hub:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}'
@@ -64,26 +65,34 @@ spec:
{{- end }}
{{- end }}
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
tcpSocket:
port: 8080
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
periodSeconds: {{ .Values.tap.probes.hub.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.hub.failureThreshold }}
successThreshold: {{ .Values.tap.probes.hub.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.hub.initialDelaySeconds }}
tcpSocket:
port: 8080
resources:
limits:
{{ if ne (toString .Values.tap.resources.hub.limits.cpu) "0" }}
cpu: {{ .Values.tap.resources.hub.limits.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.hub.limits.memory) "0" }}
memory: {{ .Values.tap.resources.hub.limits.memory }}
{{ end }}
requests:
{{ if ne (toString .Values.tap.resources.hub.requests.cpu) "0" }}
cpu: {{ .Values.tap.resources.hub.requests.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.hub.requests.memory) "0" }}
memory: {{ .Values.tap.resources.hub.requests.memory }}
{{ end }}
volumeMounts:
- name: saml-x509-volume
mountPath: "/etc/saml/x509"

View File

@@ -66,7 +66,9 @@ spec:
value: '{{ (include "sentry.enabled" .) }}'
- name: REACT_APP_SENTRY_ENVIRONMENT
value: '{{ .Values.tap.sentry.environment }}'
{{- if .Values.tap.docker.overrideTag.front }}
{{- if .Values.tap.docker.overrideImage.front }}
image: '{{ .Values.tap.docker.overrideImage.front }}'
{{- else if .Values.tap.docker.overrideTag.front }}
image: '{{ .Values.tap.docker.registry }}/front:{{ .Values.tap.docker.overrideTag.front }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/front:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}'

View File

@@ -25,29 +25,39 @@ spec:
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
{{- if .Values.tap.kernelModule.enabled }}
initContainers:
- name: load-pf-ring
image: {{ .Values.tap.kernelModule.image }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
{{- if .Values.tap.docker.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.tap.docker.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
securityContext:
capabilities:
add:
{{- range .Values.tap.capabilities.kernelModule }}
{{ print "- " . }}
{{- end }}
drop:
- ALL
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
{{- end }}
- command:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: check-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
mountPropagation: Bidirectional
- command:
- ./tracer
- -init-bpf
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{- end }}
imagePullPolicy: {{ .Values.tap.docker.imagePullPolicy }}
name: init-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
containers:
- command:
- ./worker
@@ -59,6 +69,8 @@ spec:
- '{{ .Values.tap.metrics.port }}'
- -packet-capture
- '{{ .Values.tap.packetCapture }}'
- -loglevel
- '{{ .Values.logLevel | default "warning" }}'
{{- if .Values.tap.tls }}
- -unixsocket
{{- end }}
@@ -67,9 +79,6 @@ spec:
{{- end }}
- -procfs
- /hostproc
{{- if .Values.tap.kernelModule.enabled }}
- -kernel-module
{{- end }}
{{- if ne .Values.tap.packetCapture "ebpf" }}
- -disable-ebpf
{{- end }}
@@ -80,10 +89,9 @@ spec:
- '{{ .Values.tap.misc.resolutionStrategy }}'
- -staletimeout
- '{{ .Values.tap.misc.staleTimeoutSeconds }}'
{{- if .Values.tap.debug }}
- -debug
{{- end }}
{{- if .Values.tap.docker.overrideTag.worker }}
{{- if .Values.tap.docker.overrideImage.worker }}
image: '{{ .Values.tap.docker.overrideImage.worker }}'
{{- else if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (include "kubeshark.defaultVersion" .) }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
@@ -123,11 +131,19 @@ spec:
value: '{{ .Values.tap.sentry.environment }}'
resources:
limits:
{{ if ne (toString .Values.tap.resources.sniffer.limits.cpu) "0" }}
cpu: {{ .Values.tap.resources.sniffer.limits.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.sniffer.limits.memory) "0" }}
memory: {{ .Values.tap.resources.sniffer.limits.memory }}
{{ end }}
requests:
{{ if ne (toString .Values.tap.resources.sniffer.requests.cpu) "0" }}
cpu: {{ .Values.tap.resources.sniffer.requests.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.sniffer.requests.memory) "0" }}
memory: {{ .Values.tap.resources.sniffer.requests.memory }}
{{ end }}
securityContext:
capabilities:
add:
@@ -139,20 +155,25 @@ spec:
{{ print "- " . }}
{{- end }}
{{- end }}
{{- if .Values.tap.capabilities.ebpfCapture }}
{{- range .Values.tap.capabilities.ebpfCapture }}
{{ print "- " . }}
{{- end }}
{{- end }}
drop:
- ALL
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvPort }}
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
periodSeconds: {{ .Values.tap.probes.sniffer.periodSeconds }}
failureThreshold: {{ .Values.tap.probes.sniffer.failureThreshold }}
successThreshold: {{ .Values.tap.probes.sniffer.successThreshold }}
initialDelaySeconds: {{ .Values.tap.probes.sniffer.initialDelaySeconds }}
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvPort }}
volumeMounts:
@@ -164,20 +185,6 @@ spec:
readOnly: true
- mountPath: /app/data
name: data
{{- if and (eq .Values.tap.kernelModule.enabled true) (eq .Values.tap.kernelModule.unloadOnDestroy true) }}
- name: unload-pf-ring
image: {{ .Values.tap.kernelModule.image }}
command: ["/bin/sh"]
args: ["-c", "trap 'rmmod pf_ring && sleep 3' SIGTERM; while true; do sleep 1; done"]
securityContext:
capabilities:
add:
{{- range .Values.tap.capabilities.kernelModule }}
{{ print "- " . }}
{{- end }}
drop:
- ALL
{{- end }}
{{- if .Values.tap.tls }}
- command:
- ./tracer
@@ -186,9 +193,6 @@ spec:
{{- if ne .Values.tap.packetCapture "ebpf" }}
- -disable-ebpf
{{- end }}
{{- if .Values.tap.debug }}
- -debug
{{- end }}
{{- if .Values.tap.disableTlsLog }}
- -disable-tls-log
{{- end }}
@@ -196,6 +200,8 @@ spec:
- -port
- '{{ add .Values.tap.proxy.worker.srvPort 1 }}'
{{- end }}
# - -loglevel
# - '{{ .Values.logLevel | default "warning" }}'
{{- if .Values.tap.docker.overrideTag.worker }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.overrideTag.worker }}{{ include "kubeshark.dockerTagDebugVersion" . }}'
{{ else }}
@@ -226,11 +232,19 @@ spec:
value: '{{ .Values.tap.sentry.environment }}'
resources:
limits:
{{ if ne (toString .Values.tap.resources.tracer.limits.cpu) "0" }}
cpu: {{ .Values.tap.resources.tracer.limits.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.tracer.limits.memory) "0" }}
memory: {{ .Values.tap.resources.tracer.limits.memory }}
{{ end }}
requests:
{{ if ne (toString .Values.tap.resources.tracer.requests.cpu) "0" }}
cpu: {{ .Values.tap.resources.tracer.requests.cpu }}
{{ end }}
{{ if ne (toString .Values.tap.resources.tracer.requests.memory) "0" }}
memory: {{ .Values.tap.resources.tracer.requests.memory }}
{{ end }}
securityContext:
capabilities:
add:
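
The two new init containers, check-bpf and init-bpf, mount the BPF filesystem and run ./tracer -init-bpf to initialize the pinned eBPF resources before the sniffer starts, replacing the removed pf_ring kernel-module flow. Whether eBPF capture is actually used is still governed by tap.packetCapture: per the conditionals above, any value other than "ebpf" renders -disable-ebpf into the sniffer and tracer commands. A one-line sketch:

    tap:
      packetCapture: ebpf  # any other value adds -disable-ebpf to the sniffer and tracer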


@@ -50,6 +50,7 @@ data:
{{- end }}'
DUPLICATE_TIMEFRAME: '{{ .Values.tap.misc.duplicateTimeframe }}'
ENABLED_DISSECTORS: '{{ gt (len .Values.tap.enabledDissectors) 0 | ternary (join "," .Values.tap.enabledDissectors) "" }}'
CUSTOM_MACROS: '{{ toJson .Values.tap.customMacros }}'
DISSECTORS_UPDATING_ENABLED: '{{ .Values.dissectorsUpdatingEnabled | ternary "true" "false" }}'
DETECT_DUPLICATES: '{{ .Values.tap.misc.detectDuplicates | ternary "true" "false" }}'
PCAP_DUMP_ENABLE: '{{ .Values.pcapdump.enabled }}'
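
CUSTOM_MACROS is rendered from tap.customMacros via toJson, so each map entry becomes a named filter macro. A sketch extending the shipped https macro with a user-defined one (the second entry is hypothetical):

    tap:
      customMacros:
        https: tls and (http or http2)  # chart default, added later in this diff
        amqps: tls and amqp             # hypothetical user-defined macro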


@@ -0,0 +1,23 @@
---
kind: Service
apiVersion: v1
metadata:
labels:
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-hub-metrics
namespace: {{ .Release.Namespace }}
spec:
selector:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
ports:
- name: metrics
protocol: TCP
port: 9100
targetPort: 9100
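
The new service carries the prometheus.io/scrape and prometheus.io/port annotations for Prometheus discovery on port 9100, and any entries under tap.annotations are merged in alongside them. A sketch (the annotation key is hypothetical):

    tap:
      annotations:
        example.com/scrape-tier: infra  # hypothetical extra annotation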


@@ -20,6 +20,9 @@ spec:
- ports:
- protocol: TCP
port: 8080
- ports:
- protocol: TCP
port: 9100
egress:
- {}
---


@@ -2,26 +2,36 @@ Thank you for installing {{ title .Chart.Name }}.
Registry: {{ .Values.tap.docker.registry }}
Tag: {{ not (eq .Values.tap.docker.tag "") | ternary .Values.tap.docker.tag (printf "v%s" .Chart.Version) }}
{{- if .Values.tap.docker.overrideTag.worker }}
Overridden worker tag: {{ .Values.tap.docker.overrideTag.worker }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.hub }}
Overridden hub tag: {{ .Values.tap.docker.overrideTag.hub }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideTag.front }}
Overridden front tag: {{ .Values.tap.docker.overrideTag.front }}
{{ end }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.worker }}
Overridden worker image: {{ .Values.tap.docker.overrideImage.worker }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.hub }}
Overridden hub image: {{ .Values.tap.docker.overrideImage.hub }}
{{- end }}
{{- if .Values.tap.docker.overrideImage.front }}
Overridden front image: {{ .Values.tap.docker.overrideImage.front }}
{{- end }}
Your deployment has been successful. The release is named `{{ .Release.Name }}` and it has been deployed in the `{{ .Release.Namespace }}` namespace.
{{- if .Values.tap.telemetry.enabled }}
Notice: Telemetry is enabled. Kubeshark will collect anonymous usage statistics.
{{ end }}
Notices:
{{- if .Values.supportChatEnabled}}
- Support chat using Intercom is enabled. It can be disabled using `--set supportChatEnabled=false`
{{- end }}
{{- if eq .Values.license ""}}
- No license key was detected. You can get your license key from https://console.kubeshark.co/.
{{- end }}
{{- if .Values.tap.ingress.enabled }}
{{ if .Values.tap.ingress.enabled }}
You can now access the application through the following URL:
http{{ if .Values.tap.ingress.tls }}s{{ end }}://{{ .Values.tap.ingress.host }}
@@ -36,4 +46,4 @@ To access the application, follow these steps:
2. Once port forwarding is done, you can access the application by visiting the following URL in your web browser:
http://0.0.0.0:8899
{{ end }}
{{- end }}


@@ -6,13 +6,17 @@ tap:
tagLocked: true
imagePullPolicy: Always
imagePullSecrets: []
overrideImage:
worker: ""
hub: ""
front: ""
overrideTag:
worker: ""
hub: ""
front: ""
proxy:
worker:
srvPort: 30001
srvPort: 48999
hub:
srvPort: 8898
front:
@@ -36,25 +40,36 @@ tap:
resources:
hub:
limits:
cpu: ""
cpu: "0"
memory: 5Gi
requests:
cpu: ""
cpu: 50m
memory: 50Mi
sniffer:
limits:
cpu: ""
cpu: "0"
memory: 5Gi
requests:
cpu: ""
cpu: 50m
memory: 50Mi
tracer:
limits:
cpu: ""
cpu: "0"
memory: 5Gi
requests:
cpu: ""
cpu: 50m
memory: 50Mi
probes:
hub:
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
sniffer:
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
serviceMesh: true
tls: true
disableTlsLog: true
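
Note the semantics of the new resource defaults: the templates wrap every resource field in ne (toString ...) "0", so "0" means "omit this field from the rendered manifest" rather than a literal zero. For instance, to leave the sniffer's CPU limit unset while raising its requests (a sketch; the numbers are illustrative):

    tap:
      resources:
        sniffer:
          limits:
            cpu: "0"       # omitted from the rendered container spec
            memory: 5Gi
          requests:
            cpu: 200m
            memory: 128Mi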
@@ -81,6 +96,10 @@ tap:
filter: ""
canDownloadPCAP: true
canUseScripting: true
scriptingPermissions:
canSave: true
canActivate: true
canDelete: true
canUpdateTargetedPods: true
canStopTrafficCapturing: true
showAdminConsoleLink: true
@@ -92,10 +111,6 @@ tap:
annotations: {}
ipv6: true
debug: false
kernelModule:
enabled: false
image: kubeshark/pf-ring-module:all
unloadOnDestroy: false
telemetry:
enabled: true
resourceGuard:
@@ -103,7 +118,7 @@ tap:
sentry:
enabled: false
environment: production
defaultFilter: "!dns and !tcp and !udp and !icmp"
defaultFilter: "!dns and !error"
scriptingDisabled: false
targetedPodsUpdateDisabled: false
presetFiltersChangingEnabled: true
@@ -117,8 +132,6 @@ tap:
- SYS_ADMIN
- SYS_PTRACE
- DAC_OVERRIDE
kernelModule:
- SYS_MODULE
ebpfCapture:
- SYS_ADMIN
- SYS_PTRACE
@@ -135,7 +148,10 @@ tap:
- sctp
- syscall
- ws
- tls
- ldap
- radius
customMacros:
https: tls and (http or http2)
metrics:
port: 49100
pprof:
@@ -162,6 +178,7 @@ pcapdump:
maxTime: 1h
maxSize: 500MB
pcapSrcDir: pcapdump
time: time
kube:
configPath: ""
context: ""
@@ -175,7 +192,9 @@ dissectorsUpdatingEnabled: true
scripting:
env: {}
source: ""
sources: []
watchScripts: true
active: []
console: true
timezone: ""
logLevel: warning
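
Two of these values feed directly into container flags: logLevel is rendered as the -loglevel argument of the hub and sniffer, and tap.enabledDissectors is joined into the ENABLED_DISSECTORS ConfigMap entry. A combined override sketch (a hypothetical file such as kubeshark-values.yaml, applied with helm upgrade -f):

    logLevel: debug          # becomes -loglevel for hub and sniffer
    tap:
      enabledDissectors:     # trimmed list; ldap and radius are new, tls is dropped from the defaults
        - dns
        - http
        - ldap
        - radius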


@@ -247,6 +247,10 @@ func (provider *Provider) GetNamespaces() (namespaces []string) {
return
}
func (provider *Provider) GetClientSet() *kubernetes.Clientset {
return provider.clientSet
}
func getClientSet(config *rest.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {


@@ -1,13 +1,13 @@
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub-network-policy
@@ -23,18 +23,21 @@ spec:
- ports:
- protocol: TCP
port: 8080
- ports:
- protocol: TCP
port: 9100
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front-network-policy
@@ -53,15 +56,15 @@ spec:
egress:
- {}
---
# Source: kubeshark/templates/16-network-policies.yaml
# Source: kubeshark/templates/17-network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-network-policy
@@ -76,7 +79,7 @@ spec:
ingress:
- ports:
- protocol: TCP
port: 30001
port: 48999
- protocol: TCP
port: 49100
egress:
@@ -87,10 +90,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-service-account
@@ -104,10 +107,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''
@@ -121,10 +124,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_CRT: |
@@ -137,10 +140,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_KEY: |
@@ -152,10 +155,10 @@ metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
data:
default.conf: |
@@ -216,10 +219,10 @@ metadata:
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'
@@ -236,7 +239,7 @@ data:
AUTH_TYPE: 'oidc'
AUTH_SAML_IDP_METADATA_URL: ''
AUTH_SAML_ROLE_ATTRIBUTE: 'role'
AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canStopTrafficCapturing":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","showAdminConsoleLink":true}}'
AUTH_SAML_ROLES: '{"admin":{"canDownloadPCAP":true,"canStopTrafficCapturing":true,"canUpdateTargetedPods":true,"canUseScripting":true,"filter":"","scriptingPermissions":{"canActivate":true,"canDelete":true,"canSave":true},"showAdminConsoleLink":true}}'
TELEMETRY_DISABLED: 'false'
SCRIPTING_DISABLED: ''
TARGETED_PODS_UPDATE_DISABLED: ''
@@ -244,7 +247,7 @@ data:
RECORDING_DISABLED: ''
STOP_TRAFFIC_CAPTURING_DISABLED: 'false'
GLOBAL_FILTER: ""
DEFAULT_FILTER: "!dns and !tcp and !udp and !icmp"
DEFAULT_FILTER: "!dns and !error"
TRAFFIC_SAMPLE_RATE: '100'
JSON_TTL: '5m'
PCAP_TTL: '10s'
@@ -252,7 +255,8 @@ data:
TIMEZONE: ' '
CLOUD_LICENSE_ENABLED: 'true'
DUPLICATE_TIMEFRAME: '200ms'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,sctp,syscall,ws,tls'
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,sctp,syscall,ws,ldap,radius'
CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
DISSECTORS_UPDATING_ENABLED: 'true'
DETECT_DUPLICATES: 'false'
PCAP_DUMP_ENABLE: 'true'
@@ -266,10 +270,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-default
@@ -295,8 +299,8 @@ rules:
- namespaces
verbs:
- get
resourceNames:
- kube-system
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
@@ -314,10 +318,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-binding-default
@@ -336,10 +340,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role
@@ -366,10 +370,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role-binding
@@ -389,10 +393,10 @@ kind: Service
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -411,10 +415,10 @@ apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -433,10 +437,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
@@ -446,10 +450,10 @@ metadata:
spec:
selector:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
@@ -457,6 +461,35 @@ spec:
port: 49100
targetPort: 49100
---
# Source: kubeshark/templates/16-hub-service-metrics.yaml
kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9100'
name: kubeshark-hub-metrics
namespace: default
spec:
selector:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics
protocol: TCP
port: 9100
targetPort: 9100
---
# Source: kubeshark/templates/09-worker-daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
@@ -464,10 +497,10 @@ metadata:
labels:
app.kubeshark.co/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-daemon-set
@@ -482,25 +515,52 @@ spec:
metadata:
labels:
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
initContainers:
- command:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: check-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
mountPropagation: Bidirectional
- command:
- ./tracer
- -init-bpf
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: init-bpf
securityContext:
privileged: true
volumeMounts:
- mountPath: /sys
name: sys
containers:
- command:
- ./worker
- -i
- any
- -port
- '30001'
- '48999'
- -metrics-port
- '49100'
- -packet-capture
- 'best'
- -loglevel
- 'warning'
- -unixsocket
- -servicemesh
- -procfs
@@ -510,7 +570,7 @@ spec:
- 'auto'
- -staletimeout
- '30'
image: 'docker.io/kubeshark/worker:v52.3.86'
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: sniffer
ports:
@@ -540,11 +600,17 @@ spec:
value: 'production'
resources:
limits:
cpu:
memory: 5Gi
requests:
cpu:
cpu: 50m
memory: 50Mi
securityContext:
capabilities:
add:
@@ -553,22 +619,26 @@ spec:
- SYS_ADMIN
- SYS_PTRACE
- DAC_OVERRIDE
- SYS_ADMIN
- SYS_PTRACE
- SYS_RESOURCE
- IPC_LOCK
drop:
- ALL
readinessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
initialDelaySeconds: 15
tcpSocket:
port: 30001
port: 48999
livenessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
initialDelaySeconds: 15
tcpSocket:
port: 30001
port: 48999
volumeMounts:
- mountPath: /hostproc
name: proc
@@ -584,7 +654,9 @@ spec:
- /hostproc
- -disable-ebpf
- -disable-tls-log
image: 'docker.io/kubeshark/worker:v52.3.86'
# - -loglevel
# - 'warning'
image: 'docker.io/kubeshark/worker:v52.3.95'
imagePullPolicy: Always
name: tracer
env:
@@ -604,11 +676,17 @@ spec:
value: 'production'
resources:
limits:
cpu:
memory: 5Gi
requests:
cpu:
cpu: 50m
memory: 50Mi
securityContext:
capabilities:
add:
@@ -680,10 +758,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
@@ -699,10 +777,10 @@ spec:
metadata:
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
spec:
dnsPolicy: ClusterFirstWithHostNet
@@ -713,6 +791,8 @@ spec:
- ./hub
- -port
- "8080"
- -loglevel
- 'warning'
env:
- name: POD_NAME
valueFrom:
@@ -730,29 +810,35 @@ spec:
value: 'https://api.kubeshark.co'
- name: PROFILING_ENABLED
value: 'false'
image: 'docker.io/kubeshark/hub:v52.3.86'
image: 'docker.io/kubeshark/hub:v52.3.95'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
initialDelaySeconds: 15
tcpSocket:
port: 8080
livenessProbe:
periodSeconds: 1
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
initialDelaySeconds: 15
tcpSocket:
port: 8080
resources:
limits:
cpu:
memory: 5Gi
requests:
cpu:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: saml-x509-volume
mountPath: "/etc/saml/x509"
@@ -778,10 +864,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
@@ -797,10 +883,10 @@ spec:
metadata:
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-52.3.86
helm.sh/chart: kubeshark-52.3.95
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "52.3.86"
app.kubernetes.io/version: "52.3.95"
app.kubernetes.io/managed-by: Helm
spec:
containers:
@@ -835,7 +921,7 @@ spec:
value: 'false'
- name: REACT_APP_SENTRY_ENVIRONMENT
value: 'production'
image: 'docker.io/kubeshark/front:v52.3.86'
image: 'docker.io/kubeshark/front:v52.3.95'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe: