Compare commits

...

65 Commits
41.6 ... 50.3

Author SHA1 Message Date
M. Mert Yildiran
78c89cc5b4 🔖 Bump the Helm chart version to 50.3 2023-09-17 00:09:37 +03:00
M. Mert Yildiran
b5c9a31380 🔧 Run make generate-manifests 2023-09-16 23:52:53 +03:00
Luiz Oliveira
3dfff2b7a5 ♻️ Turn the Ingress path rewrite for Hub into an Nginx location directive (#1426)
* fixes websocket for nginx-ingress

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* update message when helm completes

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* force react port to be a path

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* include Authorization header to the proxy

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* remove hub from proxy

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* remove REACT_APP_HUB_PORT info

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* include path back again to REACT_APP_HUB_PORT

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
2023-09-15 21:43:34 +03:00
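This change replaces the Ingress-level `/api(/|$)(.*)` rewrite with a `location /api` block in the Front container's nginx config (both visible in the diffs below). A quick way to inspect the result locally is to render only that template (a sketch; the release name `kubeshark` and namespace `default` mirror the updated `generate-manifests` Makefile rule):

```shell
# Render the nginx ConfigMap on its own and check the generated `location /api` block
helm template kubeshark -n default ./helm-chart \
  --show-only templates/12-nginx-config-map.yaml
```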
M. Mert Yildiran
583a5b97ee 🔧 Re-order the template filenames and re-generate values.yaml and complete.yaml 2023-09-04 02:25:33 +03:00
Luiz Oliveira
64aae06fe5 🛂 Add a new Role and RoleBinding resources to have write access for our own Secret resource (#1416)
* include role and rolebinding to write secrets

With this, the kubeshark service-account has the right to
update the value of secrets in the same namespace
where kubeshark was deployed. This is necessary to keep
the value of the license updated.

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* Update helm-chart/templates/02-cluster-role.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/03-cluster-role-binding.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/03-cluster-role-binding.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/03-cluster-role-binding.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/02-cluster-role.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>
2023-09-04 02:20:26 +03:00
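Once the Role and RoleBinding are in place, the effective permission can be checked with `kubectl auth can-i` (a sketch; the namespace `default` and the ServiceAccount name `kubeshark-service-account` are taken from the rendered manifests further down):

```shell
# Should answer "yes" for the Secret the Role explicitly names
kubectl auth can-i update secrets/kubeshark-secret -n default \
  --as=system:serviceaccount:default:kubeshark-service-account
```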
Luiz Oliveira
1ccaa03fb2 🏗️ Give the user ability to set ingress as needed (#1417)
* Give the user the ability to set the Ingress as needed

- Removed the unnecessary IngressClass.
- If no IngressClassName is passed, use the cluster's default class.
- Renamed `ingressclass` to `IngressClassName`, which is the standard name
    used for it.
- Included custom annotations for the Ingress. This way the user can set any
    custom annotation for the Ingress only.

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* Update helm-chart/templates/11-ingress.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update config/configStructs/tapConfig.go

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/11-ingress.yaml

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* update default ingressClassName value

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>
2023-09-04 02:18:43 +03:00
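As the values.yaml diff further down shows, the chart now exposes `tap.ingress.classname` (empty by default, so the cluster's default class is used) and `tap.ingress.annotations`. A minimal install sketch; the class name `nginx` here is an illustrative assumption:

```shell
helm install kubeshark kubeshark/kubeshark \
  --set tap.ingress.enabled=true \
  --set tap.ingress.host=ks.svc.cluster.local \
  --set tap.ingress.classname=nginx
```

Custom Ingress annotations are usually easier to supply through a values file than through `--set`.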
M. Mert Yildiran
3222212367 🔧 Update complete.yaml 2023-09-01 04:09:57 +03:00
M. Mert Yildiran
c5681871e4 🔖 Bump the Helm chart version to 50.2 2023-09-01 03:22:56 +03:00
M. Mert Yildiran
1ac3ba0a6d 🔧 Add a notice about telemetry into NOTES.txt of the Helm chart 2023-08-31 18:55:58 +03:00
M. Mert Yildiran
d3520765eb 🔥 Delete .dockerignore file 2023-08-31 06:16:52 +03:00
M. Mert Yildiran
fa1e7bcf01 🔧 Add TelemetryConfig struct and --telemetry-enabled flag to tap command 2023-08-31 03:50:14 +03:00
M. Mert Yildiran
bf182b6330 🐛 Template the -tls flag in worker DaemonSet 2023-08-29 03:51:08 +03:00
M. Mert Yildiran
f59f84af02 Add export command to download PCAP export 2023-08-28 22:00:36 +03:00
M. Mert Yildiran
cae5a92a13 🔖 Bump the Helm chart version to 50.1 2023-08-25 22:22:36 +03:00
M. Mert Yildiran
7afb1d8b9b Set the probing port of Hub back to 80 2023-08-24 23:51:47 +03:00
M. Mert Yildiran
f628192216 🚑 Add initialDelaySeconds to readiness and liveness probes of worker DaemonSet 2023-08-24 22:05:26 +03:00
M. Mert Yildiran
b1feb4e33f 🔧 Add port-forward-worker Makefile rule 2023-08-23 23:55:33 +03:00
M. Mert Yildiran
94dff24aed 🔥 Delete Chart.lock file 2023-08-23 02:02:29 +03:00
M. Mert Yildiran
d00d2eafa7 🔖 Bump the Helm chart version to 50.0 2023-08-22 23:25:48 +03:00
M. Mert Yildiran
63eb39b451 🚑 Fix the pod regex in the watch function for the recent changes related to pod names 2023-08-22 23:24:40 +03:00
M. Mert Yildiran
149a8b7efe 🔧 Remove the KMM related Makefile rules 2023-08-22 19:02:39 +03:00
M. Mert Yildiran
247fbc1291 🔥 Delete the module loader Dockerfile 2023-08-22 19:02:22 +03:00
M. Mert Yildiran
0e74238e56 🚀 Rename some of the recently added Kubernetes resources 2023-08-22 19:00:22 +03:00
M. Mert Yildiran
05ecef557f 🔧 Run make generate-manifests 2023-08-22 18:54:25 +03:00
Luiz Oliveira
63325ec890 🚀 Add readiness and liveness probes to worker DaemonSet (#1414)
Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
2023-08-19 20:32:49 +03:00
M. Mert Yildiran
579cb47ecf 🔥 Remove networking.k8s.io from apiGroups and ingresses from resources in ClusterRole 2023-08-17 17:29:54 +03:00
M. Mert Yildiran
7ed4088b4b Load the environment variables from kubeshark-hub-secret in worker DaemonSet 2023-08-17 00:56:16 +03:00
Luiz Oliveira
f95db49317 🚀 Change Hub's and Front's resource type from Pod to Deployment (#1412)
* change services to ClusterIP and update selector labels

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* replace kind of hub and front to Deployments

Pod -> Deployments
hub config -> Uses a config-map
license -> Uses a secret

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* uses map of labels to select pods and services

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* remove ListAllNamespaces method

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* include livenessProbe and readinessProbe for deployments

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
2023-08-16 02:35:31 +03:00
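After this change Hub and Front run as Deployments selected by the `app.kubeshark.co/app` label and are exposed through ClusterIP Services, so they can be located like this (a sketch; the `default` namespace is an assumption):

```shell
kubectl get deployments,services -n default -l app.kubeshark.co/app=hub
kubectl get deployments,services -n default -l app.kubeshark.co/app=front
```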
M. Mert Yildiran
749b19512e Bring back the app labels 2023-08-15 18:33:00 +03:00
M. Mert Yildiran
746eff1e23 🔥 Remove the dead code in kubernetes package 2023-08-15 17:46:23 +03:00
M. Mert Yildiran
b7a8d9a41a Fix the label order 2023-08-15 17:44:39 +03:00
Luiz Oliveira
995fb96f24 🎨 Rename worker labels to the same pattern just like the other resources (#1410)
* rename worker labels to the same pattern as the other Kubeshark components

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* update matchLabels from daemonsets

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
2023-08-15 16:56:43 +03:00
M. Mert Yildiran
5d4557d1dd Add SYS_MODULE Linux capability to the worker DaemonSet 2023-08-14 17:49:14 +03:00
M. Mert Yildiran
78c1c02fe6 🔥 Delete the recently added KMM related resources 2023-08-14 17:43:44 +03:00
M. Mert Yildiran
742a56272b 👕 Fix the linter error 2023-08-12 03:36:01 +03:00
M. Mert Yildiran
b7b3603e57 Add cert-manager Helm dependency 2023-08-12 03:29:12 +03:00
M. Mert Yildiran
54c5da2fcb Add a default NodeSelectorTerm that's matching Linux OS 2023-08-12 03:28:33 +03:00
M. Mert Yildiran
a5efb6b625 Fix the indentation 2023-08-12 03:09:37 +03:00
M. Mert Yildiran
7dcb2d23a0 Use the nodeselectorterms from values.yaml in the kmm-operator-controller-manager deployment 2023-08-12 02:44:35 +03:00
M. Mert Yildiran
f4ff4d4dd6 Add KMMConfig struct to TapConfig 2023-08-12 02:41:29 +03:00
M. Mert Yildiran
dd5761f112 🎨 Add a new line character at the end of values.yaml 2023-08-12 02:38:25 +03:00
M. Mert Yildiran
854836056d 🔨 Rename kernel-module-management.yaml to 15-kernel-module-management.yaml 2023-08-12 02:37:29 +03:00
Luiz Oliveira
090368295c Include kernel module management operator (#1409)
Files generated from https://github.com/kubernetes-sigs/kernel-module-management/tree/main/config/default
using kubectl kustomize; Kubeshark labels and conditional checks were added.

Note: KMM requires cert-manager.

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
2023-08-12 02:36:30 +03:00
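A sketch of how such manifests are typically regenerated (the exact remote URL form and the output path are assumptions, not part of this commit):

```shell
kubectl kustomize "https://github.com/kubernetes-sigs/kernel-module-management/config/default?ref=main" \
  > helm-chart/templates/15-kernel-module-management.yaml
```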
M. Mert Yildiran
67038e324b 🔧 Add logs-kmm-loader Makefile rule 2023-08-11 21:49:46 +03:00
M. Mert Yildiran
a5fb7e0474 Add .Capabilities.APIVersions.Has "kmm.sigs.x-k8s.io/v1beta1" check to module loader related Helm templates 2023-08-11 21:49:01 +03:00
M. Mert Yildiran
1a0625d37c Change the key from Dockerfile to dockerfile in module loader ConfigMap 2023-08-11 17:15:12 +03:00
M. Mert Yildiran
7ec1f595a1 Change the selector in module loader 2023-08-11 00:20:47 +03:00
M. Mert Yildiran
3998485944 🔨 Rename 12-nginx-config.yaml to 12-nginx-config-map.yaml 2023-08-11 00:15:41 +03:00
M. Mert Yildiran
e5de984acd 🔧 Add ssh-node Makefile rule 2023-08-11 00:14:04 +03:00
M. Mert Yildiran
18d6345e80 🔧 Add logs-kmm Makefile rule 2023-08-11 00:06:17 +03:00
M. Mert Yildiran
661e17ace9 Add 14-module-loader-config-map.yaml and a Makefile rule that generates it 2023-08-11 00:03:37 +03:00
M. Mert Yildiran
cc78b291af 🐳 Bring in module-loader Dockerfile 2023-08-10 23:50:53 +03:00
Luiz Oliveira
7c8adee7a8 🔨 Add _helpers.tpl and NOTES.txt to Helm chart and refactor labels (#1406)
* include kubernetes default labels

Using _helpers.tpl to define those labels

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* include NOTES.txt with tips after the install

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* create a standard service account name

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* Update helm-chart/templates/NOTES.txt

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* Update helm-chart/templates/NOTES.txt

Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>

* fixes ingress and nginx labels

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* fixes new label mapping from values

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

* update makefile to use the correct default namespace and release name when generating manifests

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>

---------

Signed-off-by: Luiz Oliveira <ziuloliveira@gmail.com>
Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>
2023-08-10 22:39:17 +03:00
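The Makefile change mentioned in the last bullet shows up in the diff below; with it, the default manifests are regenerated with a fixed release name and namespace, which is what the common labels in `_helpers.tpl` render against:

```shell
# Equivalent to the updated `make generate-manifests` rule
helm template kubeshark -n default ./helm-chart > ./manifests/complete.yaml
```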
M. Mert Yildiran
461ad1921e Add 13-module-loader.yaml Helm template which should load pf_ring.ko kernel module using KMM 2023-08-10 15:51:37 +03:00
M. Mert Yildiran
5ca90d70ff Have consistent case style in values.yaml 2023-08-09 20:16:49 +03:00
M. Mert Yildiran
65bda4e844 Add the IPv6 field to TapConfig struct 2023-08-09 01:24:08 +03:00
M. Mert Yildiran
c533bcd38c Add AUTH_ENABLED and AUTH_APPROVED_EMAILS environment variables to Hub's template 2023-08-09 01:22:10 +03:00
M. Mert Yildiran
1d17f83931 ⬆️ Bump the Helm chart version 2023-08-07 20:03:11 +03:00
M. Mert Yildiran
b9c3704bae Remove apiVersion field 2023-08-07 20:01:59 +03:00
M. Mert Yildiran
08602c75e0 Run make generate-manifests 2023-08-07 20:00:06 +03:00
M. Mert Yildiran
46799f6665 Revert " Let the user system:anonymous access the services/proxy resource"
This reverts commit acaa29f8eb.
2023-08-07 19:59:16 +03:00
Adrian Wyssmann
250a878407 Allow to disable IPv6 for nginx ingress (#1392)
Co-authored-by: M. Mert Yildiran <me@mertyildiran.com>
2023-08-05 18:43:13 +03:00
M. Mert Yildiran
b32f5f9e12 🔥 Remove the unused constants in kubernetes package 2023-08-04 20:49:21 +03:00
M. Mert Yildiran
5325f94f2b 🐛 Fix the flag redefined: release-namespace error 2023-08-01 23:00:36 +03:00
M. Mert Yildiran
fc3bf69348 Add -s flag to set release namespace into console, proxy and scripts 2023-07-31 23:09:04 +03:00
41 changed files with 983 additions and 537 deletions

View File

@@ -1,16 +0,0 @@
# Files
.dockerignore
.editorconfig
.gitignore
Dockerfile
Makefile
LICENSE
**/*.md
**/*_test.go
*.out
# Folders
.git/
.github/
build/
**/node_modules/

View File

@@ -73,7 +73,7 @@ generate-helm-values: ## Generate the Helm values from config.yaml
./bin/kubeshark__ config > ./helm-chart/values.yaml
generate-manifests: ## Generate the manifests from the Helm chart using default configuration
helm template ./helm-chart > ./manifests/complete.yaml
helm template kubeshark -n default ./helm-chart > ./manifests/complete.yaml
logs-worker:
export LOGS_POD_PREFIX=kubeshark-worker-
@@ -108,6 +108,9 @@ logs-front-follow:
logs:
kubectl logs $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_FOLLOW)
ssh-node:
kubectl ssh node $$(kubectl get nodes | awk 'END {print $$1}')
exec-worker:
export EXEC_POD_PREFIX=kubeshark-worker-
${MAKE} exec
@@ -146,3 +149,6 @@ helm-uninstall:
proxy:
kubeshark proxy
port-forward-worker:
kubectl port-forward $$(kubectl get pods | awk '$$1 ~ /^$(LOGS_POD_PREFIX)/' | awk 'END {print $$1}') $(LOGS_FOLLOW) 8897:8897

View File

@@ -36,12 +36,13 @@ func init() {
log.Debug().Err(err).Send()
}
consoleCmd.Flags().Uint16(configStructs.ProxyHubPortLabel, defaultTapConfig.Proxy.Hub.Port, "Provide a custom port for the Hub")
consoleCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Hub")
consoleCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
consoleCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
consoleCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
func runConsole() {
hubUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
@@ -51,10 +52,10 @@ func runConsole() {
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)
log.Info().Str("host", config.Config.Tap.Proxy.Host).Uint16("port", config.Config.Tap.Proxy.Hub.Port).Msg("Connecting to:")
log.Info().Str("host", config.Config.Tap.Proxy.Host).Str("url", hubUrl).Msg("Connecting to:")
u := url.URL{
Scheme: "ws",
Host: fmt.Sprintf("%s:%d", config.Config.Tap.Proxy.Host, config.Config.Tap.Proxy.Hub.Port),
Host: fmt.Sprintf("%s:%d/api", config.Config.Tap.Proxy.Host, config.Config.Tap.Proxy.Front.Port),
Path: "/scripts/logs",
}
headers := http.Header{}

cmd/export.go (new file, 62 lines)
View File

@@ -0,0 +1,62 @@
package cmd
import (
"fmt"
"net/http"
"os"
"path/filepath"
"time"
"github.com/creasty/defaults"
"github.com/kubeshark/kubeshark/config/configStructs"
"github.com/kubeshark/kubeshark/internal/connect"
"github.com/kubeshark/kubeshark/kubernetes"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
var exportCmd = &cobra.Command{
Use: "export",
Short: "Exports the captured traffic into a TAR file that contains PCAP files",
RunE: func(cmd *cobra.Command, args []string) error {
runExport()
return nil
},
}
func init() {
rootCmd.AddCommand(exportCmd)
defaultTapConfig := configStructs.TapConfig{}
if err := defaults.Set(&defaultTapConfig); err != nil {
log.Debug().Err(err).Send()
}
exportCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
exportCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
exportCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
func runExport() {
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
runProxy(false, true)
}
dstPath, err := filepath.Abs(fmt.Sprintf("./%d.tar.gz", time.Now().Unix()))
if err != nil {
panic(err)
}
out, err := os.Create(dstPath)
if err != nil {
panic(err)
}
defer out.Close()
connector := connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
connector.PostPcapsMerge(out)
}
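A usage sketch for the command added above; the output filename follows the `time.Now().Unix()` pattern in `runExport`:

```shell
kubeshark export   # writes ./<unix-timestamp>.tar.gz containing the captured PCAPs
```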

View File

@@ -40,19 +40,19 @@ func init() {
log.Debug().Err(err).Send()
}
proCmd.Flags().Uint16(configStructs.ProxyHubPortLabel, defaultTapConfig.Proxy.Hub.Port, "Provide a custom port for the Hub")
proCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Hub")
proCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
proCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
}
func acquireLicense() {
hubUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
runProxy(false, true)
}
connector = connect.NewConnector(kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port), connect.DefaultRetries, connect.DefaultTimeout)
connector = connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
log.Info().Str("url", PRO_URL).Msg("Opening in the browser:")
utils.OpenBrowser(PRO_URL)

View File

@@ -24,7 +24,7 @@ func init() {
log.Debug().Err(err).Send()
}
proxyCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the front-end proxy/port-forward")
proxyCmd.Flags().Uint16(configStructs.ProxyHubPortLabel, defaultTapConfig.Proxy.Hub.Port, "Provide a custom port for the Hub proxy/port-forward")
proxyCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
proxyCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
proxyCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}

View File

@@ -63,38 +63,8 @@ func runProxy(block bool, noBrowser bool) {
var establishedProxy bool
hubUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err == nil && response.StatusCode == 200 {
log.Info().
Str("service", kubernetes.HubServiceName).
Int("port", int(config.Config.Tap.Proxy.Hub.Port)).
Msg("Found a running service.")
okToOpen("Hub", hubUrl, true)
} else {
startProxyReportErrorIfAny(
kubernetesProvider,
ctx,
kubernetes.HubServiceName,
kubernetes.HubPodName,
configStructs.ProxyHubPortLabel,
config.Config.Tap.Proxy.Hub.Port,
configStructs.ContainerPort,
"/echo",
)
connector := connect.NewConnector(hubUrl, connect.DefaultRetries, connect.DefaultTimeout)
if err := connector.TestConnection("/echo"); err != nil {
log.Error().Msg(fmt.Sprintf(utils.Red, "Couldn't connect to Hub."))
return
}
establishedProxy = true
okToOpen("Hub", hubUrl, true)
}
frontUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Front.Port)
response, err = http.Get(fmt.Sprintf("%s/", frontUrl))
response, err := http.Get(fmt.Sprintf("%s/", frontUrl))
if err == nil && response.StatusCode == 200 {
log.Info().
Str("service", kubernetes.FrontServiceName).

View File

@@ -34,8 +34,9 @@ func init() {
log.Debug().Err(err).Send()
}
scriptsCmd.Flags().Uint16(configStructs.ProxyHubPortLabel, defaultTapConfig.Proxy.Hub.Port, "Provide a custom port for the Hub")
scriptsCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Hub")
scriptsCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the Kubeshark")
scriptsCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the Kubeshark")
scriptsCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
}
func runScripts() {
@@ -44,14 +45,14 @@ func runScripts() {
return
}
hubUrl := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)
hubUrl := kubernetes.GetHubUrl()
response, err := http.Get(fmt.Sprintf("%s/echo", hubUrl))
if err != nil || response.StatusCode != 200 {
log.Info().Msg(fmt.Sprintf(utils.Yellow, "Couldn't connect to Hub. Establishing proxy..."))
runProxy(false, true)
}
connector = connect.NewConnector(kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port), connect.DefaultRetries, connect.DefaultTimeout)
connector = connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
watchScripts(true)
}

View File

@@ -47,8 +47,7 @@ func init() {
tapCmd.Flags().StringP(configStructs.DockerTagLabel, "t", defaultTapConfig.Docker.Tag, "The tag of the Docker images that are going to be pulled")
tapCmd.Flags().String(configStructs.DockerImagePullPolicy, defaultTapConfig.Docker.ImagePullPolicy, "ImagePullPolicy for the Docker images")
tapCmd.Flags().StringSlice(configStructs.DockerImagePullSecrets, defaultTapConfig.Docker.ImagePullSecrets, "ImagePullSecrets for the Docker images")
tapCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the front-end proxy/port-forward")
tapCmd.Flags().Uint16(configStructs.ProxyHubPortLabel, defaultTapConfig.Proxy.Hub.Port, "Provide a custom port for the Hub proxy/port-forward")
tapCmd.Flags().Uint16(configStructs.ProxyFrontPortLabel, defaultTapConfig.Proxy.Front.Port, "Provide a custom port for the proxy/port-forward")
tapCmd.Flags().String(configStructs.ProxyHostLabel, defaultTapConfig.Proxy.Host, "Provide a custom host for the proxy/port-forward")
tapCmd.Flags().StringSliceP(configStructs.NamespacesLabel, "n", defaultTapConfig.Namespaces, "Namespaces selector")
tapCmd.Flags().StringP(configStructs.ReleaseNamespaceLabel, "s", defaultTapConfig.Release.Namespace, "Release namespace of Kubeshark")
@@ -61,4 +60,5 @@ func init() {
tapCmd.Flags().Bool(configStructs.TlsLabel, defaultTapConfig.Tls, "Capture the traffic that's encrypted with OpenSSL or Go crypto/tls libraries")
tapCmd.Flags().Bool(configStructs.IgnoreTaintedLabel, defaultTapConfig.IgnoreTainted, "Ignore tainted pods while running Worker DaemonSet")
tapCmd.Flags().Bool(configStructs.IngressEnabledLabel, defaultTapConfig.Ingress.Enabled, "Enable Ingress")
tapCmd.Flags().Bool(configStructs.TelemetryEnabledLabel, defaultTapConfig.Telemetry.Enabled, "Enable/disable Telemetry")
}

View File

@@ -506,7 +506,7 @@ func pcap(tarPath string) error {
},
}
connector = connect.NewConnector(kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port), connect.DefaultRetries, connect.DefaultTimeout)
connector = connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
connector.PostWorkerPodToHub(workerPod)
// License
@@ -515,7 +515,7 @@ func pcap(tarPath string) error {
}
log.Info().
Str("url", kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)).
Str("url", kubernetes.GetHubUrl()).
Msg(fmt.Sprintf(utils.Green, "Hub is available at:"))
url := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Front.Port)

View File

@@ -65,7 +65,7 @@ func tap() {
Str("limit", config.Config.Tap.StorageLimit).
Msg(fmt.Sprintf("%s will store the traffic up to a limit (per node). Oldest TCP/UDP streams will be removed once the limit is reached.", misc.Software))
connector = connect.NewConnector(kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port), connect.DefaultRetries, connect.DefaultTimeout)
connector = connect.NewConnector(kubernetes.GetHubUrl(), connect.DefaultRetries, connect.DefaultTimeout)
kubernetesProvider, err := getKubernetesProviderForCli(false, false)
if err != nil {
@@ -77,6 +77,11 @@ func tap() {
state.targetNamespaces = kubernetesProvider.GetNamespaces()
log.Info().
Bool("enabled", config.Config.Tap.Telemetry.Enabled).
Str("notice", "Telemetry can be disabled by setting the flag: --telemetry-enabled=false").
Msg("Telemetry")
log.Info().Strs("namespaces", state.targetNamespaces).Msg("Targeting pods in:")
if err := printTargetedPodsPreview(ctx, kubernetesProvider, state.targetNamespaces); err != nil {
@@ -153,7 +158,7 @@ func printNoPodsFoundSuggestion(targetNamespaces []string) {
}
func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", kubernetes.HubPodName))
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.HubPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
isPodReady := false
@@ -244,7 +249,7 @@ func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, c
}
func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", kubernetes.FrontPodName))
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.FrontPodName))
podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
isPodReady := false
@@ -401,16 +406,6 @@ func watchHubEvents(ctx context.Context, kubernetesProvider *kubernetes.Provider
}
func postHubStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc, update bool) {
startProxyReportErrorIfAny(
kubernetesProvider,
ctx,
kubernetes.HubServiceName,
kubernetes.HubPodName,
configStructs.ProxyHubPortLabel,
config.Config.Tap.Proxy.Hub.Port,
configStructs.ContainerPort,
"/echo",
)
if update {
// Pod regex
@@ -439,12 +434,6 @@ func postHubStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider
connector.PostScriptDone()
}
if !update && !config.Config.Tap.Ingress.Enabled {
// Hub proxy URL
url := kubernetes.GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port)
log.Info().Str("url", url).Msg(fmt.Sprintf(utils.Green, "Hub is available at:"))
}
if config.Config.Scripting.Source != "" && config.Config.Scripting.WatchScripts {
watchScripts(false)
}
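The telemetry notice logged above points at the new flag registered on the tap command; opting out at tap time looks like this (a sketch):

```shell
kubeshark tap --telemetry-enabled=false
```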

View File

@@ -14,7 +14,21 @@ const (
)
func CreateDefaultConfig() ConfigStruct {
return ConfigStruct{}
return ConfigStruct{
Tap: configStructs.TapConfig{
NodeSelectorTerms: []v1.NodeSelectorTerm{
{
MatchExpressions: []v1.NodeSelectorRequirement{
{
Key: "kubernetes.io/os",
Operator: v1.NodeSelectorOpIn,
Values: []string{"linux"},
},
},
},
},
},
}
}
type KubeConfig struct {

View File

@@ -12,7 +12,7 @@ import (
type ScriptingConfig struct {
Env map[string]interface{} `yaml:"env" json:"env"`
Source string `yaml:"source" json:"source" default:""`
WatchScripts bool `yaml:"watchScripts" json:"watchScripts" default:"true"`
WatchScripts bool `yaml:"watchscripts" json:"watchscripts" default:"true"`
}
func (config *ScriptingConfig) GetScripts() (scripts []*misc.Script, err error) {

View File

@@ -27,6 +27,7 @@ const (
TlsLabel = "tls"
IgnoreTaintedLabel = "ignoretainted"
IngressEnabledLabel = "ingress-enabled"
TelemetryEnabledLabel = "telemetry-enabled"
DebugLabel = "debug"
ContainerPort = 80
ContainerPortStr = "80"
@@ -80,17 +81,17 @@ type ResourcesConfig struct {
}
type AuthConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
ApprovedEmails []string `yaml:"approvedemails" json:"approvedemails" default:"[]"`
ApprovedDomains []string `yaml:"approveddomains" json:"approveddomains" default:"[]"`
}
type IngressConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"false"`
ClassName string `yaml:"classname" json:"classname" default:"kubeshark-ingress-class"`
Controller string `yaml:"controller" json:"controller" default:"k8s.io/ingress-nginx"`
ClassName string `yaml:"classname" json:"classname" default:""`
Host string `yaml:"host" json:"host" default:"ks.svc.cluster.local"`
TLS []networking.IngressTLS `yaml:"tls" json:"tls"`
Auth AuthConfig `yaml:"auth" json:"auth"`
CertManager string `yaml:"certmanager" json:"certmanager" default:"letsencrypt-prod"`
TLS []networking.IngressTLS `yaml:"tls" json:"tls" default:"[]"`
Annotations map[string]string `yaml:"annotations" json:"annotations" default:"{}"`
}
type ReleaseConfig struct {
@@ -99,6 +100,10 @@ type ReleaseConfig struct {
Namespace string `yaml:"namespace" json:"namespace" default:"default"`
}
type TelemetryConfig struct {
Enabled bool `yaml:"enabled" json:"enabled" default:"true"`
}
type TapConfig struct {
Docker DockerConfig `yaml:"docker" json:"docker"`
Proxy ProxyConfig `yaml:"proxy" json:"proxy"`
@@ -118,8 +123,11 @@ type TapConfig struct {
Labels map[string]string `yaml:"labels" json:"labels" default:"{}"`
Annotations map[string]string `yaml:"annotations" json:"annotations" default:"{}"`
NodeSelectorTerms []v1.NodeSelectorTerm `yaml:"nodeselectorterms" json:"nodeselectorterms" default:"[]"`
Auth AuthConfig `yaml:"auth" json:"auth"`
Ingress IngressConfig `yaml:"ingress" json:"ingress"`
IPv6 bool `yaml:"ipv6" json:"ipv6" default:"true"`
Debug bool `yaml:"debug" json:"debug" default:"false"`
Telemetry TelemetryConfig `yaml:"telemetry" json:"telemetry"`
}
func (config *TapConfig) PodRegex() *regexp.Regexp {

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: "41.6"
appVersion: "50.3"
description: The API Traffic Analyzer for Kubernetes
home: https://kubeshark.co
keywords:
@@ -22,5 +22,5 @@ name: kubeshark
sources:
- https://github.com/kubeshark/kubeshark/tree/master/helm-chart
type: application
version: "41.6"
version: "50.3"
icon: https://raw.githubusercontent.com/kubeshark/assets/master/logo/vector/logo.svg

View File

@@ -58,9 +58,10 @@ Visit [localhost:8899](http://localhost:8899)
helm install kubeshark kubeshark/kubeshark \
--set tap.ingress.enabled=true \
--set tap.ingress.host=ks.svc.cluster.local \
--set "tap.ingress.auth.approveddomains={gmail.com}" \
--set "tap.ingress.approveddomains={gmail.com}" \
--set license=LICENSE_GOES_HERE
```
You can get your license [here](https://console.kubeshark.co/).
## Installing with Persistent Storage Enabled
@@ -70,4 +71,14 @@ helm install kubeshark kubeshark/kubeshark \
--set tap.persistentstorage=true \
--set license=LICENSE_GOES_HERE
```
You can get your license [here](https://console.kubeshark.co/).
## Disabling IPv6
Not all clusters have IPv6 enabled, hence it may need to be disabled as follows:
```shell
helm install kubeshark kubeshark/kubeshark \
--set tap.ipv6=false
```

View File

@@ -2,14 +2,11 @@
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-service-account
name: {{ include "kubeshark.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -2,11 +2,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
@@ -18,15 +15,37 @@ rules:
- ""
- extensions
- apps
- networking.k8s.io
resources:
- pods
- services
- services/proxy
- endpoints
- persistentvolumeclaims
- ingresses
verbs:
- list
- get
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-self-secrets-role
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- "v1"
- ""
resourceNames:
- kubeshark-secret
resources:
- secrets
verbs:
- get
- watch
- update
- patch

View File

@@ -2,11 +2,8 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
@@ -19,7 +16,25 @@ roleRef:
name: kubeshark-cluster-role
subjects:
- kind: ServiceAccount
name: kubeshark-service-account
name: {{ include "kubeshark.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
- kind: User
name: system:anonymous
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubeshark-self-secrets-role-binding
labels:
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ include "kubeshark.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: kubeshark-self-secrets-role
apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,58 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kubeshark.fullname" . }}-hub
namespace: {{ .Release.Namespace }}
labels:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
spec:
replicas: 1 # Set the desired number of replicas
selector:
matchLabels:
app.kubeshark.co/app: hub
template:
metadata:
labels:
app.kubeshark.co/app: hub
sidecar.istio.io/inject: "false"
spec:
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
containers:
- name: kubeshark-hub
command:
- ./hub
{{ .Values.tap.debug | ternary "- -debug" "" }}
envFrom:
- configMapRef:
name: kubeshark-config-map
- secretRef:
name: kubeshark-secret
image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.tag }}'
imagePullPolicy: {{ .Values.tap.docker.imagepullpolicy }}
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
resources:
limits:
cpu: {{ .Values.tap.resources.hub.limits.cpu }}
memory: {{ .Values.tap.resources.hub.limits.memory }}
requests:
cpu: {{ .Values.tap.resources.hub.requests.cpu }}
memory: {{ .Values.tap.resources.hub.requests.memory }}

View File

@@ -1,56 +0,0 @@
---
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
app: kubeshark-hub
sidecar.istio.io/inject: "false"
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-hub
namespace: {{ .Release.Namespace }}
spec:
containers:
- command:
- ./hub
{{ .Values.tap.debug | ternary "- -debug" "" }}
env:
- name: POD_REGEX
value: '{{ .Values.tap.regex }}'
- name: NAMESPACES
value: '{{ gt (len .Values.tap.namespaces) 0 | ternary (join "," .Values.tap.namespaces) "" }}'
- name: LICENSE
value: '{{ .Values.license }}'
- name: SCRIPTING_ENV
value: '{{ .Values.scripting.env | toJson }}'
- name: SCRIPTING_SCRIPTS
value: '[]'
- name: AUTH_APPROVED_DOMAINS
value: '{{ gt (len .Values.tap.ingress.auth.approveddomains) 0 | ternary (join "," .Values.tap.ingress.auth.approveddomains) "" }}'
image: '{{ .Values.tap.docker.registry }}/hub:{{ .Values.tap.docker.tag }}'
imagePullPolicy: {{ .Values.tap.docker.imagepullpolicy }}
name: kubeshark-hub
resources:
limits:
cpu: {{ .Values.tap.resources.hub.limits.cpu }}
memory: {{ .Values.tap.resources.hub.limits.memory }}
requests:
cpu: {{ .Values.tap.resources.hub.requests.cpu }}
memory: {{ .Values.tap.resources.hub.requests.memory }}
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
operator: Exists
{{- if not .Values.tap.ignoretainted }}
- effect: NoSchedule
operator: Exists
{{- end }}
status: {}

View File

@@ -2,11 +2,9 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
@@ -19,7 +17,5 @@ spec:
port: 80
targetPort: 80
selector:
app: kubeshark-hub
type: NodePort
status:
loadBalancer: {}
app.kubeshark.co/app: hub
type: ClusterIP

View File

@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kubeshark.fullname" . }}-front
namespace: {{ .Release.Namespace }}
labels:
app.kubeshark.co/app: front
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
spec:
replicas: 1 # Set the desired number of replicas
selector:
matchLabels:
app.kubeshark.co/app: front
template:
metadata:
labels:
app.kubeshark.co/app: front
spec:
containers:
- env:
- name: REACT_APP_DEFAULT_FILTER
value: ' '
- name: REACT_APP_HUB_HOST
value: ' '
- name: REACT_APP_HUB_PORT
value: '{{ .Values.tap.ingress.enabled | ternary "/api" (print ":" .Values.tap.proxy.front.port "/api") }}'
image: '{{ .Values.tap.docker.registry }}/front:{{ .Values.tap.docker.tag }}'
imagePullPolicy: {{ .Values.tap.docker.imagepullpolicy }}
name: kubeshark-front
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
timeoutSeconds: 1
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
readOnly: true
volumes:
- name: nginx-config
configMap:
name: kubeshark-nginx-config-map
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}

View File

@@ -1,54 +0,0 @@
---
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
app: kubeshark-front
sidecar.istio.io/inject: "false"
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-front
namespace: {{ .Release.Namespace }}
spec:
containers:
- env:
- name: REACT_APP_DEFAULT_FILTER
value: ' '
- name: REACT_APP_HUB_HOST
value: ' '
- name: REACT_APP_HUB_PORT
value: '{{ .Values.tap.ingress.enabled | ternary "/api" (print ":" .Values.tap.proxy.hub.port) }}'
image: '{{ .Values.tap.docker.registry }}/front:{{ .Values.tap.docker.tag }}'
imagePullPolicy: {{ .Values.tap.docker.imagepullpolicy }}
name: kubeshark-front
readinessProbe:
failureThreshold: 3
periodSeconds: 1
successThreshold: 1
tcpSocket:
port: 80
timeoutSeconds: 1
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
operator: Exists
{{- if not .Values.tap.ignoretainted }}
- effect: NoSchedule
operator: Exists
{{- end }}
status: {}

View File

@@ -2,11 +2,8 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
@@ -19,7 +16,5 @@ spec:
port: 80
targetPort: 80
selector:
app: kubeshark-front
type: NodePort
status:
loadBalancer: {}
app.kubeshark.co/app: front
type: ClusterIP

View File

@@ -3,11 +3,8 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}

View File

@@ -2,13 +2,10 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
creationTimestamp: null
labels:
app: kubeshark-worker-daemon-set
app.kubeshark.co/app: worker
sidecar.istio.io/inject: "false"
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
@@ -18,18 +15,13 @@ metadata:
spec:
selector:
matchLabels:
app: kubeshark-worker-daemon-set
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 6 }}
{{- end }}
app.kubeshark.co/app: worker
{{- include "kubeshark.labels" . | nindent 6 }}
template:
metadata:
creationTimestamp: null
labels:
app: kubeshark-worker-daemon-set
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 8 }}
{{- end }}
app.kubeshark.co/app: worker
{{- include "kubeshark.labels" . | nindent 8 }}
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
@@ -41,13 +33,16 @@ spec:
- -port
- '{{ .Values.tap.proxy.worker.srvport }}'
- -servicemesh
- -tls
{{ .Values.tap.tls | ternary "- -tls" "" }}
- -procfs
- /hostproc
{{ .Values.tap.debug | ternary "- -debug" "" }}
image: '{{ .Values.tap.docker.registry }}/worker:{{ .Values.tap.docker.tag }}'
imagePullPolicy: {{ .Values.tap.docker.imagepullpolicy }}
name: kubeshark-worker-daemon-set
envFrom:
- secretRef:
name: kubeshark-secret
{{- if .Values.tap.debug }}
env:
- name: PROFILING_ENABLED
@@ -73,8 +68,23 @@ spec:
- SYS_PTRACE
- DAC_OVERRIDE
- SYS_RESOURCE
- SYS_MODULE
drop:
- ALL
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvport }}
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
tcpSocket:
port: {{ .Values.tap.proxy.worker.srvport }}
volumeMounts:
- mountPath: /hostproc
name: proc
@@ -88,7 +98,7 @@ spec:
{{- end }}
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
serviceAccountName: kubeshark-service-account
serviceAccountName: {{ include "kubeshark.serviceAccountName" . }}
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute

View File

@@ -1,19 +0,0 @@
---
{{- if .Values.tap.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
annotations:
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
name: kubeshark-ingress-class
namespace: {{ .Release.Namespace }}
spec:
controller: {{ .Values.tap.ingress.controller }}
{{- end }}

View File

@@ -4,40 +4,34 @@ apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: {{ .Values.tap.ingress.certmanager }}
nginx.ingress.kubernetes.io/rewrite-target: /$2
{{- if .Values.tap.annotations }}
nginx.org/websocket-services: "kubeshark-front"
{{- if .Values.tap.annotations }}
{{- toYaml .Values.tap.annotations | nindent 4 }}
{{- end }}
{{- if .Values.tap.ingress.annotations }}
{{- toYaml .Values.tap.ingress.annotations | nindent 4 }}
{{- end }}
creationTimestamp: null
labels:
{{- if .Values.tap.labels }}
{{- toYaml .Values.tap.labels | nindent 4 }}
{{- end }}
{{- include "kubeshark.labels" . | nindent 4 }}
name: kubeshark-ingress
namespace: {{ .Release.Namespace }}
spec:
{{- if .Values.tap.ingress.classname }}
ingressClassName: {{ .Values.tap.ingress.classname }}
{{- end }}
rules:
- host: {{ .Values.tap.ingress.host }}
http:
paths:
- backend:
service:
name: kubeshark-hub
port:
number: 80
path: /api(/|$)(.*)
pathType: Prefix
- backend:
service:
name: kubeshark-front
port:
number: 80
path: /()(.*)
path: /
pathType: Prefix
{{- if .Values.tap.ingress.tls }}
tls:
{{- if gt (len .Values.tap.ingress.tls) 0}}
{{- toYaml .Values.tap.ingress.tls | nindent 2 }}
{{- end }}
status:

View File

@@ -0,0 +1,46 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kubeshark-nginx-config-map
namespace: {{ .Release.Namespace }}
labels:
{{- include "kubeshark.labels" . | nindent 4 }}
data:
default.conf: |
server {
listen 80;
{{- if .Values.tap.ipv6 }}
listen [::]:80;
{{- end }}
access_log /dev/stdout;
error_log /dev/stdout;
location /api {
rewrite ^/api(.*)$ $1 break;
proxy_pass http://kubeshark-hub;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_set_header Upgrade websocket;
proxy_set_header Connection Upgrade;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
proxy_connect_timeout 4s;
proxy_read_timeout 120s;
proxy_send_timeout 12s;
proxy_pass_request_headers on;
}
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
expires -1;
add_header Cache-Control no-cache;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}

View File

@@ -0,0 +1,17 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: kubeshark-config-map
namespace: {{ .Release.Namespace }}
labels:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
data:
POD_REGEX: '{{ .Values.tap.regex }}'
NAMESPACES: '{{ gt (len .Values.tap.namespaces) 0 | ternary (join "," .Values.tap.namespaces) "" }}'
SCRIPTING_ENV: '{{ .Values.scripting.env | toJson }}'
SCRIPTING_SCRIPTS: '[]'
AUTH_ENABLED: '{{ .Values.tap.auth.enabled | ternary "true" "" }}'
AUTH_APPROVED_EMAILS: '{{ gt (len .Values.tap.auth.approvedemails) 0 | ternary (join "," .Values.tap.auth.approvedemails) "" }}'
AUTH_APPROVED_DOMAINS: '{{ gt (len .Values.tap.auth.approveddomains) 0 | ternary (join "," .Values.tap.auth.approveddomains) "" }}'
TELEMETRY_DISABLED: '{{ not .Values.tap.telemetry.enabled | ternary "true" "" }}'

View File

@@ -0,0 +1,10 @@
kind: Secret
apiVersion: v1
metadata:
name: kubeshark-secret
namespace: {{ .Release.Namespace }}
labels:
app.kubeshark.co/app: hub
{{- include "kubeshark.labels" . | nindent 4 }}
stringData:
LICENSE: '{{ .Values.license }}'

View File

@@ -0,0 +1,25 @@
Thank you for installing {{ title .Chart.Name }}.
Your deployment has been successful. The release is named {{ .Release.Name }} and it has been deployed in the {{ .Release.Namespace }} namespace.
{{- if .Values.tap.telemetry.enabled }}
Notice: Telemetry is enabled. Kubeshark will collect anonymous usage statistics.
{{ end }}
{{- if .Values.tap.ingress.enabled }}
You can now access the application through the following URL:
http{{ if .Values.tap.ingress.tls }}s{{ end }}://{{ .Values.tap.ingress.host }}
{{- else }}
To access the application, follow these steps:
1. Perform port forwarding with the following commands:
kubectl port-forward -n {{ .Release.Namespace }} service/kubeshark-hub 8898:80 & \
kubectl port-forward -n {{ .Release.Namespace }} service/kubeshark-front 8899:80
2. Once port forwarding is done, you can access the application by visiting the following URL in your web browser:
http://0.0.0.0:8899
{{ end }}

View File

@@ -0,0 +1,68 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "kubeshark.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kubeshark.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kubeshark.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "kubeshark.labels" -}}
helm.sh/chart: {{ include "kubeshark.chart" . }}
{{ include "kubeshark.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.additionalLabels }}
{{ toYaml . }}
{{- end }}
{{- if .Values.tap.labels }}
{{ toYaml .Values.tap.labels }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "kubeshark.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kubeshark.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "kubeshark.serviceAccountName" -}}
{{- if and .Values.serviceAccount .Values.serviceAccount.create }}
{{- default (include "kubeshark.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- printf "%s-service-account" .Release.Name }}
{{- end }}
{{- end }}

View File

@@ -1,38 +1,73 @@
config: {}
dumplogs: false
headless: false
kube:
configpath: ""
context: ""
license: ""
logs:
file: ""
manifests:
dump: false
scripting:
env: null
source: ""
watchscripts: true
tap:
annotations: {}
auth:
approveddomains: []
approvedemails: []
enabled: false
debug: false
docker:
imagepullpolicy: Always
imagepullsecrets: null
registry: docker.io/kubeshark
tag: latest
imagepullpolicy: Always
imagepullsecrets: []
dryrun: false
ignoretainted: false
ingress:
annotations: {}
classname: ""
enabled: false
host: ks.svc.cluster.local
tls: []
ipv6: true
labels: {}
namespaces: []
nodeselectorterms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
packetcapture: libpcap
pcap: ""
persistentstorage: false
proxy:
worker:
srvport: 8897
hub:
port: 8898
srvport: 8898
front:
port: 8899
host: 127.0.0.1
hub:
port: 8898
srvport: 8898
worker:
srvport: 8897
regex: .*
namespaces: []
release:
repo: https://helm.kubeshark.co
name: kubeshark
namespace: default
persistentstorage: false
storagelimit: 200Mi
storageclass: standard
dryrun: false
pcap: ""
repo: https://helm.kubeshark.co
resources:
worker:
hub:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
hub:
worker:
limits:
cpu: 750m
memory: 1Gi
@@ -40,31 +75,8 @@ tap:
cpu: 50m
memory: 50Mi
servicemesh: true
storageclass: standard
storagelimit: 200Mi
telemetry:
enabled: true
tls: true
packetcapture: libpcap
ignoretainted: false
labels: {}
annotations: {}
nodeselectorterms: []
ingress:
enabled: false
classname: kubeshark-ingress-class
controller: k8s.io/ingress-nginx
host: ks.svc.cluster.local
tls: []
auth:
approveddomains: []
certmanager: letsencrypt-prod
debug: false
logs:
file: ""
kube:
configpath: ""
context: ""
dumplogs: false
headless: false
license: ""
scripting:
env: {}
source: ""
watchScripts: true

View File

@@ -5,8 +5,10 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"os"
"time"
"github.com/kubeshark/kubeshark/config"
@@ -340,3 +342,40 @@ func (connector *Connector) PostScriptDone() {
time.Sleep(DefaultSleep)
}
}
func (connector *Connector) PostPcapsMerge(out *os.File) {
postEnvUrl := fmt.Sprintf("%s/pcaps/merge", connector.url)
if envMarshalled, err := json.Marshal(map[string]string{"query": ""}); err != nil {
log.Error().Err(err).Msg("Failed to marshal the env:")
} else {
ok := false
for !ok {
var resp *http.Response
if resp, err = utils.Post(postEnvUrl, "application/json", bytes.NewBuffer(envMarshalled), connector.client, config.Config.License); err != nil || resp.StatusCode != http.StatusOK {
if _, ok := err.(*url.Error); ok {
break
}
log.Warn().Err(err).Msg("Failed exported PCAP download. Retrying...")
} else {
defer resp.Body.Close()
// Check server response
if resp.StatusCode != http.StatusOK {
log.Error().Str("status", resp.Status).Err(err).Msg("Failed exported PCAP download.")
return
}
// Write the body to file
_, err = io.Copy(out, resp.Body)
if err != nil {
log.Error().Err(err).Msg("Failed writing PCAP export:")
return
}
log.Info().Str("path", out.Name()).Msg("Downloaded exported PCAP:")
return
}
time.Sleep(DefaultSleep)
}
}
}

View File

@@ -6,23 +6,6 @@ const (
FrontServiceName = FrontPodName
HubPodName = SelfResourcesPrefix + "hub"
HubServiceName = HubPodName
ClusterRoleBindingName = SelfResourcesPrefix + "cluster-role-binding"
ClusterRoleName = SelfResourcesPrefix + "cluster-role"
K8sAllNamespaces = ""
RoleBindingName = SelfResourcesPrefix + "role-binding"
RoleName = SelfResourcesPrefix + "role"
ServiceAccountName = SelfResourcesPrefix + "service-account"
WorkerDaemonSetName = SelfResourcesPrefix + "worker-daemon-set"
WorkerPodName = SelfResourcesPrefix + "worker"
PersistentVolumeName = SelfResourcesPrefix + "persistent-volume"
PersistentVolumeClaimName = SelfResourcesPrefix + "persistent-volume-claim"
IngressName = SelfResourcesPrefix + "ingress"
IngressClassName = SelfResourcesPrefix + "ingress-class"
PersistentVolumeHostPath = "/app/data"
MinKubernetesServerVersion = "1.16.0"
)
const (
LabelManagedBy = SelfResourcesPrefix + "managed-by"
LabelCreatedBy = SelfResourcesPrefix + "created-by"
)

View File

@@ -14,7 +14,6 @@ import (
"github.com/kubeshark/kubeshark/semver"
"github.com/kubeshark/kubeshark/utils"
"github.com/rs/zerolog/log"
auth "k8s.io/api/authorization/v1"
core "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -71,61 +70,11 @@ func NewProvider(kubeConfigPath string, contextName string) (*Provider, error) {
}, nil
}
func (provider *Provider) CanI(ctx context.Context, namespace string, resource string, verb string, group string) (bool, error) {
selfSubjectAccessReview := &auth.SelfSubjectAccessReview{
Spec: auth.SelfSubjectAccessReviewSpec{
ResourceAttributes: &auth.ResourceAttributes{
Namespace: namespace,
Resource: resource,
Verb: verb,
Group: group,
},
},
}
response, err := provider.clientSet.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, selfSubjectAccessReview, metav1.CreateOptions{})
if err != nil {
return false, err
}
return response.Status.Allowed, nil
}
func (provider *Provider) DoesNamespaceExist(ctx context.Context, name string) (bool, error) {
namespaceResource, err := provider.clientSet.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(namespaceResource, err)
}
func (provider *Provider) DoesServiceAccountExist(ctx context.Context, namespace string, name string) (bool, error) {
serviceAccountResource, err := provider.clientSet.CoreV1().ServiceAccounts(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(serviceAccountResource, err)
}
func (provider *Provider) DoesServiceExist(ctx context.Context, namespace string, name string) (bool, error) {
serviceResource, err := provider.clientSet.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(serviceResource, err)
}
func (provider *Provider) DoesClusterRoleExist(ctx context.Context, name string) (bool, error) {
clusterRoleResource, err := provider.clientSet.RbacV1().ClusterRoles().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(clusterRoleResource, err)
}
func (provider *Provider) DoesClusterRoleBindingExist(ctx context.Context, name string) (bool, error) {
clusterRoleBindingResource, err := provider.clientSet.RbacV1().ClusterRoleBindings().Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(clusterRoleBindingResource, err)
}
func (provider *Provider) DoesRoleExist(ctx context.Context, namespace string, name string) (bool, error) {
roleResource, err := provider.clientSet.RbacV1().Roles(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(roleResource, err)
}
func (provider *Provider) DoesRoleBindingExist(ctx context.Context, namespace string, name string) (bool, error) {
roleBindingResource, err := provider.clientSet.RbacV1().RoleBindings(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(roleBindingResource, err)
}
func (provider *Provider) doesResourceExist(resource interface{}, err error) (bool, error) {
// Getting NotFound error is the expected behavior when a resource does not exist.
if k8serrors.IsNotFound(err) {
@@ -178,8 +127,14 @@ func (provider *Provider) ListAllRunningPodsMatchingRegex(ctx context.Context, r
return matchingPods, nil
}
func (provider *Provider) ListPodsByAppLabel(ctx context.Context, namespaces string, labelName string) ([]core.Pod, error) {
pods, err := provider.clientSet.CoreV1().Pods(namespaces).List(ctx, metav1.ListOptions{LabelSelector: fmt.Sprintf("app=%s", labelName)})
func (provider *Provider) ListPodsByAppLabel(ctx context.Context, namespaces string, labels map[string]string) ([]core.Pod, error) {
pods, err := provider.clientSet.CoreV1().Pods(namespaces).List(ctx, metav1.ListOptions{
LabelSelector: metav1.FormatLabelSelector(
&metav1.LabelSelector{
MatchLabels: labels,
},
),
})
if err != nil {
return nil, err
}
@@ -187,15 +142,6 @@ func (provider *Provider) ListPodsByAppLabel(ctx context.Context, namespaces str
return pods.Items, err
}
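
ListPodsByAppLabel now accepts a label map and renders it with metav1.FormatLabelSelector instead of formatting an app=<name> string by hand. A small sketch showing the selector string that ends up in ListOptions.LabelSelector for the hub pods:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Build the selector the same way the new ListPodsByAppLabel does.
	selector := metav1.FormatLabelSelector(&metav1.LabelSelector{
		MatchLabels: map[string]string{"app.kubeshark.co/app": "hub"},
	})
	fmt.Println(selector) // prints: app.kubeshark.co/app=hub
}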
func (provider *Provider) ListAllNamespaces(ctx context.Context) ([]core.Namespace, error) {
namespaces, err := provider.clientSet.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
if err != nil {
return nil, err
}
return namespaces.Items, err
}
func (provider *Provider) GetPodLogs(ctx context.Context, namespace string, podName string, containerName string) (string, error) {
podLogOpts := core.PodLogOptions{Container: containerName}
req := provider.clientSet.CoreV1().Pods(namespace).GetLogs(podName, &podLogOpts)

View File

@@ -24,6 +24,7 @@ const selfServicePort = 80
func StartProxy(kubernetesProvider *Provider, proxyHost string, srcPort uint16, selfNamespace string, selfServiceName string) (*http.Server, error) {
log.Info().
Str("proxy-host", proxyHost).
Str("namespace", selfNamespace).
Str("service", selfServiceName).
Int("src-port", int(srcPort)).
@@ -71,6 +72,10 @@ func GetProxyOnPort(port uint16) string {
return fmt.Sprintf("http://%s:%d", config.Config.Tap.Proxy.Host, port)
}
func GetHubUrl() string {
return fmt.Sprintf("%s/api", GetProxyOnPort(config.Config.Tap.Proxy.Hub.Port))
}
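
GetHubUrl composes the Hub API base URL from the proxy host and port returned by GetProxyOnPort. A tiny standalone sketch of that composition; the host and port values below are assumptions standing in for config.Config.Tap.Proxy:

package main

import "fmt"

// Stand-ins for config.Config.Tap.Proxy.Host and config.Config.Tap.Proxy.Hub.Port;
// the concrete values are illustrative assumptions, not taken from this diff.
const (
	proxyHost = "127.0.0.1"
	hubPort   = uint16(8899)
)

func getProxyOnPort(port uint16) string {
	return fmt.Sprintf("http://%s:%d", proxyHost, port)
}

func getHubUrl() string {
	return fmt.Sprintf("%s/api", getProxyOnPort(hubPort))
}

func main() {
	fmt.Println(getHubUrl()) // e.g. http://127.0.0.1:8899/api
}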
func getRerouteHttpHandlerSelfAPI(proxyHandler http.Handler, selfNamespace string, selfServiceName string) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
@@ -101,7 +106,7 @@ func getRerouteHttpHandlerSelfStatic(proxyHandler http.Handler, selfNamespace st
}
func NewPortForward(kubernetesProvider *Provider, namespace string, podRegex *regexp.Regexp, srcPort uint16, dstPort uint16, ctx context.Context) (*portforward.PortForwarder, error) {
pods, err := kubernetesProvider.ListAllRunningPodsMatchingRegex(ctx, podRegex, []string{namespace})
pods, err := kubernetesProvider.ListPodsByAppLabel(ctx, namespace, map[string]string{"app.kubeshark.co/app": "hub"})
if err != nil {
return nil, err
} else if len(pods) == 0 {

View File

@@ -3,18 +3,113 @@
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-service-account
namespace: default
---
# Source: kubeshark/templates/13-secret.yaml
kind: Secret
apiVersion: v1
metadata:
name: kubeshark-secret
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''
---
# Source: kubeshark/templates/11-nginx-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      access_log /dev/stdout;
      error_log /dev/stdout;
      location /api {
        rewrite ^/api(.*)$ $1 break;
        proxy_pass http://kubeshark-hub;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection Upgrade;
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;
        proxy_connect_timeout 4s;
        proxy_read_timeout 120s;
        proxy_send_timeout 12s;
        proxy_pass_request_headers on;
      }
      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
        expires -1;
        add_header Cache-Control no-cache;
      }
      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }
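
The /api location above strips the /api prefix before proxying to the kubeshark-hub Service and forwards the Authorization and websocket upgrade headers (the change from PR #1426). As a rough, hedged illustration of the same routing in Go: the upstream host and document root are taken from the config above, everything else is an assumption, and this sketch does not reproduce all of nginx's header and timeout handling:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Assumed upstream; inside the cluster this would resolve to the kubeshark-hub Service.
	upstream, err := url.Parse("http://kubeshark-hub")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	mux := http.NewServeMux()
	// Rough equivalent of `location /api { rewrite ^/api(.*)$ $1 break; proxy_pass ...; }`:
	// drop the /api prefix, then hand the request to the upstream proxy.
	mux.Handle("/api/", http.StripPrefix("/api", proxy))
	// Everything else serves the static front-end, like the `location /` block.
	mux.Handle("/", http.FileServer(http.Dir("/usr/share/nginx/html")))

	log.Fatal(http.ListenAndServe(":80", mux))
}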
---
# Source: kubeshark/templates/12-config-map.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: kubeshark-config-map
namespace: default
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'
NAMESPACES: ''
SCRIPTING_ENV: 'null'
SCRIPTING_SCRIPTS: '[]'
AUTH_ENABLED: ''
AUTH_APPROVED_EMAILS: ''
AUTH_APPROVED_DOMAINS: ''
TELEMETRY_DISABLED: ''
---
# Source: kubeshark/templates/02-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role
namespace: default
@@ -23,14 +118,11 @@ rules:
- ""
- extensions
- apps
- networking.k8s.io
resources:
- pods
- services
- services/proxy
- endpoints
- persistentvolumeclaims
- ingresses
verbs:
- list
- get
@@ -40,8 +132,12 @@ rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-cluster-role-binding
namespace: default
@@ -53,15 +149,67 @@ subjects:
- kind: ServiceAccount
name: kubeshark-service-account
namespace: default
- kind: User
name: system:anonymous
---
# Source: kubeshark/templates/02-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-secrets-role
namespace: default
rules:
- apiGroups:
- "v1"
- ""
resourceNames:
- kubeshark-secret
resources:
- secrets
verbs:
- get
- watch
- update
- patch
---
# Source: kubeshark/templates/03-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubeshark-self-secrets-role-binding
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
namespace: default
subjects:
- kind: ServiceAccount
name: kubeshark-service-account
namespace: default
roleRef:
kind: Role
name: kubeshark-self-secrets-role
apiGroup: rbac.authorization.k8s.io
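
Together with the RoleBinding above, this Role (PR #1416) lets kubeshark-service-account get, watch, update and patch its own kubeshark-secret so the stored license value can be kept up to date. A minimal sketch of such an in-cluster update with client-go; the namespace, Secret name and LICENSE key come from the manifests above, while the actual update flow inside the hub is an assumption:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Runs inside the cluster under kubeshark-service-account, which the Role above
	// permits to read and write the "kubeshark-secret" Secret in its own namespace.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientSet, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	secrets := clientSet.CoreV1().Secrets("default")
	secret, err := secrets.Get(context.Background(), "kubeshark-secret", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Illustrative update of the LICENSE key; where the new value comes from is not
	// part of this diff.
	if secret.StringData == nil {
		secret.StringData = map[string]string{}
	}
	secret.StringData["LICENSE"] = "<new-license-value>"
	if _, err := secrets.Update(context.Background(), secret, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}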
---
# Source: kubeshark/templates/05-hub-service.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-hub
namespace: default
@@ -71,17 +219,19 @@ spec:
port: 80
targetPort: 80
selector:
app: kubeshark-hub
type: NodePort
status:
loadBalancer: {}
app.kubeshark.co/app: hub
type: ClusterIP
---
# Source: kubeshark/templates/07-front-service.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front
namespace: default
@@ -91,31 +241,42 @@ spec:
port: 80
targetPort: 80
selector:
app: kubeshark-front
type: NodePort
status:
loadBalancer: {}
app.kubeshark.co/app: front
type: ClusterIP
---
# Source: kubeshark/templates/09-worker-daemon-set.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
creationTimestamp: null
labels:
app: kubeshark-worker-daemon-set
app.kubeshark.co/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-daemon-set
namespace: default
spec:
selector:
matchLabels:
app: kubeshark-worker-daemon-set
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
template:
metadata:
creationTimestamp: null
labels:
app: kubeshark-worker-daemon-set
app.kubeshark.co/app: worker
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark
spec:
@@ -134,6 +295,9 @@ spec:
image: 'docker.io/kubeshark/worker:latest'
imagePullPolicy: Always
name: kubeshark-worker-daemon-set
envFrom:
- secretRef:
name: kubeshark-secret
resources:
limits:
cpu: 750m
@@ -150,8 +314,23 @@ spec:
- SYS_PTRACE
- DAC_OVERRIDE
- SYS_RESOURCE
- SYS_MODULE
drop:
- ALL
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
tcpSocket:
port: 8897
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 5
tcpSocket:
port: 8897
volumeMounts:
- mountPath: /hostproc
name: proc
@@ -168,6 +347,15 @@ spec:
operator: Exists
- effect: NoSchedule
operator: Exists
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
volumes:
- hostPath:
path: /proc
@@ -176,98 +364,132 @@ spec:
path: /sys
name: sys
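
The worker DaemonSet above gains tcpSocket readiness and liveness probes on port 8897 plus a Linux-only node affinity. A tcpSocket probe succeeds if the kubelet can open a TCP connection to the port; a minimal Go sketch of the equivalent userspace check (the target address is an illustrative assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same idea as the DaemonSet's tcpSocket probes: try to open a TCP connection
	// to the worker's port 8897 and report success or failure.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8897", time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("probe succeeded")
}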
---
# Source: kubeshark/templates/04-hub-pod.yaml
apiVersion: v1
kind: Pod
# Source: kubeshark/templates/04-hub-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: kubeshark-hub
sidecar.istio.io/inject: "false"
annotations:
name: kubeshark-hub
namespace: default
spec:
containers:
- command:
- ./hub
env:
- name: POD_REGEX
value: '.*'
- name: NAMESPACES
value: ''
- name: LICENSE
value: ''
- name: SCRIPTING_ENV
value: '{}'
- name: SCRIPTING_SCRIPTS
value: '[]'
- name: AUTH_APPROVED_DOMAINS
value: ''
image: 'docker.io/kubeshark/hub:latest'
imagePullPolicy: Always
name: kubeshark-hub
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
status: {}
---
# Source: kubeshark/templates/06-front-pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
app: kubeshark-front
sidecar.istio.io/inject: "false"
app.kubeshark.co/app: hub
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
spec:
replicas: 1 # Set the desired number of replicas
selector:
matchLabels:
app.kubeshark.co/app: hub
template:
metadata:
labels:
app.kubeshark.co/app: hub
sidecar.istio.io/inject: "false"
spec:
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account
containers:
- name: kubeshark-hub
command:
- ./hub
envFrom:
- configMapRef:
name: kubeshark-config-map
- secretRef:
name: kubeshark-secret
image: 'docker.io/kubeshark/hub:latest'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
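
The hub is now a Deployment whose container receives its settings via envFrom from kubeshark-config-map and kubeshark-secret instead of individual env entries. A hedged sketch of what that looks like from inside the container; the variable names come from the ConfigMap and Secret above, but how the hub actually parses them is not shown in this diff:

package main

import (
	"fmt"
	"os"
)

func main() {
	// With envFrom, every key of kubeshark-config-map and kubeshark-secret becomes
	// an environment variable in the hub container, so reading them is plain os.Getenv.
	podRegex := os.Getenv("POD_REGEX")
	namespaces := os.Getenv("NAMESPACES")
	license := os.Getenv("LICENSE") // injected from the Secret
	fmt.Printf("POD_REGEX=%q NAMESPACES=%q license set=%v\n",
		podRegex, namespaces, license != "")
}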
---
# Source: kubeshark/templates/06-front-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubeshark-front
namespace: default
labels:
app.kubeshark.co/app: front
helm.sh/chart: kubeshark-50.3
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "50.3"
app.kubernetes.io/managed-by: Helm
annotations:
spec:
containers:
- env:
- name: REACT_APP_DEFAULT_FILTER
value: ' '
- name: REACT_APP_HUB_HOST
value: ' '
- name: REACT_APP_HUB_PORT
value: ':8898'
image: 'docker.io/kubeshark/front:latest'
imagePullPolicy: Always
name: kubeshark-front
readinessProbe:
failureThreshold: 3
periodSeconds: 1
successThreshold: 1
tcpSocket:
port: 80
timeoutSeconds: 1
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoExecute
operator: Exists
- effect: NoSchedule
operator: Exists
status: {}
replicas: 1 # Set the desired number of replicas
selector:
matchLabels:
app.kubeshark.co/app: front
template:
metadata:
labels:
app.kubeshark.co/app: front
spec:
containers:
- env:
- name: REACT_APP_DEFAULT_FILTER
value: ' '
- name: REACT_APP_HUB_HOST
value: ' '
- name: REACT_APP_HUB_PORT
value: ':8899/api'
image: 'docker.io/kubeshark/front:latest'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
readinessProbe:
periodSeconds: 1
failureThreshold: 3
successThreshold: 1
initialDelaySeconds: 3
tcpSocket:
port: 80
timeoutSeconds: 1
resources:
limits:
cpu: 750m
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
readOnly: true
volumes:
- name: nginx-config
configMap:
name: kubeshark-nginx-config-map
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: kubeshark-service-account

View File

@@ -2,23 +2,32 @@ package utils
import (
"bytes"
"encoding/json"
"gopkg.in/yaml.v3"
)
const (
empty = ""
tab = "\t"
)
func PrettyYaml(data interface{}) (result string, err error) {
var marshalled []byte
marshalled, err = json.Marshal(data)
if err != nil {
return
}
var unmarshalled interface{}
err = json.Unmarshal(marshalled, &unmarshalled)
if err != nil {
return
}
func PrettyYaml(data interface{}) (string, error) {
buffer := new(bytes.Buffer)
encoder := yaml.NewEncoder(buffer)
encoder.SetIndent(2)
err := encoder.Encode(data)
err = encoder.Encode(unmarshalled)
if err != nil {
return empty, err
return
}
return buffer.String(), nil
result = buffer.String()
return
}
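
The rewritten PrettyYaml round-trips the value through encoding/json before YAML-encoding it, so the struct's json tags (rather than Go field names) determine the output keys. A self-contained sketch of the same flow; the tapConfig struct and its values are hypothetical, used only for illustration:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"

	"gopkg.in/yaml.v3"
)

// Hypothetical config struct; only the json tags matter for the emitted keys.
type tapConfig struct {
	Docker struct {
		Registry string `json:"registry"`
		Tag      string `json:"tag"`
	} `json:"docker"`
	ProxyHost string `json:"proxyHost"`
}

// Same shape as the PrettyYaml above: marshal to JSON, unmarshal into a generic
// value, then YAML-encode it with a two-space indent.
func prettyYaml(data interface{}) (string, error) {
	marshalled, err := json.Marshal(data)
	if err != nil {
		return "", err
	}
	var unmarshalled interface{}
	if err := json.Unmarshal(marshalled, &unmarshalled); err != nil {
		return "", err
	}
	buffer := new(bytes.Buffer)
	encoder := yaml.NewEncoder(buffer)
	encoder.SetIndent(2)
	if err := encoder.Encode(unmarshalled); err != nil {
		return "", err
	}
	return buffer.String(), nil
}

func main() {
	var c tapConfig
	c.Docker.Registry = "docker.io/kubeshark"
	c.Docker.Tag = "latest"
	c.ProxyHost = "127.0.0.1"
	out, err := prettyYaml(c)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
	// docker:
	//   registry: docker.io/kubeshark
	//   tag: latest
	// proxyHost: 127.0.0.1
}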