mirror of
https://github.com/kubeshark/kubeshark.git
synced 2026-04-09 04:09:08 +00:00
Compare commits
15 Commits
Comparing `fix-kfl-la` … `dissection`:

- 4982bf9e01
- fa03da2fd4
- a005ef8f58
- 5c02f79f07
- dc5b4487df
- a4df20d651
- 4de0ac6abd
- 9b5ac2821f
- 1ba6ed94e0
- 4695acb41e
- b80723edfb
- ddc2e57f12
- e80fc3319b
- 868b4c1f36
- c63740ec45
Makefile: 16 changed lines
```diff
@@ -268,7 +268,7 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
 	@cd ../hub && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
 	@cd ../front && git checkout master && git pull && git tag -d v$(VERSION); git tag v$(VERSION) && git push origin --tags
 	@cd ../kubeshark && git checkout master && git pull
-	@sed -i "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml
+	@sed -i '' "s/^version:.*/version: \"$(shell echo $(VERSION) | sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+)\..*/\1/')\"/" helm-chart/Chart.yaml
 	@$(MAKE) build VER=$(VERSION)
 	@if [ "$(shell uname)" = "Darwin" ]; then \
 		codesign --sign - --force --preserve-metadata=entitlements,requirements,flags,runtime ./bin/kubeshark__; \
@@ -282,10 +282,12 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
 		--body "Automated release PR for v$(VERSION)." \
 		--base master \
 		--reviewer corest
-	@rm -rf ../kubeshark.github.io/charts/chart
-	@mkdir ../kubeshark.github.io/charts/chart
-	@cp -r helm-chart/ ../kubeshark.github.io/charts/chart/
-	@cd ../kubeshark.github.io && git checkout master && git pull \
-	@git checkout master && git pull
+	@cd ../kubeshark.github.io \
+		&& git checkout master && git pull \
+		&& rm -rf charts/chart \
+		&& mkdir charts/chart \
+		&& cp -r ../kubeshark/helm-chart/ charts/chart/ \
+		&& git checkout -b helm-v$(VERSION) \
+		&& git add -A . \
+		&& git commit -m ":sparkles: Update the Helm chart to v$(VERSION)" \
@@ -293,8 +295,8 @@ release-pr: ## Step 1: Tag sibling repos, bump version, create release PR.
 		&& gh pr create --title ":sparkles: Helm chart v$(VERSION)" \
 		--body "Update Helm chart for release v$(VERSION)." \
 		--base master \
-		--reviewer corest
-	@cd ../kubeshark
+		--reviewer corest \
+		&& git checkout master
 	@echo ""
 	@echo "Release PRs created:"
 	@echo "  - kubeshark: Review and merge the release PR."
```
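The `sed -i` change above is a GNU/BSD portability issue: BSD (macOS) `sed` requires an explicit backup-suffix argument after `-i` (empty string for no backup), while GNU `sed` takes the suffix attached or omitted. A portable guard would look like the following sketch (this branching is an illustration, not what the Makefile does; after this change the Makefile uses the BSD form unconditionally):

```shell
# Portable in-place sed: pick the flag form based on the platform.
f=$(mktemp)
printf 'version: "1.0.0"\n' > "$f"
if [ "$(uname)" = "Darwin" ]; then
  sed -i '' 's/^version:.*/version: "2.0.0"/' "$f"   # BSD sed: -i '' means no backup
else
  sed -i 's/^version:.*/version: "2.0.0"/' "$f"      # GNU sed: bare -i
fi
cat "$f"
rm -f "$f"
```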
README.md: 75 changed lines
```diff
@@ -17,17 +17,14 @@
 
 ---
 
-Kubeshark captures cluster-wide network traffic at the speed and scale of Kubernetes, continuously, at the kernel level using eBPF. It consolidates a highly fragmented picture — dozens of nodes, thousands of workloads, millions of connections — into a single, queryable view with full Kubernetes and API context.
+Kubeshark indexes cluster-wide network traffic at the kernel level using eBPF — delivering instant answers to any query using network, API, and Kubernetes semantics.
 
-Network data is available to **AI agents via [MCP](https://docs.kubeshark.com/en/mcp)** and to **human operators via a [dashboard](https://docs.kubeshark.com/en/v2)**.
+**What you can do:**
 
-**What's captured, cluster-wide:**
-
-- **L4 Packets & TCP Metrics** — retransmissions, RTT, window saturation, connection lifecycle, packet loss across every node-to-node path ([TCP insights →](https://docs.kubeshark.com/en/mcp/tcp_insights))
-- **L7 API Calls** — real-time request/response matching with full payload parsing: HTTP, gRPC, GraphQL, Redis, Kafka, DNS ([API dissection →](https://docs.kubeshark.com/en/v2/l7_api_dissection))
-- **Decrypted TLS** — eBPF-based TLS decryption without key management
-- **Kubernetes Context** — every packet and API call resolved to pod, service, namespace, and node
-- **PCAP Retention** — point-in-time raw packet snapshots, exportable for Wireshark ([Snapshots →](https://docs.kubeshark.com/en/v2/traffic_snapshots))
+- **Download Retrospective PCAPs** — cluster-wide packet captures filtered by nodes, time, workloads, and IPs. Store PCAPs for long-term retention and later investigation.
+- **Visualize Network Data** — explore traffic matching queries with API, Kubernetes, or network semantics through a real-time dashboard.
+- **See Encrypted Traffic in Plain Text** — automatically decrypt TLS/mTLS traffic using eBPF, with no key management or sidecars required.
+- **Integrate with AI** — connect your favorite AI assistant (e.g. Claude, Copilot) to include network data in AI-driven workflows like incident response and root cause analysis.
 
```
````diff
@@ -38,9 +35,12 @@ Network data is available to **AI agents via [MCP](https://docs.kubeshark.com/en
 ```bash
 helm repo add kubeshark https://helm.kubeshark.com
 helm install kubeshark kubeshark/kubeshark
+kubectl port-forward svc/kubeshark-front 8899:80
 ```
 
-Dashboard opens automatically. You're capturing traffic.
+Open `http://localhost:8899` in your browser. You're capturing traffic.
+
+> For production use, we recommend using an [ingress controller](https://docs.kubeshark.com/en/ingress) instead of port-forward.
 
 **Connect an AI agent** via MCP:
 
@@ -53,9 +53,9 @@ claude mcp add kubeshark -- kubeshark mcp
 
 ---
 
-### AI-Powered Network Analysis
+### Network Data for AI Agents
 
-Kubeshark exposes all cluster-wide network data via MCP (Model Context Protocol). AI agents can query L4 metrics, investigate L7 API calls, analyze traffic patterns, and run root cause analysis — through natural language. Use cases include incident response, root cause analysis, troubleshooting, debugging, and reliability workflows.
+Kubeshark exposes cluster-wide network data via [MCP](https://docs.kubeshark.com/en/mcp) — enabling AI agents to query traffic, investigate API calls, and perform root cause analysis through natural language.
 
 > *"Why did checkout fail at 2:15 PM?"*
 > *"Which services have error rates above 1%?"*
````
````diff
@@ -68,31 +68,51 @@ Works with Claude Code, Cursor, and any MCP-compatible AI.
 
 [MCP setup guide →](https://docs.kubeshark.com/en/mcp)
 
+### AI Skills
+
+Open-source, reusable skills that teach AI agents domain-specific workflows on top of Kubeshark's MCP tools:
+
+| Skill | Description |
+|-------|-------------|
+| **[Network RCA](skills/network-rca/)** | Retrospective root cause analysis — snapshots, dissection, PCAP extraction, trend comparison |
+| **[KFL](skills/kfl/)** | KFL (Kubeshark Filter Language) expert — writes, debugs, and optimizes traffic filters |
+
+Install as a Claude Code plugin:
+
+```
+/plugin marketplace add kubeshark/kubeshark
+/plugin install kubeshark
+```
+
+Or clone and use directly — skills trigger automatically based on conversation context.
+
+[AI Skills docs →](https://docs.kubeshark.com/en/mcp/skills)
+
 ---
 
````
```diff
-### L7 API Dissection
+### Query with API, Kubernetes, and Network Semantics
 
-Cluster-wide request/response matching with full payloads, parsed according to protocol specifications. HTTP, gRPC, Redis, Kafka, DNS, and more. Every API call resolved to source and destination pod, service, namespace, and node. No code instrumentation required.
+Kubeshark indexes cluster-wide network traffic by parsing it according to protocol specifications, with support for HTTP, gRPC, Redis, Kafka, DNS, and more. A single [KFL query](https://docs.kubeshark.com/en/v2/kfl2) can combine all three semantic layers — Kubernetes identity, API context, and network attributes — to pinpoint exactly the traffic you need. No code instrumentation required.
 
-[Learn more →](https://docs.kubeshark.com/en/v2/l7_api_dissection)
+[KFL reference →](https://docs.kubeshark.com/en/v2/kfl2) · [Traffic indexing →](https://docs.kubeshark.com/en/v2/l7_api_dissection)
```
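As an illustration of combining the three semantic layers in one filter, a KFL expression could look like the following (the exact field names are assumed here for illustration; see the KFL reference linked above for the actual grammar):

```
http and response.status >= 500 and src.namespace == "checkout" and dst.name == "payments"
```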
```diff
-### L4/L7 Workload Map
+### Workload Dependency Map
 
-Cluster-wide view of service communication: dependencies, traffic flow, and anomalies across all nodes and namespaces.
+A visual map of how workloads communicate, showing dependencies, traffic volume, and protocol usage across the cluster.
 
 [Learn more →](https://docs.kubeshark.com/en/v2/service_map)
```
```diff
-### Traffic Retention
+### Traffic Retention & PCAP Export
 
-Continuous raw packet capture with point-in-time snapshots. Export PCAP files for offline analysis with Wireshark or other tools.
+Capture and retain raw network traffic cluster-wide, including decrypted TLS. Download PCAPs scoped by time range, nodes, workloads, and IPs — ready for Wireshark or any PCAP-compatible tool. Store snapshots in cloud storage (S3, Azure Blob, GCS) for long-term retention and cross-cluster sharing.
 
-[Snapshots guide →](https://docs.kubeshark.com/en/v2/traffic_snapshots)
+[Snapshots guide →](https://docs.kubeshark.com/en/v2/traffic_snapshots) · [Cloud storage →](https://docs.kubeshark.com/en/snapshots_cloud_storage)
 
 ---
 
```
```diff
@@ -100,13 +120,12 @@ Continuous raw packet capture with point-in-time snapshots. Export PCAP files fo
 
 | Feature | Description |
 |---------|-------------|
 | [**Raw Capture**](https://docs.kubeshark.com/en/v2/raw_capture) | Continuous cluster-wide packet capture with minimal overhead |
-| [**Traffic Snapshots**](https://docs.kubeshark.com/en/v2/traffic_snapshots) | Point-in-time snapshots, export as PCAP for Wireshark |
-| [**L7 API Dissection**](https://docs.kubeshark.com/en/v2/l7_api_dissection) | Request/response matching with full payloads and protocol parsing |
+| [**Traffic Snapshots**](https://docs.kubeshark.com/en/v2/traffic_snapshots) | Point-in-time snapshots with cloud storage (S3, Azure Blob, GCS), PCAP export for Wireshark |
+| [**Traffic Indexing**](https://docs.kubeshark.com/en/v2/l7_api_dissection) | Real-time and delayed L7 indexing with request/response matching and full payloads |
 | [**Protocol Support**](https://docs.kubeshark.com/en/protocols) | HTTP, gRPC, GraphQL, Redis, Kafka, DNS, and more |
-| [**TLS Decryption**](https://docs.kubeshark.com/en/encrypted_traffic) | eBPF-based decryption without key management |
-| [**AI-Powered Analysis**](https://docs.kubeshark.com/en/v2/ai_powered_analysis) | Query cluster-wide network data with Claude, Cursor, or any MCP-compatible AI |
-| [**Display Filters**](https://docs.kubeshark.com/en/v2/kfl2) | Wireshark-inspired display filters for precise traffic analysis |
+| [**TLS Decryption**](https://docs.kubeshark.com/en/encrypted_traffic) | eBPF-based decryption without key management, included in snapshots |
+| [**AI Integration**](https://docs.kubeshark.com/en/mcp) | MCP server + open-source AI skills for network RCA and traffic filtering |
+| [**KFL Query Language**](https://docs.kubeshark.com/en/v2/kfl2) | CEL-based query language with Kubernetes, API, and network semantics |
 | [**100% On-Premises**](https://docs.kubeshark.com/en/air_gapped) | Air-gapped support, no external dependencies |
 
 ---
 
```
```diff
@@ -86,9 +86,9 @@ type mcpContent struct {
 }
 
 type mcpPrompt struct {
-	Name string `json:"name"`
-	Description string `json:"description,omitempty"`
-	Arguments []mcpPromptArg `json:"arguments,omitempty"`
+	Name        string         `json:"name"`
+	Description string         `json:"description,omitempty"`
+	Arguments   []mcpPromptArg `json:"arguments,omitempty"`
 }
 
 type mcpPromptArg struct {
@@ -117,11 +117,11 @@ type mcpGetPromptResult struct {
 // Hub MCP API response types
 
 type hubMCPResponse struct {
-	Name string `json:"name"`
-	Description string `json:"description"`
-	Version string `json:"version"`
-	Tools []hubMCPTool `json:"tools"`
-	Prompts []hubMCPPrompt `json:"prompts"`
+	Name        string         `json:"name"`
+	Description string         `json:"description"`
+	Version     string         `json:"version"`
+	Tools       []hubMCPTool   `json:"tools"`
+	Prompts     []hubMCPPrompt `json:"prompts"`
 }
 
 type hubMCPTool struct {
@@ -131,9 +131,9 @@ type hubMCPTool struct {
 }
 
 type hubMCPPrompt struct {
-	Name string `json:"name"`
-	Description string `json:"description,omitempty"`
-	Arguments []hubMCPPromptArg `json:"arguments,omitempty"`
+	Name        string            `json:"name"`
+	Description string            `json:"description,omitempty"`
+	Arguments   []hubMCPPromptArg `json:"arguments,omitempty"`
 }
 
 type hubMCPPromptArg struct {
@@ -151,10 +151,10 @@ type mcpServer struct {
 	stdout io.Writer
 	backendInitialized bool
 	backendMu sync.Mutex
-	setFlags []string // --set flags to pass to 'kubeshark tap' when starting
-	directURL string // If set, connect directly to this URL (no kubectl/proxy)
-	urlMode bool // True when using direct URL mode
-	allowDestructive bool // If true, enable start/stop tools
+	setFlags         []string // --set flags to pass to 'kubeshark tap' when starting
+	directURL        string   // If set, connect directly to this URL (no kubectl/proxy)
+	urlMode          bool     // True when using direct URL mode
+	allowDestructive bool     // If true, enable start/stop tools
 	cachedHubMCP *hubMCPResponse // Cached tools/prompts from Hub
 	cachedAt time.Time // When the cache was populated
 	hubMCPMu sync.Mutex
@@ -772,7 +772,6 @@ func (s *mcpServer) callHubTool(toolName string, args map[string]any) (string, b
 	return prettyJSON.String(), false
 }
 
-
 func (s *mcpServer) callGetFileURL(args map[string]any) (string, bool) {
 	filePath, _ := args["path"].(string)
 	if filePath == "" {
@@ -869,8 +868,8 @@ func (s *mcpServer) callStartKubeshark(args map[string]any) (string, bool) {
 	// Add namespaces if provided
 	if v, ok := args["namespaces"].(string); ok && v != "" {
-		namespaces := strings.Split(v, ",")
-		for _, ns := range namespaces {
+		namespaces := strings.SplitSeq(v, ",")
+		for ns := range namespaces {
 			ns = strings.TrimSpace(ns)
 			if ns != "" {
 				cmdArgs = append(cmdArgs, "-n", ns)
```
```diff
@@ -417,7 +417,7 @@ func TestMCP_CommandArgs(t *testing.T) {
 			cmdArgs = append(cmdArgs, v)
 		}
 		if v, _ := tc.args["namespaces"].(string); v != "" {
-			for _, ns := range strings.Split(v, ",") {
+			for ns := range strings.SplitSeq(v, ",") {
 				cmdArgs = append(cmdArgs, "-n", strings.TrimSpace(ns))
 			}
 		}
```
```diff
@@ -40,9 +40,11 @@ type Readiness struct {
 }
 
 var ready *Readiness
+var proxyOnce sync.Once
 
 func tap() {
 	ready = &Readiness{}
+	proxyOnce = sync.Once{}
 	state.startTime = time.Now()
 	log.Info().Str("registry", config.Config.Tap.Docker.Registry).Str("tag", config.Config.Tap.Docker.Tag).Msg("Using Docker:")
 
@@ -147,11 +149,21 @@ func printNoPodsFoundSuggestion(targetNamespaces []string) {
 	log.Warn().Msg(fmt.Sprintf("Did not find any currently running pods that match the regex argument, %s will automatically target matching pods if any are created later%s", misc.Software, suggestionStr))
 }
 
+func isPodReady(pod *core.Pod) bool {
+	for _, condition := range pod.Status.Conditions {
+		if condition.Type == core.PodReady {
+			return condition.Status == core.ConditionTrue
+		}
+	}
+	return false
+}
+
 func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
 	podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.HubPodName))
 	podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
 	eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
-	isPodReady := false
+	podReady := false
+	podRunning := false
 
 	timeAfter := time.After(120 * time.Second)
 	for {
@@ -183,26 +195,30 @@ func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, c
 				Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
 				Msg("Watching pod.")
 
-			if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
-				isPodReady = true
+			if isPodReady(modifiedPod) && !podReady {
+				podReady = true
 				ready.Lock()
 				ready.Hub = true
 				ready.Unlock()
 				log.Info().Str("pod", kubernetes.HubPodName).Msg("Ready.")
+			} else if modifiedPod.Status.Phase == core.PodRunning && !podRunning {
+				podRunning = true
+				log.Info().Str("pod", kubernetes.HubPodName).Msg("Waiting for readiness...")
 			}
 
 			ready.Lock()
-			proxyDone := ready.Proxy
 			hubPodReady := ready.Hub
 			frontPodReady := ready.Front
 			ready.Unlock()
 
-			if !proxyDone && hubPodReady && frontPodReady {
-				ready.Lock()
-				ready.Proxy = true
-				ready.Unlock()
-				postFrontStarted(ctx, kubernetesProvider, cancel)
+			if hubPodReady && frontPodReady {
+				proxyOnce.Do(func() {
+					ready.Lock()
+					ready.Proxy = true
+					ready.Unlock()
+					postFrontStarted(ctx, kubernetesProvider, cancel)
+				})
 			}
 		case kubernetes.EventBookmark:
 			break
@@ -223,7 +239,7 @@ func watchHubPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, c
 			cancel()
 
 		case <-timeAfter:
-			if !isPodReady {
+			if !podReady {
 				log.Error().
 					Str("pod", kubernetes.HubPodName).
 					Msg("Pod was not ready in time.")
@@ -242,7 +258,8 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
 	podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.FrontPodName))
 	podWatchHelper := kubernetes.NewPodWatchHelper(kubernetesProvider, podExactRegex)
 	eventChan, errorChan := kubernetes.FilteredWatch(ctx, podWatchHelper, []string{config.Config.Tap.Release.Namespace}, podWatchHelper)
-	isPodReady := false
+	podReady := false
+	podRunning := false
 
 	timeAfter := time.After(120 * time.Second)
 	for {
@@ -274,25 +291,29 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
 				Interface("containers-statuses", modifiedPod.Status.ContainerStatuses).
 				Msg("Watching pod.")
 
-			if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
-				isPodReady = true
+			if isPodReady(modifiedPod) && !podReady {
+				podReady = true
 				ready.Lock()
 				ready.Front = true
 				ready.Unlock()
 				log.Info().Str("pod", kubernetes.FrontPodName).Msg("Ready.")
+			} else if modifiedPod.Status.Phase == core.PodRunning && !podRunning {
+				podRunning = true
+				log.Info().Str("pod", kubernetes.FrontPodName).Msg("Waiting for readiness...")
 			}
 
 			ready.Lock()
-			proxyDone := ready.Proxy
 			hubPodReady := ready.Hub
 			frontPodReady := ready.Front
 			ready.Unlock()
 
-			if !proxyDone && hubPodReady && frontPodReady {
-				ready.Lock()
-				ready.Proxy = true
-				ready.Unlock()
-				postFrontStarted(ctx, kubernetesProvider, cancel)
+			if hubPodReady && frontPodReady {
+				proxyOnce.Do(func() {
+					ready.Lock()
+					ready.Proxy = true
+					ready.Unlock()
+					postFrontStarted(ctx, kubernetesProvider, cancel)
+				})
 			}
 		case kubernetes.EventBookmark:
 			break
@@ -312,7 +333,7 @@ func watchFrontPod(ctx context.Context, kubernetesProvider *kubernetes.Provider,
 			Msg("Failed creating pod.")
 
 		case <-timeAfter:
-			if !isPodReady {
+			if !podReady {
 				log.Error().
 					Str("pod", kubernetes.FrontPodName).
 					Msg("Pod was not ready in time.")
@@ -429,9 +450,6 @@ func postFrontStarted(ctx context.Context, kubernetesProvider *kubernetes.Provid
 		watchScripts(ctx, kubernetesProvider, false)
 	}
 
-	if config.Config.Scripting.Console {
-		go runConsoleWithoutProxy()
-	}
 }
 
 func updateConfig(kubernetesProvider *kubernetes.Provider) {
```
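The readiness fix above routes the one-time proxy start through `sync.Once`: both the hub and front pod watchers can observe "hub ready and front ready" concurrently, and `proxyOnce.Do` guarantees `postFrontStarted` runs exactly once regardless of which watcher gets there first. A minimal sketch of the pattern:

```go
package main

import (
	"fmt"
	"sync"
)

// startOnce simulates two pod watchers racing to start the proxy.
// sync.Once serializes the calls and runs the body exactly once.
func startOnce() int {
	var once sync.Once
	var wg sync.WaitGroup
	started := 0
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			once.Do(func() { started++ }) // only one goroutine executes this
		}()
	}
	wg.Wait()
	return started
}

func main() {
	fmt.Println(startOnce()) // 1
}
```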
```diff
@@ -128,6 +128,7 @@ func CreateDefaultConfig() ConfigStruct {
 			"http",
 			"icmp",
 			"kafka",
+			"mongodb",
 			"redis",
 			// "sctp",
 			// "syscall",
@@ -147,6 +148,7 @@ func CreateDefaultConfig() ConfigStruct {
 			HTTP:     []uint16{80, 443, 8080},
 			AMQP:     []uint16{5671, 5672},
 			KAFKA:    []uint16{9092},
+			MONGODB:  []uint16{27017},
 			REDIS:    []uint16{6379},
 			LDAP:     []uint16{389},
 			DIAMETER: []uint16{3868},
@@ -282,6 +282,7 @@ type PortMapping struct {
 	HTTP     []uint16 `yaml:"http" json:"http"`
 	AMQP     []uint16 `yaml:"amqp" json:"amqp"`
 	KAFKA    []uint16 `yaml:"kafka" json:"kafka"`
+	MONGODB  []uint16 `yaml:"mongodb" json:"mongodb"`
 	REDIS    []uint16 `yaml:"redis" json:"redis"`
 	LDAP     []uint16 `yaml:"ldap" json:"ldap"`
 	DIAMETER []uint16 `yaml:"diameter" json:"diameter"`
@@ -353,8 +354,10 @@ type SnapshotsConfig struct {
 }
 
 type DelayedDissectionConfig struct {
-	CPU    string `yaml:"cpu" json:"cpu" default:"1"`
-	Memory string `yaml:"memory" json:"memory" default:"4Gi"`
+	CPU          string `yaml:"cpu" json:"cpu" default:"1"`
+	Memory       string `yaml:"memory" json:"memory" default:"4Gi"`
+	StorageSize  string `yaml:"storageSize" json:"storageSize" default:""`
+	StorageClass string `yaml:"storageClass" json:"storageClass" default:""`
 }
 
 type DissectionConfig struct {
```
```diff
@@ -1,6 +1,6 @@
 apiVersion: v2
 name: kubeshark
-version: "53.1.0"
+version: "53.2.0"
 description: The API Traffic Analyzer for Kubernetes
 home: https://kubeshark.com
 keywords:
```
```diff
@@ -164,6 +164,8 @@ Example for overriding image names:
 | `tap.snapshots.cloud.gcs.credentialsJson` | Service account JSON key. When set, the chart auto-creates a Secret with `SNAPSHOT_GCS_CREDENTIALS_JSON`. | `""` |
 | `tap.delayedDissection.cpu` | CPU allocation for delayed dissection jobs | `1` |
 | `tap.delayedDissection.memory` | Memory allocation for delayed dissection jobs | `4Gi` |
+| `tap.delayedDissection.storageSize` | Storage size for dissection job PVC. When empty, falls back to `tap.snapshots.local.storageSize`. When the resolved value is non-empty, a PVC is created; otherwise an `emptyDir` is used. | `""` |
+| `tap.delayedDissection.storageClass` | Storage class for dissection job PVC. When empty, falls back to `tap.snapshots.local.storageClass`. | `""` |
 | `tap.release.repo` | URL of the Helm chart repository | `https://helm.kubeshark.com` |
 | `tap.release.name` | Helm release name | `kubeshark` |
 | `tap.release.namespace` | Helm release namespace | `default` |
```
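A minimal values override illustrating the fallback described above (the sizes and class names here are made-up example values):

```yaml
tap:
  delayedDissection:
    cpu: "2"
    memory: 8Gi
    storageSize: ""     # empty: falls back to tap.snapshots.local.storageSize
    storageClass: ""    # empty: falls back to tap.snapshots.local.storageClass
  snapshots:
    local:
      storageSize: 50Gi   # dissection jobs inherit this PVC size
      storageClass: gp3   # and this storage class
```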
```diff
@@ -86,6 +86,15 @@ rules:
     verbs:
       - create
       - get
+  - apiGroups:
+      - ""
+    resources:
+      - persistentvolumeclaims
+    verbs:
+      - create
+      - get
+      - list
+      - delete
   - apiGroups:
       - batch
     resources:
```
```diff
@@ -56,6 +56,16 @@ spec:
           - -dissector-memory
           - '{{ .Values.tap.delayedDissection.memory }}'
           {{- end }}
+          {{- $dissectorStorageSize := .Values.tap.delayedDissection.storageSize | default .Values.tap.snapshots.local.storageSize }}
+          {{- if $dissectorStorageSize }}
+          - -dissector-storage-size
+          - '{{ $dissectorStorageSize }}'
+          {{- end }}
+          {{- $dissectorStorageClass := .Values.tap.delayedDissection.storageClass | default .Values.tap.snapshots.local.storageClass }}
+          {{- if $dissectorStorageClass }}
+          - -dissector-storage-class
+          - '{{ $dissectorStorageClass }}'
+          {{- end }}
           {{- if .Values.tap.gitops.enabled }}
           - -gitops
           {{- end }}
```
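The template above resolves the storage value with sprig's `default` pipeline: the dissection-level value wins, and the snapshot-level value is used only when it is empty. For non-empty strings this behaves like the standard library's `or` template function, which the following stdlib-only sketch uses to mimic the chart logic (the `render` helper and its argument names are illustrative, not part of the chart):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render mimics {{ .Values.tap.delayedDissection.storageSize | default
// .Values.tap.snapshots.local.storageSize }}: the first non-empty string wins.
// text/template's `or` returns its first "truthy" (non-empty) argument.
func render(dissection, snapshot string) string {
	t := template.Must(template.New("arg").Parse(
		`-dissector-storage-size={{ or .Dissection .Snapshot }}`))
	var buf bytes.Buffer
	_ = t.Execute(&buf, map[string]string{
		"Dissection": dissection,
		"Snapshot":   snapshot,
	})
	return buf.String()
}

func main() {
	fmt.Println(render("", "20Gi"))      // falls back to the snapshot size
	fmt.Println(render("100Gi", "20Gi")) // dissection-level value wins
}
```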
```diff
@@ -70,7 +70,7 @@ spec:
             value: '{{- if and (not .Values.demoModeEnabled) (not .Values.tap.capture.dissection.enabled) -}}
                     true
                     {{- else -}}
-                    {{ not (default false .Values.demoModeEnabled) | ternary false true }}
+                    {{ (default false .Values.demoModeEnabled) | ternary false true }}
                     {{- end -}}'
           - name: 'REACT_APP_CLOUD_LICENSE_ENABLED'
             value: '{{- if or (and .Values.cloudLicenseEnabled (not (empty .Values.license))) (not .Values.internetConnectivity) -}}
```
helm-chart/tests/dissection_storage_test.yaml: new file, 127 lines
```yaml
suite: dissection storage configuration
templates:
  - templates/04-hub-deployment.yaml
tests:
  - it: should fallback to snapshot storageSize when dissection storageSize is empty
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-size
      - contains:
          path: spec.template.spec.containers[0].command
          content: "20Gi"

  - it: should fallback to snapshot storageClass when dissection storageClass is empty
    set:
      tap.snapshots.local.storageClass: gp2
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-class
      - contains:
          path: spec.template.spec.containers[0].command
          content: gp2

  - it: should not render dissector-storage-class when both dissection and snapshot storageClass are empty
    asserts:
      - notContains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-class

  - it: should prefer dissection storageSize over snapshot storageSize
    set:
      tap.delayedDissection.storageSize: 100Gi
      tap.snapshots.local.storageSize: 50Gi
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-size
      - contains:
          path: spec.template.spec.containers[0].command
          content: "100Gi"

  - it: should prefer dissection storageClass over snapshot storageClass
    set:
      tap.delayedDissection.storageClass: io2
      tap.snapshots.local.storageClass: gp2
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-class
      - contains:
          path: spec.template.spec.containers[0].command
          content: io2

  - it: should fallback to snapshot config for both storageSize and storageClass
    set:
      tap.snapshots.local.storageSize: 30Gi
      tap.snapshots.local.storageClass: gp3
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-size
      - contains:
          path: spec.template.spec.containers[0].command
          content: "30Gi"
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-class
      - contains:
          path: spec.template.spec.containers[0].command
          content: gp3

  - it: should not render dissector-storage-size when both dissection and snapshot storageSize are empty
    set:
      tap.delayedDissection.storageSize: ""
      tap.snapshots.local.storageSize: ""
    asserts:
      - notContains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-size

  - it: should render all dissector args together with custom values
    set:
      tap.delayedDissection.cpu: "4"
      tap.delayedDissection.memory: 8Gi
      tap.delayedDissection.storageSize: 200Gi
      tap.delayedDissection.storageClass: local-path
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-cpu
      - contains:
          path: spec.template.spec.containers[0].command
          content: "4"
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-memory
      - contains:
          path: spec.template.spec.containers[0].command
          content: 8Gi
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-size
      - contains:
          path: spec.template.spec.containers[0].command
          content: "200Gi"
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-storage-class
      - contains:
          path: spec.template.spec.containers[0].command
          content: local-path

  - it: should still render existing dissector-cpu and dissector-memory args
    asserts:
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-cpu
      - contains:
          path: spec.template.spec.containers[0].command
          content: "1"
      - contains:
          path: spec.template.spec.containers[0].command
          content: -dissector-memory
      - contains:
          path: spec.template.spec.containers[0].command
          content: 4Gi
```
```diff
@@ -37,6 +37,8 @@ tap:
   delayedDissection:
     cpu: "1"
     memory: 4Gi
+    storageSize: ""
+    storageClass: ""
   snapshots:
     local:
       storageClass: ""
@@ -205,6 +207,7 @@ tap:
       - http
      - icmp
       - kafka
+      - mongodb
       - redis
       - ws
       - ldap
@@ -224,6 +227,8 @@ tap:
         - 5672
       kafka:
         - 9092
+      mongodb:
+        - 27017
       redis:
         - 6379
       ldap:
```
@@ -4,10 +4,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub-network-policy
namespace: default

@@ -33,10 +33,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-front-network-policy

@@ -60,10 +60,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-dex-network-policy

@@ -87,10 +87,10 @@ apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-worker-network-policy

@@ -116,10 +116,10 @@ apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-service-account
namespace: default

@@ -132,10 +132,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
LICENSE: ''

@@ -151,10 +151,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_CRT: |

@@ -167,10 +167,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
stringData:
AUTH_SAML_X509_KEY: |

@@ -182,10 +182,10 @@ metadata:
name: kubeshark-nginx-config-map
namespace: default
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
data:
default.conf: |

@@ -248,10 +248,10 @@ metadata:
namespace: default
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
data:
POD_REGEX: '.*'

@@ -276,9 +276,9 @@ data:
AUTH_OIDC_BYPASS_SSL_CA_CHECK: 'false'
TELEMETRY_DISABLED: 'false'
SCRIPTING_DISABLED: 'false'
TARGETED_PODS_UPDATE_DISABLED: ''
TARGETED_PODS_UPDATE_DISABLED: 'false'
PRESET_FILTERS_CHANGING_ENABLED: 'true'
RECORDING_DISABLED: ''
RECORDING_DISABLED: 'false'
DISSECTION_CONTROL_ENABLED: 'true'
GLOBAL_FILTER: ""
DEFAULT_FILTER: ""

@@ -292,6 +292,8 @@ data:
ENABLED_DISSECTORS: 'amqp,dns,http,icmp,kafka,redis,ws,ldap,radius,diameter,udp-flow,tcp-flow,udp-conn,tcp-conn'
CUSTOM_MACROS: '{"https":"tls and (http or http2)"}'
DISSECTORS_UPDATING_ENABLED: 'true'
SNAPSHOTS_UPDATING_ENABLED: 'true'
DEMO_MODE_ENABLED: 'false'
DETECT_DUPLICATES: 'false'
PCAP_DUMP_ENABLE: 'false'
PCAP_TIME_INTERVAL: '1m'

@@ -306,10 +308,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-cluster-role-default
namespace: default

@@ -353,10 +355,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-cluster-role-binding-default
namespace: default

@@ -374,10 +376,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role

@@ -424,10 +426,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
name: kubeshark-self-config-role-binding

@@ -447,10 +449,10 @@ kind: Service
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub
namespace: default

@@ -468,10 +470,10 @@ apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-front
namespace: default

@@ -489,10 +491,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'

@@ -502,10 +504,10 @@ metadata:
spec:
selector:
app.kubeshark.com/app: worker
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics

@@ -518,10 +520,10 @@ kind: Service
apiVersion: v1
metadata:
labels:
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
annotations:
prometheus.io/scrape: 'true'

@@ -531,10 +533,10 @@ metadata:
spec:
selector:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
ports:
- name: metrics

@@ -549,10 +551,10 @@ metadata:
labels:
app.kubeshark.com/app: worker
sidecar.istio.io/inject: "false"
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: default

@@ -566,10 +568,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: worker
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-worker-daemon-set
namespace: kubeshark

@@ -579,7 +581,7 @@ spec:
- /bin/sh
- -c
- mkdir -p /sys/fs/bpf && mount | grep -q '/sys/fs/bpf' || mount -t bpf bpf /sys/fs/bpf
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: mount-bpf
securityContext:

@@ -618,7 +620,7 @@ spec:
- '500Mi'
- -cloud-api-url
- 'https://api.kubeshark.com'
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: sniffer
ports:

@@ -690,7 +692,7 @@ spec:
- -disable-tls-log
- -loglevel
- 'warning'
image: 'docker.io/kubeshark/worker:v53.1'
image: 'docker.io/kubeshark/worker:v53.2'
imagePullPolicy: Always
name: tracer
env:

@@ -782,10 +784,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-hub
namespace: default

@@ -800,10 +802,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: hub
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
spec:
dnsPolicy: ClusterFirstWithHostNet

@@ -819,9 +821,9 @@ spec:
- -capture-stop-after
- "5m"
- -snapshot-size-limit
- ''
- '20Gi'
- -dissector-image
- 'docker.io/kubeshark/worker:v53.1'
- 'docker.io/kubeshark/worker:v53.2'
- -dissector-cpu
- '1'
- -dissector-memory

@@ -843,7 +845,7 @@ spec:
value: 'production'
- name: PROFILING_ENABLED
value: 'false'
image: 'docker.io/kubeshark/hub:v53.1'
image: 'docker.io/kubeshark/hub:v53.2'
imagePullPolicy: Always
readinessProbe:
periodSeconds: 5

@@ -911,10 +913,10 @@ kind: Deployment
metadata:
labels:
app.kubeshark.com/app: front
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
name: kubeshark-front
namespace: default

@@ -929,10 +931,10 @@ spec:
metadata:
labels:
app.kubeshark.com/app: front
helm.sh/chart: kubeshark-53.1.0
helm.sh/chart: kubeshark-53.2.0
app.kubernetes.io/name: kubeshark
app.kubernetes.io/instance: kubeshark
app.kubernetes.io/version: "53.1.0"
app.kubernetes.io/version: "53.2.0"
app.kubernetes.io/managed-by: Helm
spec:
containers:

@@ -973,13 +975,19 @@ spec:
value: 'false'
- name: REACT_APP_DISSECTORS_UPDATING_ENABLED
value: 'true'
- name: REACT_APP_SNAPSHOTS_UPDATING_ENABLED
value: 'true'
- name: REACT_APP_DEMO_MODE_ENABLED
value: 'false'
- name: REACT_APP_CLUSTER_WIDE_MAP_ENABLED
value: 'false'
- name: REACT_APP_RAW_CAPTURE_ENABLED
value: 'true'
- name: REACT_APP_SENTRY_ENABLED
value: 'false'
- name: REACT_APP_SENTRY_ENVIRONMENT
value: 'production'
image: 'docker.io/kubeshark/front:v53.1'
image: 'docker.io/kubeshark/front:v53.2'
imagePullPolicy: Always
name: kubeshark-front
livenessProbe:

@@ -29,6 +29,31 @@ Unlike real-time monitoring, retrospective analysis lets you go back in time:
reconstruct what happened, compare against known-good baselines, and pinpoint
root causes with full L4/L7 visibility.

## Timezone Handling

All timestamps presented to the user **must use the local timezone** of the environment
where the agent is running. Users think in local time ("this happened around 3pm"), and
UTC-only output adds friction during incident response when speed matters.

### Rules

1. **Detect the local timezone** at the start of every investigation. Use the system
   clock or environment (e.g., `date +%Z` or equivalent) to determine the timezone.
2. **Present local time as the primary reference** in all output — summaries, event
   correlations, time-range references, and tables.
3. **Show UTC in parentheses** for clarity, e.g., `15:03:22 IST (12:03:22 UTC)`.
4. **Convert tool responses** — Kubeshark MCP tools return timestamps in UTC. Always
   convert these to local time before presenting to the user.
5. **Use local time in natural language** — when describing events, say "the spike at
   3:23 PM" not "the spike at 12:23 UTC".
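The presentation rules above are a few lines of code. This is an illustrative helper, not part of any Kubeshark tool; the `Asia/Jerusalem` zone and the `YYYY-MM-DD HH:MM:SS` input format are assumptions for the example:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib, Python 3.9+

def present_local(utc_ts: str, local_tz: str) -> str:
    """Render a UTC tool timestamp as 'HH:MM:SS TZ (HH:MM:SS UTC)'."""
    utc = datetime.strptime(utc_ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    local = utc.astimezone(ZoneInfo(local_tz))
    return f"{local:%H:%M:%S} {local:%Z} ({utc:%H:%M:%S} UTC)"

# Example: a raw UTC timestamp from a tool response, shown local-first
print(present_local("2026-03-14 16:12:34", "Asia/Jerusalem"))
# → 18:12:34 IST (16:12:34 UTC)
```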

### Snapshot Creation

When creating snapshots, Kubeshark MCP tools accept UTC timestamps. Convert the user's
local time references to UTC before passing them to tools like `create_snapshot` or
`export_snapshot_pcap`. Confirm the converted window with the user if there's any
ambiguity.
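The reverse conversion — from the user's local window to the UTC strings the tools expect — is a sketch along these lines. The ISO-8601 output format and the `Asia/Jerusalem` zone are assumptions for illustration, not a documented tool contract:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib, Python 3.9+

def local_window_to_utc(start: str, end: str, local_tz: str) -> tuple[str, str]:
    """Convert a user-supplied local time window to UTC strings."""
    tz = ZoneInfo(local_tz)

    def to_utc(ts: str) -> str:
        local = datetime.strptime(ts, "%Y-%m-%d %H:%M").replace(tzinfo=tz)
        return local.astimezone(ZoneInfo("UTC")).strftime("%Y-%m-%dT%H:%M:%SZ")

    return to_utc(start), to_utc(end)

# "Roughly 7pm to 8pm local" on a machine in Asia/Jerusalem (UTC+2 in March)
print(local_window_to_utc("2026-03-14 19:00", "2026-03-14 20:00", "Asia/Jerusalem"))
# → ('2026-03-14T17:00:00Z', '2026-03-14T18:00:00Z')
```

Confirm the converted window with the user before calling the tool, as noted above.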

## Prerequisites

Before starting any analysis, verify the environment is ready.

@@ -103,6 +128,11 @@ Both routes are valid and complementary. Use PCAP when you need raw packets
for human analysis or compliance. Use Dissection when you want an AI agent
to search and analyze traffic programmatically.

**Default to Dissection.** Unless the user explicitly asks for a PCAP file or
Wireshark export, assume Dissection is needed. Any question about workloads,
APIs, services, pods, error rates, latency, or traffic patterns requires
dissected data.

## Snapshot Operations

Both routes start here. A snapshot is an immutable freeze of all cluster traffic

@@ -116,19 +146,19 @@ Check what raw capture data exists across the cluster. You can only create
snapshots within these boundaries — data outside the window has been rotated
out of the FIFO buffer.

**Example response**:
**Example response** (raw tool output is in UTC — convert to local time before presenting):
```
Cluster-wide:
Oldest: 2026-03-14 16:12:34 UTC
Newest: 2026-03-14 18:05:20 UTC
Oldest: 2026-03-14 18:12:34 IST (16:12:34 UTC)
Newest: 2026-03-14 20:05:20 IST (18:05:20 UTC)

Per node:
┌─────────────────────────────┬──────────┬──────────┐
│ Node                        │ Oldest   │ Newest   │
├─────────────────────────────┼──────────┼──────────┤
│ ip-10-0-25-170.ec2.internal │ 16:12:34 │ 18:03:39 │
│ ip-10-0-32-115.ec2.internal │ 16:13:45 │ 18:05:20 │
└─────────────────────────────┴──────────┴──────────┘
┌─────────────────────────────┬───────────────────────────────┬───────────────────────────────┐
│ Node                        │ Oldest                        │ Newest                        │
├─────────────────────────────┼───────────────────────────────┼───────────────────────────────┤
│ ip-10-0-25-170.ec2.internal │ 18:12:34 IST (16:12:34 UTC)   │ 20:03:39 IST (18:03:39 UTC)   │
│ ip-10-0-32-115.ec2.internal │ 18:13:45 IST (16:13:45 UTC)   │ 20:05:20 IST (18:05:20 UTC)   │
└─────────────────────────────┴───────────────────────────────┴───────────────────────────────┘
```

If the incident falls outside the available window, the data has been rotated

@@ -191,18 +221,48 @@ When you know the workload names but not their IPs, resolve them from the
snapshot's metadata. Snapshots preserve pod-to-IP mappings from capture time,
so resolution is accurate even if pods have been rescheduled since.

**Tool**: `resolve_workload`
**Tool**: `list_workloads`

**Example workflow** — extract PCAP for specific workloads:
Use `list_workloads` with `name` + `namespace` for a singular lookup (works
live and against snapshots), or with `snapshot_id` + filters for a broader
scan.

1. Resolve IPs: `resolve_workload` for `orders-594487879c-7ddxf` → `10.0.53.101`
2. Resolve IPs: `resolve_workload` for `payment-service-6b8f9d-x2k4p` → `10.0.53.205`
**Example workflow — singular lookup** — extract PCAP for specific workloads:

1. Resolve IPs: `list_workloads` with `name: "orders-594487879c-7ddxf"`, `namespace: "prod"` → IPs: `["10.0.53.101"]`
2. Resolve IPs: `list_workloads` with `name: "payment-service-6b8f9d-x2k4p"`, `namespace: "prod"` → IPs: `["10.0.53.205"]`
3. Build BPF: `host 10.0.53.101 or host 10.0.53.205`
4. Export: `export_snapshot_pcap` with that BPF filter

**Example workflow — filtered scan** — extract PCAP for all workloads
matching a pattern in a snapshot:

1. List workloads: `list_workloads` with `snapshot_id`, `namespaces: ["prod"]`,
   `name_regex: "payment.*"` → returns all matching workloads with their IPs
2. Collect all IPs from the response
3. Build BPF: `host 10.0.53.205 or host 10.0.53.210 or ...`
4. Export: `export_snapshot_pcap` with that BPF filter
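The BPF-building step in both workflows is mechanical string-joining; a minimal sketch (the helper name is ours, not a tool):

```python
def bpf_for_hosts(ips: list[str]) -> str:
    """Join resolved workload IPs into a single BPF capture filter."""
    return " or ".join(f"host {ip}" for ip in ips)

print(bpf_for_hosts(["10.0.53.101", "10.0.53.205"]))
# → host 10.0.53.101 or host 10.0.53.205
```

The resulting string is what gets passed to `export_snapshot_pcap` as the BPF filter.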

This gives you a cluster-wide PCAP filtered to exactly the workloads involved
in the incident — ready for Wireshark or long-term storage.

### IP-to-Workload Resolution

When you have an IP address (e.g., from a PCAP or L4 flow) and need to
identify the workload behind it:

**Tool**: `list_ips`

Use `list_ips` with `ip` for a singular lookup (works live and against
snapshots), or with `snapshot_id` + filters for a broader scan.

**Example — singular lookup**: `list_ips` with `ip: "10.0.53.101"`,
`snapshot_id: "snap-abc"` → returns pod/service identity for that IP.

**Example — filtered scan**: `list_ips` with `snapshot_id: "snap-abc"`,
`namespaces: ["prod"]`, `labels: {"app": "payment"}` → returns all IPs
associated with workloads matching those filters.

---

## Route 2: Dissection

@@ -232,7 +292,30 @@ KFL field names differ from what you might expect (e.g., `status_code` not
`response.status`, `src.pod.namespace` not `src.namespace`). Using incorrect
fields produces wrong results without warning.

### Activate Dissection
### Dissection Is Required — Do Not Skip This

**Any question about workloads, Kubernetes resources, services, pods, namespaces,
or API calls requires dissection.** Only the PCAP route works without it. If the
user asks anything about traffic content, API behavior, error rates, latency,
or service-to-service communication, you **must** ensure dissection is active
before attempting to answer.

**Do not wait for dissection to complete on its own — it will not start by itself.**

Follow this sequence every time before using `list_api_calls`, `get_api_call`,
or `get_api_stats`:

1. **Check status**: Call `get_snapshot_dissection_status` (or `list_snapshot_dissections`)
   to see if a dissection already exists for this snapshot.
2. **If dissection exists and is completed** — proceed with your query. No further
   action needed.
3. **If dissection is in progress** — wait for it to complete, then proceed.
4. **If no dissection exists** — you **must** call `start_snapshot_dissection` to
   trigger it. Then monitor progress with `get_snapshot_dissection_status` until
   it completes.

Never assume dissection is running. Never wait for a dissection that was not started.
The agent is responsible for triggering dissection when it is missing.
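The four-step sequence above can be expressed as a small guard an agent runs before any query. This is a sketch only: `client` stands in for whatever interface exposes the MCP tools, and the status values (`None`, `"in_progress"`, `"completed"`) are assumptions about its return shape — only the tool names come from this document:

```python
import time

def ensure_dissection(client, snapshot_id: str, poll_seconds: float = 5.0) -> None:
    """Guarantee a completed dissection before calling list_api_calls et al.

    `client` is a hypothetical wrapper around the Kubeshark MCP tools;
    the interface and status strings are assumed, not documented.
    """
    status = client.get_snapshot_dissection_status(snapshot_id)
    if status == "completed":
        return  # step 2: dissection already done, just query
    if status is None:
        # step 4: nothing exists yet — the agent must trigger it
        client.start_snapshot_dissection(snapshot_id)
    # step 3: monitor until completion, then proceed
    while client.get_snapshot_dissection_status(snapshot_id) != "completed":
        time.sleep(poll_seconds)
```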

**Tool**: `start_snapshot_dissection`

@@ -243,6 +326,27 @@ become available:
- `get_api_call` — Drill into a specific call (headers, body, timing, payload)
- `get_api_stats` — Aggregated statistics (throughput, error rates, latency)

### Every Question Is a Query

**Every user prompt that involves APIs, workloads, services, pods, namespaces,
or Kubernetes semantics should translate into a `list_api_calls` call with an
appropriate KFL filter.** Do not answer from memory or prior results — always
run a fresh query that matches what the user is asking.

Examples of user prompts and the queries they should trigger:

| User says | Action |
|---|---|
| "Show me all 500 errors" | `list_api_calls` with KFL: `http && status_code == 500` |
| "What's hitting the payment service?" | `list_api_calls` with KFL: `dst.service.name == "payment-service"` |
| "Any DNS failures?" | `list_api_calls` with KFL: `dns && status_code != 0` |
| "Show traffic from namespace prod to staging" | `list_api_calls` with KFL: `src.pod.namespace == "prod" && dst.pod.namespace == "staging"` |
| "What are the slowest API calls?" | `list_api_calls` with KFL: `http && elapsed_time > 5000000` |

The user's natural language maps to KFL. Your job is to translate intent into
the right filter and run the query — don't summarize old results or speculate
without fresh data.
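Latency thresholds like the `5000000` above are easy to get wrong by a factor of 1000. Assuming `elapsed_time` is in microseconds — an inference from the 5000000-for-slow-calls example, since the unit is not stated here — a tiny helper keeps the conversion honest:

```python
def slow_calls_kfl(threshold_seconds: float) -> str:
    """Build a KFL filter for HTTP calls slower than the threshold.

    Assumes elapsed_time is in microseconds (inferred from the
    `elapsed_time > 5000000` example, not a documented unit).
    """
    return f"http && elapsed_time > {int(threshold_seconds * 1_000_000)}"

print(slow_calls_kfl(5))
# → http && elapsed_time > 5000000
```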

### Investigation Strategy

Start broad, then narrow:

@@ -255,16 +359,17 @@ Start broad, then narrow:
full payload to understand what went wrong.
4. Use KFL filters to slice by namespace, service, protocol, or any combination.

**Example `list_api_calls` response** (filtered to `http && status_code >= 500`):
**Example `list_api_calls` response** (filtered to `http && status_code >= 500`,
timestamps converted from UTC to local):
```
┌──────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│ Timestamp            │ Method │ URL                      │ Status │ Elapsed   │
├──────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 17:23:45  │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 17:23:46  │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 17:23:48  │ GET    │ /api/v1/inventory/check  │ 500    │ 8,210 ms  │
│ 2026-03-14 17:24:01  │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
┌──────────────────────────────────────────┬────────┬──────────────────────────┬────────┬───────────┐
│ Timestamp                                │ Method │ URL                      │ Status │ Elapsed   │
├──────────────────────────────────────────┼────────┼──────────────────────────┼────────┼───────────┤
│ 2026-03-14 19:23:45 IST (17:23:45 UTC)   │ POST   │ /api/v1/orders/charge    │ 503    │ 12,340 ms │
│ 2026-03-14 19:23:46 IST (17:23:46 UTC)   │ POST   │ /api/v1/orders/charge    │ 503    │ 11,890 ms │
│ 2026-03-14 19:23:48 IST (17:23:48 UTC)   │ GET    │ /api/v1/inventory/check  │ 500    │ 8,210 ms  │
│ 2026-03-14 19:24:01 IST (17:24:01 UTC)   │ POST   │ /api/v1/payments/process │ 502    │ 30,000 ms │
└──────────────────────────────────────────┴────────┴──────────────────────────┴────────┴───────────┘
Src: api-gateway (prod) → Dst: payment-service (prod)
```

@@ -305,8 +410,9 @@ conn && conn_state == "open" && conn_local_bytes > 1000000 // High-volume conne
The two routes are complementary. A common pattern:

1. Start with **Dissection** — let the AI agent search and identify the root cause
2. Once you've pinpointed the problematic workloads, use `resolve_workload`
   to get their IPs
2. Once you've pinpointed the problematic workloads, use `list_workloads`
   to get their IPs (singular lookup by name+namespace, or filtered scan
   by namespace/regex/labels against the snapshot)
3. Switch to **PCAP** — export a filtered PCAP of just those workloads for
   Wireshark deep-dive, sharing with the network team, or compliance archival

@@ -319,7 +425,7 @@ The two routes are complementary. A common pattern:
3. `create_snapshot` covering the incident window (add 15 minutes buffer)
4. **Dissection route**: `start_snapshot_dissection` → `get_api_stats` →
   `list_api_calls` → `get_api_call` → follow the dependency chain
5. **PCAP route**: `resolve_workload` → `export_snapshot_pcap` with BPF →
5. **PCAP route**: `list_workloads` → `export_snapshot_pcap` with BPF →
   hand off to Wireshark or archive

### Other Use Cases
